A Web-Based System for Mapping Features into ISO 14649-Compliant Machining Workingsteps

The rapid development of manufacturing and information systems has caused significant changes in manufacturing environments in recent decades. Mass production has given way to flexible manufacturing systems, in which an important characteristic is customized or "on demand" production. In this scenario, a seamless, gap-free information flow becomes a key success factor for enterprises. In this paper we present a framework to support the mapping of features into machining workingsteps compliant with the ISO 14649 standard (known as STEP-NC). The system determines how the features can be made with the available manufacturing resources. Examples of the mapping method are presented for features such as a pocket with a general surface.
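To make the mapping step concrete, the following minimal sketch (in Python, with hypothetical class and attribute names rather than the paper's actual data model) pairs a pocket feature with an ISO 14649-style machining workingstep only when an available tool can produce it:

```python
from dataclasses import dataclass

@dataclass
class PocketFeature:            # simplified manufacturing feature
    name: str
    depth: float                # mm
    corner_radius: float        # mm, smallest concave corner radius

@dataclass
class MillingTool:              # simplified manufacturing resource
    ident: str
    diameter: float             # mm
    flute_length: float         # mm

@dataclass
class MachiningWorkingstep:     # ISO 14649-style pairing of feature and operation
    feature: PocketFeature
    operation: str
    tool: MillingTool

def map_pocket_to_workingstep(feature, tools):
    """Pick the largest tool that can reach the pocket depth and fit its corners."""
    candidates = [t for t in tools
                  if t.flute_length >= feature.depth
                  and t.diameter / 2 <= feature.corner_radius]
    if not candidates:
        return None             # feature cannot be made with the available resources
    tool = max(candidates, key=lambda t: t.diameter)
    return MachiningWorkingstep(feature, "bottom_and_side_rough_milling", tool)

pocket = PocketFeature("pocket_1", depth=12.0, corner_radius=6.0)
tools = [MillingTool("EM10", 10.0, 25.0), MillingTool("EM16", 16.0, 30.0)]
print(map_pocket_to_workingstep(pocket, tools))
```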

Watermark Bit Rate in Diverse Signal Domains

A study of the obtainable watermark data rate for information hiding algorithms is presented in this paper. As the perceptual entropy for wideband monophonic audio signals is in the range of four to five bits per sample, a significant amount of additional information can be inserted into the signal without causing any perceptual distortion. Experimental results showed that transform-domain watermark embedding considerably outperforms watermark embedding in the time domain, and that signal decompositions with a high transform coding gain, such as the wavelet transform, are the most suitable for high data rate information hiding. Keywords: Digital watermarking, information hiding, audio watermarking, watermark data rate.
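As a rough illustration of transform-domain embedding, the sketch below hides one bit per Haar detail coefficient by quantization index modulation; this is an assumed scheme for illustration, not the algorithm evaluated in the paper:

```python
import numpy as np

def haar_forward(x):
    """Single-level Haar transform: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def embed(signal, bits, step=0.02):
    """Quantize each detail coefficient so its quantizer index parity equals the bit."""
    a, d = haar_forward(signal)
    for i, bit in enumerate(bits[: len(d)]):
        q = np.round(d[i] / step)
        if int(q) % 2 != bit:        # force parity to match the hidden bit
            q += 1
        d[i] = q * step
    return haar_inverse(a, d)

def extract(signal, n_bits, step=0.02):
    _, d = haar_forward(signal)
    return [int(np.round(d[i] / step)) % 2 for i in range(n_bits)]

rng = np.random.default_rng(0)
host = rng.normal(0, 0.1, 1024)            # stand-in for an audio frame
payload = list(rng.integers(0, 2, 64))
marked = embed(host, payload)
assert extract(marked, 64) == payload
```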

Wind Tunnel Investigation of the Turbulent Flow around the Panorama Giustinelli Building for VAWT Application

A boundary layer wind tunnel facility was used to conduct experimental measurements of the flow field around a model of the Panorama Giustinelli Building, Trieste (Italy). Information on the main flow structures was obtained by means of flow visualization techniques and compared with numerical predictions of the vortical structures shed above the roof, in order to investigate the optimal positioning of a vertical-axis wind energy conversion system. Good agreement between experimental measurements and numerical predictions was observed.

Digital Hypertexts vs. Traditional Books: An Inquiry into Non-Linearity

The current study begins with an awareness that today's media environment is characterized by technological development and a new way of reading brought about by the introduction of the Internet. The researcher conducted a meta-analysis framed within Technological Determinism to investigate the process of hypertext reading, its differences from linear reading, and the effects such differences can have on people's ways of mentally structuring their world. The relationship between literacy and the comprehension achieved by reading hypertexts is also investigated. The results show that hypertexts are not always user friendly. People experience hyperlinks as interruptions that distract their attention, hindering comprehension and causing disorientation. On one hand, the jumping style of hypertext reading generates interruptions that ultimately make people lose their concentration. On the other hand, hypertexts fascinate people, who would rather read a document in such a format even though the outcome is often frustrating and affects their ability to elaborate and retain information.

Protocol and Method for Preventing Attacks from the Web

Nowadays, computer worms, viruses, and Trojan horses, collectively called malware, have become widespread. A decade ago, malware merely damaged computers by deleting or rewriting important files; recent malware, however, appears designed to earn money. Some malware collects personal information so that malicious parties can obtain secrets such as online banking passwords, evidence for a scandal, or contact addresses related to a target. Moreover, the relationship between money and malware has become more complex: many kinds of malware spawn bots to obtain springboards for further attacks. Meanwhile, the countermeasures available to ordinary Internet users have come up against a blank wall. Pattern matching wastes computer resources, since matching tools must handle the huge number of patterns derived from malware subspecies, and virus construction kits can generate such subspecies automatically. Moreover, metamorphic and polymorphic malware are no longer exceptional. Recently, malware-checking sites have appeared that inspect content on behalf of users' PCs; however, a new type of malicious site has emerged that evades the checks performed by these services. In this paper, existing protocols and methods related to the web are reconsidered in terms of protection from current attacks, and a new protocol and method are proposed for securing the web.

An Approach to Quantum Steganography through a Special SSCE Code

Sending encrypted messages frequently draws the attention of third parties, perhaps prompting attempts to break and reveal the original messages. Steganography is introduced to hide the existence of the communication by concealing a secret message in an appropriate carrier such as text, image, audio, or video. In quantum steganography, the sender (Alice) embeds her steganographic information into the cover and sends it to the receiver (Bob) over a communication channel. Alice and Bob share an algorithm and hide quantum information in the cover. An eavesdropper (Eve) without access to the algorithm cannot detect the existence of the quantum message. In this paper, a text quantum steganography technique is proposed based on the use of the indefinite articles (a) or (an) in conjunction with nonspecific or non-particular nouns in the English language, together with a quantum gate truth table. The authors also introduce a new code representation technique (SSCE - Secret Steganography Code for Embedding) at both ends in order to achieve a high level of security. Before the embedding operation, each character of the secret message is converted to its SSCE value and then embedded into the cover text. Finally, the stego text is formed and transmitted to the receiver. At the receiver side, the reverse operations are carried out to recover the original information.

Quality Evaluation of Compressed MRI Medical Images for Telemedicine Applications

Medical imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and X-ray are used to diagnose disease. These modalities provide flexible means of reviewing anatomical cross-sections and physiological state in different parts of the human body. Raw medical images have huge file sizes and large storage requirements, so their size must be reduced to make them suitable for telemedicine applications. Image compression is therefore a key factor in reducing the bit rate for transmission or storage while maintaining an acceptable reproduction quality, but it is natural to ask how much an image can be compressed while still preserving sufficient information for a given clinical application. Many techniques for achieving data compression have been introduced. In this study, images from three MRI examinations, Brain, Spine, and Knee, were compressed and reconstructed using the wavelet transform. Subjective and objective evaluations were carried out to investigate the clinical information quality of the compressed images. For the objective evaluation, the results show that the PSNR, which indicates the quality of the reconstructed image, ranges from 21.95 dB to 30.80 dB, 27.25 dB to 35.75 dB, and 26.93 dB to 34.93 dB for the Brain, Spine, and Knee images, respectively. For the subjective evaluation, the results show that a compression ratio of 40:1 was acceptable for the brain images, whereas 50:1 was acceptable for the spine and knee images.
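The objective scores quoted above follow the standard PSNR definition; a minimal sketch of that computation is given below (the image and the error model are made up, and simple additive noise stands in for the wavelet codec actually evaluated):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical example: a reconstruction whose error yields a PSNR in the reported range.
rng = np.random.default_rng(1)
brain = rng.integers(0, 256, (256, 256)).astype(np.uint8)
decoded = np.clip(brain + rng.normal(0, 12, brain.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(brain, decoded):.2f} dB")
```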

An Efficient Biometric Cryptosystem using Autocorrelators

Cryptography provides a secure means of information transmission over insecure channels. It authenticates messages based on the key and not on the user, and it requires lengthy keys to encrypt and decrypt messages. These keys can, however, be guessed or cracked, and maintaining and sharing lengthy random keys for enciphering and deciphering is a critical problem in cryptographic systems. A new approach is described for generating a cryptographic key from a person's iris pattern. In biometrics, a template created by a biometric algorithm can be authenticated only by the same person. Among biometric templates, iris features efficiently distinguish individuals and produce fewer false positives in large populations. The distribution of iris codes exhibits low intra-class variability, which allows the cryptosystem to confidently decrypt messages given an exact match of the iris pattern. In the proposed approach, iris features are extracted using multi-resolution wavelets, producing a 135-bit iris code for each subject that is used to encrypt and decrypt messages. Autocorrelators are used to recall the original messages from the partially corrupted data produced by the decryption process. The approach is intended to resolve the repudiation and key management problems. Results were analyzed for both a conventional iris cryptography system (CIC) and a non-repudiation iris cryptography system (NRIC), and show that the new approach provides considerably high authentication accuracy in the enciphering and deciphering processes.
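As an illustration of the recall step, the sketch below implements an autocorrelator as a Hopfield-style associative memory that restores a stored 135-bit iris code from a partially corrupted version; this formulation is an assumption for illustration, since the paper's exact autocorrelator is not reproduced here:

```python
import numpy as np

def train_autocorrelator(patterns):
    """Sum of outer products of bipolar (+1/-1) patterns, with a zero diagonal."""
    w = patterns.T @ patterns
    np.fill_diagonal(w, 0)
    return w

def recall(w, probe, iterations=10):
    """Iteratively drive a corrupted bipolar probe toward the nearest stored pattern."""
    state = probe.copy()
    for _ in range(iterations):
        state = np.where(w @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(2)
iris_code = rng.choice([-1, 1], size=135)           # 135-bit code, as in the paper
w = train_autocorrelator(iris_code[None, :])
corrupted = iris_code.copy()
flip = rng.choice(135, size=20, replace=False)      # corrupt about 15% of the bits
corrupted[flip] *= -1
recovered = recall(w, corrupted)
print("bits recovered:", int(np.sum(recovered == iris_code)), "of 135")
```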

Academic Staff Perceptions of the Value of the Elements of an Online Learning Environment

Based on 276 responses from academic staff in an evaluation of an online learning environment (OLE), this paper identifies those elements of the OLE that were most used and valued by staff, those elements of the OLE that staff most wanted to see improved, and those factors that most contributed to staff perceptions that the use of the OLE enhanced their teaching. The most used and valued elements were core functions, including accessing unit information, accessing lecture/tutorial/lab notes, and reading online discussions. The elements identified as most needing attention related to online assessment: submitting assignments, managing assessment items, and receiving feedback on assignments. Staff felt that using the OLE enhanced their teaching when they were satisfied that their students were able to access and use their learning materials, and when they were satisfied with the professional development they received and were confident with their ability to teach with the OLE.

A Framework for Ranking the Quality of Information on Weblogs

The vast amount of information on the World Wide Web is created and published by many different types of providers. Unlike books and journals, most of this information is not subject to editing or peer review by experts. This lack of quality control and the explosion of web sites make the task of finding quality information on the web especially critical. Meanwhile, new facilities for producing web pages, such as blogs, make this issue more significant because blogs provide simple content management tools that enable non-experts to build easily updatable web diaries or online journals. On the other hand, despite a decade of active research in information quality (IQ), there is still no framework for measuring the information quality of blogs. This paper presents a novel experimental framework for ranking the quality of information on weblogs. The data analysis revealed seven IQ dimensions for weblogs. For each dimension, variables and related coefficients were calculated, so that the presented framework can assess the IQ of weblogs automatically.
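A minimal sketch of the kind of automatic scoring such a framework implies is shown below; the dimension names and coefficients are placeholders, not the seven dimensions and coefficients derived in the study:

```python
# Hypothetical IQ dimensions and coefficients (placeholders, seven in number).
WEIGHTS = {
    "accuracy": 0.25, "timeliness": 0.20, "completeness": 0.15,
    "objectivity": 0.15, "relevance": 0.10, "readability": 0.10, "authority": 0.05,
}

def iq_score(dimension_scores):
    """Weighted sum of per-dimension scores (each assumed normalized to [0, 1])."""
    return sum(WEIGHTS[d] * dimension_scores.get(d, 0.0) for d in WEIGHTS)

blogs = {
    "blog_a": {"accuracy": 0.9, "timeliness": 0.8, "completeness": 0.7,
               "objectivity": 0.6, "relevance": 0.9, "readability": 0.8, "authority": 0.5},
    "blog_b": {"accuracy": 0.5, "timeliness": 0.9, "completeness": 0.4,
               "objectivity": 0.7, "relevance": 0.6, "readability": 0.9, "authority": 0.3},
}
ranking = sorted(blogs, key=lambda b: iq_score(blogs[b]), reverse=True)
print(ranking)
```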

Application of a Company Financial Crisis Early Warning Model: Use of the “Financial Reference Database”

On July 1, 2007, the Taiwan Stock Exchange (TWSE) added a new "Financial Reference Database" to its Market Observation Post System (MOPS) as an investment reference for investors. The database serves as a warning based on the publicly disclosed financial information of listed companies and originally comprised eight indicators. In this paper, the indicators provided by this database are applied to a company financial crisis early warning model to verify whether they forecast financial crises with an accuracy rate as high as the positive results reported by domestic and foreign scholars. A logistic regression model is used for the early warning model: the first model excludes the database indicators and the second model includes them. Companies that experienced a financial crisis are taken as the research sample, and sample data from the periods T-1 and T-2 before the crisis point are used for the empirical analysis. The results show that, among the variables provided by this database, the debt ratio and net worth per share are the best forecasting variables.
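A minimal sketch of the kind of logistic regression model described is given below; the two predictors correspond to the best forecasting variables reported, but the firm data and the resulting coefficients are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative T-1 observations: [debt ratio (%), net worth per share (NT$)]
X = np.array([[85.0, 3.2], [78.0, 5.1], [90.0, 1.8], [40.0, 22.0],
              [35.0, 18.5], [55.0, 12.0], [82.0, 4.0], [30.0, 25.0]])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = financial crisis occurred

model = LogisticRegression().fit(X, y)
new_firm = np.array([[75.0, 6.0]])
print("crisis probability:", model.predict_proba(new_firm)[0, 1])
```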

A New Routing Algorithm: MIRAD

LSP routing is among the prominent issues in MPLS network traffic engineering. The objective of this routing is to increase the number of accepted requests while guaranteeing quality of service (QoS). Requested bandwidth is the most important QoS criterion considered in the literature, and various heuristic algorithms have been proposed in this regard. Many of these algorithms keep flows away from network bottlenecks in order to perform load balancing, which impedes optimal operation of the network. Here, a new routing algorithm, MIRAD, is proposed: using only limited information, namely the network topology and the links' residual bandwidth, and without any knowledge of prospective requests, it provides every request with maximum bandwidth and minimum end-to-end delay via uniform load distribution across the network. Simulation results of the proposed algorithm show better efficiency in comparison with similar algorithms.
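The sketch below illustrates the general idea of bandwidth- and delay-aware path selection with load spreading; it is not the MIRAD weight function itself, which is not specified in the abstract:

```python
import networkx as nx

def route_request(g, src, dst, demand):
    """Shortest path on weights that penalize scarce residual bandwidth and high delay."""
    usable = nx.Graph()
    for u, v, data in g.edges(data=True):
        if data["residual_bw"] >= demand:                       # feasibility check
            weight = data["delay"] + 1.0 / data["residual_bw"]  # illustrative weight
            usable.add_edge(u, v, weight=weight)
    try:
        path = nx.shortest_path(usable, src, dst, weight="weight")
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return None                                             # request rejected
    for u, v in zip(path, path[1:]):                            # reserve bandwidth
        g[u][v]["residual_bw"] -= demand
    return path

g = nx.Graph()
g.add_edge("A", "B", residual_bw=100, delay=1)
g.add_edge("B", "D", residual_bw=100, delay=1)
g.add_edge("A", "C", residual_bw=40, delay=1)
g.add_edge("C", "D", residual_bw=40, delay=1)
print(route_request(g, "A", "D", demand=30))
```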

Fault Localization and Alarm Correlation in Optical WDM Networks

For high-speed networks, providing resilience against failures is an essential requirement. A central goal in designing next-generation optical networks is protecting and restoring high-capacity WDM networks from failures. Quick detection, identification, and restoration make networks more robust and reliable even though failures cannot be avoided. Hence, it is necessary to develop fast, efficient, and dependable fault localization and detection mechanisms. In this paper we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm to analyze the alarms generated by the components of an optical network in the presence of a fault. It uses alarm correlation to reduce the list of suspected components shown to the network operators. Our simulation results show that the proposed algorithms achieve lower blocking probability and delay while attaining higher throughput.
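The alarm-correlation idea can be sketched as follows (illustrative component names and a simplified correlation rule, not the exact algorithm): the components of the lightpaths that raised alarms are intersected, and components on healthy lightpaths are exonerated, shrinking the suspect list shown to the operator:

```python
# Hypothetical lightpaths, each listed as the ordered optical components it traverses.
lightpaths = {
    "lp1": ["tx1", "oxc_A", "fiber_AB", "oxc_B", "fiber_BC", "oxc_C", "rx1"],
    "lp2": ["tx2", "oxc_B", "fiber_BC", "oxc_C", "rx2"],
    "lp3": ["tx3", "oxc_A", "fiber_AD", "oxc_D", "rx3"],
}

def suspected_components(alarming, healthy):
    """Intersect failed paths, then remove components proven good by healthy paths."""
    suspects = set.intersection(*(set(lightpaths[lp]) for lp in alarming))
    for lp in healthy:
        suspects -= set(lightpaths[lp])
    return suspects

# Loss-of-light alarms arrive on lp1 and lp2 while lp3 stays up:
print(suspected_components(["lp1", "lp2"], ["lp3"]))
```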

An AHP-Delphi Multi-Criteria Usage Cases Model with Application to Citrogypsum Decisions, Case Study: Kimia Gharb Gostar Industries Company

Today, the advantage of biotechnology over other technologies, especially for environmental issues, is indisputable. Kimia Gharb Gostar Industries Company, the largest producer of citric acid in the Middle East, applies biotechnology to this end. Citrogypsum is a by-product of citric acid production and a significant residue of the company. This paper summarizes citric acid production and the conditions of citrogypsum production at the company, in addition to defining citrogypsum and its applications worldwide. Based on this information and an evaluation of Iran's present need for citrogypsum, the highest-priority usage was identified, with emphasis on strategy selection and proper planning for self-sufficiency. The Delphi technique was used to elicit expert opinions about criteria for evaluating the usages. The criteria identified by the experts were profitability, production capacity, required investment, marketability, ease of production, and production time. The Analytical Hierarchy Process (AHP) and Expert Choice software were used to compare the alternatives on the criteria derived from the Delphi process.
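The AHP step can be sketched as follows; the pairwise judgments are hypothetical, not the values elicited from the company's experts. The priority weights are the normalized principal eigenvector of the comparison matrix, and the consistency ratio checks the judgments:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights = normalized principal eigenvector; also return consistency ratio."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
    return w, ci / ri

# Hypothetical 3x3 comparison of three citrogypsum usages on the profitability criterion.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
weights, cr = ahp_priorities(A)
print(weights.round(3), "CR =", round(cr, 3))
```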

Clubs Forming on Crazyvote: The Blurred Social Boundary Between Online Communities and the Real World

With the rapid growth and development of information and communication technology, the Internet has played a definite and irreplaceable role in people's social lives in Taiwan, as in other countries. In July 2008, on a general social website, an unexpected phenomenon was noticed: more than one hundred users had started voluntarily forming clubs and holding face-to-face gatherings for specific purposes. This study examines whether teenagers' social contact on the Internet is embedded in their life context, and tries to reveal the social preferences, values, and needs that merge with and influence teenagers' social activities. The study therefore employs multiple user experience research methods, including practical observation and qualitative analysis through contextual inquiries and in-depth interviews. Based on the findings, several design implications for software related to social interaction and cultural inheritance are offered. It is concluded that the inherent values of social behaviors may be a key issue in developing computer-mediated communication or interaction designs in the future.

Underlying Cognitive Complexity Measure Computation with Combinatorial Rules

Measuring the complexity of software has been a persistent problem in software engineering. Complexity measures can be used to predict critical information about the testability, reliability, and maintainability of software systems from automatic analysis of the source code. During the past few years, many complexity measures have been invented based on the emerging discipline of Cognitive Informatics. These software complexity measures, including cognitive functional size, are based on the total cognitive weights of basic control structures such as loops and branches. This paper shows that the existing calculation method can generate different results that are nevertheless algebraically equivalent. However, analysis of the combinatorial meaning of this calculation method reveals a significant flaw in the measure, which also explains why it does not satisfy Weyuker's properties. Based on these findings, improvement directions, such as measure fusion and a cumulative variable counting scheme, are suggested to enhance the effectiveness of cognitive complexity measures.
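The calculation in question combines the cognitive weights of basic control structures, summing the weights of sequential blocks and multiplying the weights of nested ones. A minimal sketch of that convention follows; the weight table uses commonly cited values and should be treated as illustrative:

```python
# Commonly cited cognitive weights of basic control structures (illustrative).
WEIGHTS = {"sequence": 1, "if": 2, "for": 3, "while": 3, "call": 2}

def cognitive_weight(node):
    """node = ("seq", [children])    -> sum of children's weights
       node = (bcs_name, [children]) -> weight(bcs) * weight of its nested body"""
    kind, children = node
    if kind == "seq":
        return sum(cognitive_weight(c) for c in children)
    body = sum(cognitive_weight(c) for c in children) if children else 1
    return WEIGHTS[kind] * body

# A loop containing a branch, followed by a plain statement:
program = ("seq", [("for", [("if", [])]), ("sequence", [])])
print(cognitive_weight(program))   # 3 * 2 + 1 = 7
```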

Diversity and Public Decision Making

Within the realm of e-government, development has moved towards testing new means for democratic decision-making, such as e-panels, electronic discussion forums, and polls. Although such developments seem promising, they are not problem-free, and their outcomes are seldom used in the subsequent formal political procedures. Nevertheless, process models offer promising potential when it comes to structuring and supporting the transparency of decision processes, in order to facilitate the integration of the public into decision-making procedures in a reasonable and manageable way. Based on real-life cases of urban planning processes in Sweden, we present an outline of an integrated framework for public decision making that aims to: a) provide tools for citizens to organize discussion and form opinions; b) enable governments, authorities, and institutions to better analyse these opinions; and c) enable governments to account for this information in planning and societal decision making by employing a process model for structured public decision making.

Protein Graph Partitioning by Mutual Maximization of Cycle Distributions

The classification of protein structure is commonly performed not for the whole protein but for structural domains, i.e., compact functional units preserved during evolution. Hence, a first step towards protein structure classification is the separation of the protein into its domains. We approach the problem of protein domain identification by proposing a novel graph-theoretical algorithm. We represent the protein structure as an undirected, unweighted, and unlabeled graph whose nodes correspond to the secondary structure elements of the protein. This graph is called the protein graph. The domains are then identified as partitions of the graph, corresponding to vertex sets obtained by maximizing an objective function that mutually maximizes the cycle distributions found in the partitions of the graph. Our algorithm does not use any information besides the cycle distribution to find the partitions. If a partition is found, the algorithm is iteratively applied to each of the resulting subgraphs. As a stopping criterion, we numerically calculate a significance level which indicates the stability of the predicted partition against a random rewiring of the protein graph; hence, our algorithm terminates its iterative application automatically. We present results for one- and two-domain proteins and compare them with the domains manually assigned in the SCOP database, and the differences are discussed.
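To make the notion of a cycle distribution concrete, the sketch below computes the cycle-length histogram of each side of a candidate bipartition from a cycle basis; the toy graph and the bipartition are illustrative, and the objective function that mutually maximizes these distributions is not reproduced:

```python
import networkx as nx
from collections import Counter

def cycle_length_distribution(g):
    """Histogram of cycle lengths in a cycle basis of the (sub)graph."""
    return Counter(len(c) for c in nx.cycle_basis(g))

# Toy protein graph: nodes are secondary structure elements, edges are spatial contacts.
g = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4),        # triangle-rich region
              (5, 6), (6, 7), (7, 8), (8, 5)])        # square-shaped region
partition = ({1, 2, 3, 4}, {5, 6, 7, 8})              # candidate domain split
for side in partition:
    print(sorted(side), cycle_length_distribution(g.subgraph(side)))
```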

Efficient CT Image Volume Rendering for Diagnosis

Volume rendering is widely used in medical CT image visualization. Applying 3D visualization to diagnostic applications requires accurate, high-resolution volume rendering. Interpolation is important in medical image processing applications such as image compression and volume resampling. However, it can distort the original image data through edge blurring or blocking effects when image enhancement procedures are applied. In this paper, we propose an adaptive tension control method that exploits gradient information to achieve high-resolution medical image enhancement in volume visualization, where the restored images remain as similar to the original images as possible. The experimental results show that the proposed adaptive tension control improves image quality.
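One reading of gradient-adaptive tension control is a Hermite-spline interpolation whose tension parameter tightens where the local gradient is large, so sharp edges are neither overshot nor blurred. The sketch below is such an interpretation, not the paper's exact formulation:

```python
import numpy as np

def adaptive_tension_interp(samples, factor=4, k=0.5):
    """Upsample 1D data with Hermite segments whose tension rises with local gradient."""
    p = np.asarray(samples, dtype=float)
    grad = np.abs(np.gradient(p))
    tension = np.clip(k * grad / (grad.max() + 1e-9), 0.0, 1.0)   # 0 = smooth, 1 = tight
    m = (1.0 - tension) * np.gradient(p)                          # damped tangents at edges
    out = []
    for i in range(len(p) - 1):
        t = np.linspace(0, 1, factor, endpoint=False)
        h00 = 2*t**3 - 3*t**2 + 1; h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2;    h11 = t**3 - t**2
        out.append(h00*p[i] + h10*m[i] + h01*p[i+1] + h11*m[i+1])
    out.append(p[-1:])
    return np.concatenate(out)

edge = np.array([10., 10., 10., 200., 200., 200.])   # sharp intensity edge in a CT slice row
print(adaptive_tension_interp(edge).round(1))
```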

Influenza Pattern Analysis System through Mining Weblogs

Weblogs are a social resource for discovering and tracking the various types of information written by bloggers. In this paper, we propose a weblog mining technique for identifying influenza trends from posts in which bloggers express their opinions about the disease. To identify the trends, a web crawler performs a search and generates a list of visited links based on a set of influenza keywords. This information is used to implement an analytics reporting system for monitoring and analyzing the pattern and trends of influenza (H1N1). Statistical and graphical analysis reports are generated, and both types of report satisfactorily reflect Malaysians' awareness of the influenza (H1N1) outbreak as expressed through blogs.
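A minimal sketch of the trend-extraction step is shown below; the crawling itself is omitted, and the posts, dates, and keyword set are made up:

```python
import re
from collections import Counter

KEYWORDS = ["influenza", "h1n1", "flu", "selsema babi"]   # illustrative keyword set

# Hypothetical (date, text) pairs as returned by the crawler.
posts = [
    ("2009-08-01", "Schools closed again because of H1N1; everyone is talking about flu."),
    ("2009-08-01", "My neighbour has influenza symptoms."),
    ("2009-08-02", "Selsema babi cases are rising and the influenza vaccine queue is long."),
]

def daily_keyword_counts(posts):
    """Keyword mentions per day: the raw series behind the trend report."""
    counts = Counter()
    for date, text in posts:
        low = text.lower()
        counts[date] += sum(len(re.findall(r"\b" + re.escape(k) + r"\b", low))
                            for k in KEYWORDS)
    return counts

print(daily_keyword_counts(posts))
```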