Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models

We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs), and train it with genetic algorithms to learn and represent dynamical systems. The architecture is trained on a set of strings generated by deterministic finite-state automata, and its generalization performance is observed on a new set of strings that were not present in the training data. In this way, we show that the hybrid HMM/RNN system can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm, and further experiments to determine which weight initializations were best for training. The results show that the hybrid architecture can learn and represent dynamical systems, and that the best training and generalization performance is achieved when the architecture is initialized with random real weight values in the range [-15, 15].
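
As an illustration of the training loop, the following is a minimal sketch of evolving real-valued weight vectors with a genetic algorithm. The [-15, 15] initialization range comes from the abstract, while the selection, crossover, and mutation operators shown here are generic assumptions rather than the paper's exact choices.

```python
import numpy as np

def init_population(pop_size, n_weights, low=-15.0, high=15.0):
    # Random real-valued weights in [-15, 15], the best-performing range.
    return np.random.uniform(low, high, size=(pop_size, n_weights))

def evolve(fitness_fn, pop_size=100, n_weights=50, generations=200,
           p_mut=0.1, sigma=1.0):
    pop = init_population(pop_size, n_weights)
    for _ in range(generations):
        fit = np.array([fitness_fn(w) for w in pop])
        # Tournament selection: the fitter of two random individuals wins.
        pairs = np.random.randint(pop_size, size=(pop_size, 2))
        winners = np.where(fit[pairs[:, 0]] >= fit[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = pop[winners]
        # Uniform crossover between consecutive parents.
        mask = np.random.rand(pop_size, n_weights) < 0.5
        pop = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation on a small fraction of genes.
        mut = np.random.rand(pop_size, n_weights) < p_mut
        pop = pop + mut * np.random.normal(0.0, sigma, pop.shape)
    return pop[np.argmax([fitness_fn(w) for w in pop])]
```

Here `fitness_fn` would decode a weight vector into the hybrid RNN/HMM and return its string-classification accuracy on the training set.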

Coding of DWT Coefficients using Run-length Coding and Huffman Coding for the Purpose of Color Image Compression

In this paper we propose a simple and effective method to compress an image, reducing its size without significantly compromising quality. The Haar wavelet transform is used to transform the original image; after quantization and thresholding of the DWT coefficients, run-length coding and Huffman coding are applied to encode the image. The DWT is the basis of the popular JPEG 2000 technique.
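
A minimal sketch of the coding pipeline follows, assuming one level of the Haar transform and illustrative threshold and quantization-step values; the (value, run-length) pairs produced at the end are what would then be Huffman-coded.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation subband
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail subband
    return ll, lh, hl, hh

def threshold_quantize(coeffs, thresh=5.0, step=2.0):
    c = np.where(np.abs(coeffs) < thresh, 0, coeffs)   # thresholding
    return np.round(c / step).astype(int)              # uniform quantization

def run_length_encode(values):
    """RLE over the flattened coefficients; long zero runs compress well."""
    out, i = [], 0
    flat = values.ravel()
    while i < len(flat):
        j = i
        while j < len(flat) and flat[j] == flat[i]:
            j += 1
        out.append((int(flat[i]), j - i))
        i = j
    return out  # these (value, count) pairs would then be Huffman-coded
```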

Enhanced Conference Organization Based On Correlation of Web Information and Ontology Based Expertise Search

Given the importance of conferences and their constructive role in scholarly discussion, strong organization is required to exploit those discussions and open new horizons. The vast amount of information scattered across the Web makes it difficult to find experts who can play a prominent role in organizing conferences. In this paper we propose a new approach for extracting researchers' information from various Web resources and correlating it in order to confirm its correctness. To validate this approach, we propose a service useful for setting up a conference; its main objective is to find appropriate experts, as well as suitable social events, for a conference. For this application we use Semantic Web technologies such as RDF and ontologies to represent the confirmed information, which is linked to another ontology (a skills ontology) used to represent and compute expertise.
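
As a hedged illustration of the representation step, the following sketch stores a confirmed researcher profile as RDF triples linked to a skills ontology using the rdflib library; the namespaces and property names are invented for illustration and are not the paper's vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical namespaces standing in for the paper's ontologies.
EX = Namespace("http://example.org/conf#")
SKILL = Namespace("http://example.org/skills#")

g = Graph()
researcher = URIRef(EX["alice"])
g.add((researcher, RDF.type, EX.Researcher))
g.add((researcher, EX.name, Literal("Alice Example")))
g.add((researcher, EX.confirmedBy, Literal(2)))          # agreeing Web sources
g.add((researcher, EX.hasSkill, SKILL["semantic_web"]))  # link to skills ontology
print(g.serialize(format="turtle"))
```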

Adaptive Fuzzy Control of Stewart Platform under Actuator Saturation

A novel adaptive fuzzy trajectory tracking algorithm for a Stewart-platform-based motion platform is proposed to compensate for path deviation and degradation of controller performance due to actuator torque limits. The algorithm can be divided into two parts: a real-time trajectory-shaping part and a joint-space adaptive fuzzy controller part. For a reference trajectory in task space, whenever any of the actuators saturates, the desired acceleration of the reference trajectory is modified online using the dynamic model of the motion platform. Meanwhile, an additional action based on the difference between the nominal and modified trajectories is applied in the non-saturated region of the actuators to reduce the path error. Using the modified trajectory as input, the joint-space controller combines a computed-torque controller, a leg velocity observer, and a fuzzy disturbance observer with saturation compensation. It ensures the stability and tracking performance of the controller in the presence of external disturbances and position-only measurement. Simulation results verify the effectiveness of the proposed control scheme.
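
A minimal sketch of the trajectory-shaping idea is given below under strong simplifications: the desired joint acceleration is uniformly scaled so that the computed-torque command stays within the actuator limits. The inverse-dynamics terms M and c_g are assumed to come from the platform's dynamic model, and the uniform-scaling rule is an assumption; the paper's actual online modification may differ.

```python
import numpy as np

def shape_acceleration(M, c_g, qdd_des, tau_max):
    """M: joint-space inertia matrix; c_g: Coriolis/gravity torques;
    qdd_des: desired joint acceleration; tau_max: per-actuator limits."""
    tau = M @ qdd_des + c_g                # computed-torque command
    sat = np.abs(tau) > tau_max
    if not sat.any():
        return qdd_des                     # no actuator saturates
    # Scale the acceleration-dependent torque so the saturated joints
    # land exactly on their limits, then apply the smallest scale.
    s = np.min((np.sign(tau[sat]) * tau_max[sat] - c_g[sat])
               / (tau[sat] - c_g[sat]))
    return qdd_des * np.clip(s, 0.0, 1.0)
```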

Optimizing Feature Selection for Recognizing Handwritten Arabic Characters

Recognition of characters greatly depends upon the features used. Several features of handwritten Arabic characters are selected and discussed, and an off-line recognition system based on the selected features is built. The system is trained and tested with realistic samples of handwritten Arabic characters, and the importance and accuracy of the selected features are evaluated. Recognition based on the selected features gives average accuracies of 88% and 70% for numbers and letters, respectively. Further improvements are achieved by using feature weights based on insights gained from the accuracies of individual features.
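
As one concrete way to use such feature weights, the sketch below lets each feature vote for a class with a weight equal to its standalone accuracy; the feature names and the combination rule are illustrative assumptions, not the paper's exact scheme.

```python
def weighted_classify(feature_predictions, feature_accuracies):
    """feature_predictions: {feature_name: predicted_class};
    feature_accuracies: {feature_name: accuracy on a validation set}."""
    scores = {}
    for name, label in feature_predictions.items():
        # Each feature votes for its predicted class, weighted by accuracy.
        scores[label] = scores.get(label, 0.0) + feature_accuracies[name]
    return max(scores, key=scores.get)

# Hypothetical features of a handwritten digit and their accuracies.
print(weighted_classify({"loops": "5", "dots": "5", "strokes": "7"},
                        {"loops": 0.9, "dots": 0.6, "strokes": 0.7}))  # -> "5"
```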

Classification of Initial Stripe Height Patterns using Radial Basis Function Neural Network for Proportional Gain Prediction

This paper aims to improve the fine lapping process of hard disk drive (HDD) lapping machines by removing material from each slider while keeping the stripe height (SH) variation to a minimum. The standard deviation is the key parameter for evaluating the stripe height variation, and hence it is minimized. In this paper, a design of experiments (DOE) with factorial analysis by two-way analysis of variance (ANOVA) is adopted to obtain statistical information. The results reveal that the initial stripe height pattern affects the final SH variation. Therefore, initial SH classification using a radial basis function neural network is implemented to achieve the proportional gain prediction.
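
A minimal sketch of the radial basis function classifier stage is shown below, with Gaussian basis functions and a least-squares output layer; the choice of centers, the width, and the training rule are generic assumptions, since the paper's exact topology is not reproduced here.

```python
import numpy as np

class RBFNet:
    def __init__(self, centers, sigma=1.0):
        self.centers, self.sigma = np.asarray(centers), sigma
        self.w = None

    def _phi(self, X):
        # Gaussian activations of each sample against each center.
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        return np.exp(-(d ** 2) / (2 * self.sigma ** 2))

    def fit(self, X, y):
        # The linear output layer is solved by least squares.
        self.w, *_ = np.linalg.lstsq(self._phi(np.asarray(X)),
                                     np.asarray(y), rcond=None)
        return self

    def predict(self, X):
        return self._phi(np.asarray(X)) @ self.w
```

In this setting, the inputs would encode initial SH patterns and the outputs the class (or gain) to be predicted.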

The Game of Synchronized Quadromineering

In synchronized games players make their moves simultaneously rather than alternately. Synchronized Quadromineering is the synchronized version of Quadromineering, a variant of the classical two-player combinatorial game Domineering. Experimental results for small m × n boards (with m + n < 15) and some theoretical results for general k × n boards (with k = 4, 5, 6) are presented. Moreover, some further Synchronized Quadromineering variants are also investigated.

A Collusion-Resistant Distributed Signature Delegation Based on Anonymous Mobile Agent

This paper presents a novel method that allows an agent host to delegate its signing power to an anonymous mobile agent in such a way that the mobile agent does not reveal any information about its host's identity and, at the same time, can be authenticated by the service host, hence ensuring fairness of service provision. The solution introduces a verification server to verify the signature generated by the mobile agent in such a way that even if the verification server colludes with the service host, the two parties gain no more information than they already have. The solution comprises three methods: an Agent Signature Key Generation method, an Agent Signature Generation method, and an Agent Signature Verification method. The most notable feature of the solution is that, in addition to allowing secure and anonymous signature delegation, it enables the tracking of malicious mobile agents when a service host is attacked. The security properties of the proposed solution are analyzed, and the solution is compared with the most closely related work.

XML Data Management in Compressed Relational Database

XML is an important standard for data exchange and representation. Since relational databases are mature systems, using them to support XML data can bring several advantages. However, storing XML in a relational database introduces obvious redundancy, wasting disk space, bandwidth, and disk I/O when querying the XML data. For efficient storage and querying of XML, it is necessary to keep the XML data compressed in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this feature. Beyond traditional relational database techniques, additional query processing techniques for compressed relations and for XML-specific structures are presented, including techniques for XQuery processing in the compressed relational database.
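
As a hedged illustration of one way such redundancy can be removed, the sketch below dictionary-encodes each element's root-to-node path, so the relation stores small integers instead of repeated path strings; this is a generic path-encoding scheme, not the paper's actual storage structure.

```python
import xml.etree.ElementTree as ET

def shred(xml_text):
    """Shred an XML document into a path dictionary and a relation of
    (path_id, text) rows; repeated paths share one dictionary entry."""
    path_ids, rows = {}, []
    def walk(node, prefix):
        path = prefix + "/" + node.tag
        pid = path_ids.setdefault(path, len(path_ids))
        rows.append((pid, (node.text or "").strip()))
        for child in node:
            walk(child, path)
    walk(ET.fromstring(xml_text), "")
    return path_ids, rows

paths, table = shred("<a><b>1</b><b>2</b><c><b>3</b></c></a>")
print(paths)   # {'/a': 0, '/a/b': 1, '/a/c': 2, '/a/c/b': 3}
print(table)   # [(0, ''), (1, '1'), (1, '2'), (2, ''), (3, '3')]
```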

Program Camouflage: A Systematic Instruction Hiding Method for Protecting Secrets

This paper proposes an easy-to-use instruction hiding method to protect software from malicious reverse engineering attacks. Given a source program (the original) to be protected, the proposed method (1) takes a modified version of it (the fake) as input, (2) analyzes the differences in assembly code instructions between the original and the fake, and (3) introduces self-modification routines so that fake instructions become correct (i.e., original) instructions just before they are executed and revert to fake ones after they are executed. The proposed method adds a certain amount of security to a program, since the fake instructions in the resultant program confuse attackers, and discovering and removing all the fake instructions and self-modification routines requires significant effort. The method is also easy to use, because all a user has to do is prepare a fake source code by modifying the original source code.
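
The execute-then-revert behaviour can be mimicked in a few lines of Python by swapping function code objects; the paper works at the assembly level with self-modification routines, so this is only an analogy of the mechanism, not the method itself.

```python
def original():
    return 42            # the real computation

def fake():
    return -1            # the decoy left visible in the program

_real_code, _fake_code = original.__code__, fake.__code__

def camouflaged(*args, **kwargs):
    fake.__code__ = _real_code       # "self-modify": install real instructions
    try:
        return fake(*args, **kwargs)
    finally:
        fake.__code__ = _fake_code   # revert to the fake instructions

print(fake())          # -1: an inspector sees only the decoy
print(camouflaged())   # 42: the real behaviour at the moment of execution
print(fake())          # -1: the decoy is back afterwards
```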

On Identity Disclosure Risk Measurement for Shared Microdata

Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies applied to the same dataset. Some entities in the anonymized dataset may have higher identification risks than others. Individuals are more concerned about risks higher than the average and are interested in knowing whether they may be at such higher risk. A notion of overall risk in the above measurement method does not indicate whether some of the involved entities have a higher identity disclosure risk than others. In this paper, we introduce an identity disclosure risk measurement method that not only conveys the overall risk, but also indicates whether some of the members are at higher risk than others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of records whose risk value is higher than the average, and how much larger the higher risk values are compared to the average. We analyze the disclosure risks for different disclosure control techniques applied to the original microdata and present the results.
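
A hedged sketch of such a composite score is given below; it combines the three ingredients the abstract names (average individual risk, the fraction of records above the average, and how much larger those risks are), but the specific combination rule is an assumption, not the paper's formula.

```python
import numpy as np

def composite_risk(individual_risks):
    r = np.asarray(individual_risks, dtype=float)
    avg = r.mean()
    above = r[r > avg]
    frac_above = len(above) / len(r)      # share of higher-risk records
    # How much larger the higher risks are, relative to the average.
    excess = (above.mean() / avg - 1.0) if len(above) else 0.0
    return avg * (1.0 + frac_above * excess)

# A skewed risk distribution raises the score even if the mean is unchanged.
print(composite_risk([0.1, 0.1, 0.1, 0.9]))
```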

A Novel Methodology for Synthesis of Fault Trees from MATLAB-Simulink Model

Fault tree analysis is a well-known method for reliability and safety assessment of engineering systems. In the last three decades, a number of methods have been introduced in the literature for the automatic construction of fault trees; the main difference between these methods is the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system's Simulink model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also discussed.

Decision Making with Dempster-Shafer Theory of Evidence Using Geometric Operators

We study the problem of decision making with a Dempster-Shafer belief structure. We analyze the previous work by Yager on using the ordered weighted averaging (OWA) operator in the aggregation step of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for cases where the smallest value is the best result. We then suggest introducing the ordered weighted geometric (OWG) operator into the Dempster-Shafer framework. In this case, we also discuss aggregation with an ascending order and find that it is strictly necessary, as the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example showing the different results obtained by using the OWA, ascending OWA (AOWA), OWG, and ascending OWG (AOWG) operators.
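
For concreteness, minimal implementations of the four aggregation operators follow; the weights and payoff values in the usage lines are illustrative only. Note how the OWG operator takes a weighted geometric mean of the ordered arguments, which is why it cannot aggregate negative numbers.

```python
import numpy as np

def owa(values, weights, ascending=False):
    """Ordered weighted averaging: weights apply to the sorted arguments."""
    v = np.sort(values)[::1 if ascending else -1]
    return float(np.dot(weights, v))

def owg(values, weights, ascending=False):
    """Ordered weighted geometric: a weighted geometric mean of the
    sorted arguments; requires positive inputs."""
    v = np.sort(values)[::1 if ascending else -1]
    return float(np.prod(v ** np.asarray(weights)))

w = [0.5, 0.3, 0.2]
payoffs = [60.0, 30.0, 10.0]
print(owa(payoffs, w))                  # descending OWA
print(owa(payoffs, w, ascending=True))  # AOWA: for "smaller is better" cases
print(owg(payoffs, w))                  # OWG
print(owg(payoffs, w, ascending=True))  # AOWG
```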

Evaluation of Clustering Based on Preprocessing in Gene Expression Data

Microarrays have become effective, widely used tools in biological and medical research for addressing a wide range of problems, including the classification of disease subtypes and tumors. Many statistical methods are available for analyzing and systematizing these complex data into meaningful information, and one of the main goals in analyzing gene expression data is the detection of samples or genes with similar expression patterns. In this paper, we evaluate and compare the performance of several clustering methods under different data preprocessing strategies, including normalization and noise removal. We also evaluate each of these clustering methods with validation measures on both simulated data and real gene expression data. The results show that clustering methods commonly used in microarray data analysis are affected by the normalization strategy and by the degree of noise in the datasets.
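
A minimal sketch of the evaluation loop is shown below: the same expression matrix is clustered under different preprocessing strategies and compared with a validation measure. K-means and the silhouette score stand in for the method/measure pairs, and the preprocessing variants are common illustrative choices rather than the paper's exact ones.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evaluate(X, k=3):
    results = {}
    for name, Xp in {
        "raw": X,
        "log2": np.log2(X + 1.0),                                # variance stabilizing
        "zscore": (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # per-gene scaling
    }.items():
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xp)
        results[name] = silhouette_score(Xp, labels)  # validation measure
    return results

# Synthetic non-negative "expression" matrix: 60 samples x 20 genes.
print(evaluate(np.abs(np.random.randn(60, 20)) * 10))
```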

Towards Cloud Computing Anatomy

Cloud Computing has recently emerged as a compelling paradigm for managing and delivering services over the Internet. The rise of Cloud Computing is rapidly changing the landscape of information technology and is ultimately turning the long-held promise of utility computing into a reality. As the Cloud Computing paradigm progresses at speed, its concepts and terminology are becoming imprecise and ambiguous, and different technologies are overlapping. It therefore becomes crucial to clarify the key concepts and definitions. In this paper, we present the anatomy of Cloud Computing, covering its essential concepts, prominent characteristics, effects, architectural design, and key technologies, and we differentiate the various service and deployment models. We also discuss the significant challenges and risks that must be tackled in order to guarantee the long-term success of Cloud Computing. The aim of this paper is to provide a better understanding of the anatomy of Cloud Computing and to pave the way for further research in this area.

Cartoon Effect and Ambient Illumination Based Depth Perception Assessment of 3D Video

The monitored 3-dimensional (3D) video experience can be utilized as feedback information to fine-tune service parameters and provide a better service to demanding 3D service customers. The 3D video experience, which includes both video quality and depth perception, is influenced by several contextual and content-related factors (e.g., ambient illumination conditions, content characteristics, etc.) due to the complex nature of 3D video. Therefore, the relevant factors should be taken into account when assessing this experience. In this paper, structural information in the depth map sequences of the 3D video is considered as a content-related factor affecting depth perception assessment. A cartoon-like filter is utilized to abstract the significant depth levels in the depth map sequences and determine the structural information. Moreover, subjective experiments are conducted using 3D videos associated with cartoon-like depth map sequences to investigate the effect of the ambient illumination condition, which is a contextual factor, on depth perception. Using the knowledge gained through this study, 3D video experience metrics can be developed to deliver a better service to 3D video users.
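
A hedged sketch of the cartoon-like abstraction is given below: the depth map is smoothed and then quantized to a few significant depth planes. A simple box blur stands in for the actual filter, and the number of levels is an illustrative assumption.

```python
import numpy as np

def cartoon_depth(depth, levels=8, blur=3):
    d = depth.astype(float)
    # Smoothing stage: a simple box blur over a blur x blur window.
    k = blur
    padded = np.pad(d, k // 2, mode="edge")
    smooth = sum(padded[i:i + d.shape[0], j:j + d.shape[1]]
                 for i in range(k) for j in range(k)) / (k * k)
    # Abstraction stage: quantize to a small number of depth planes.
    lo, hi = smooth.min(), smooth.max()
    q = np.round((smooth - lo) / (hi - lo + 1e-9) * (levels - 1))
    return (q / (levels - 1) * (hi - lo) + lo).astype(depth.dtype)
```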

User Satisfaction and Acceptability of Dialogue Systems for Detecting Counterfeit Drugs

The menace of counterfeit pharmaceuticals has become a major threat to consumers, healthcare providers, drug manufacturers, and governments, and it is a source of public health concern in both developed and developing nations. Several solutions for detecting and authenticating counterfeit drugs have been adopted by different nations of the world. In this article, a dialogue-system-based drug counterfeiting detection system is developed, and the results of the user satisfaction and acceptability evaluation of the system are presented. The results show that users were satisfied with the system and that it was widely accepted as a means of fighting counterfeit drugs.

A Study on Finding Similar Document with Multiple Categories

Searching for similar documents and document management are important topics in text mining, and one of the most important parts of similar-document research is the process of classifying or clustering the documents. In this study, a similar-document search approach is presented that addresses the case of documents belonging to multiple categories (the multiple-categories problem). The proposed method, based on Fuzzy Similarity Classification (FSC), is compared with the Rocchio algorithm and the naive Bayes method, which are widely used in text mining. Empirical results show that the proposed method is quite successful and can be applied effectively. For the second stage, a multiple-categories vector method is used, based on how frequently categories are seen together. Empirical results show that performance almost doubles when the proposed method is compared with the classical approach.
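
As an illustration of the second stage, the sketch below boosts each category's first-stage score by the scores of categories that frequently co-occur with it; the update rule and the example numbers are assumptions, since the abstract specifies only that co-occurrence frequencies are used.

```python
def rerank_with_cooccurrence(scores, cooc, alpha=0.5):
    """scores: {category: first-stage similarity score};
    cooc[a][b]: relative frequency of categories a and b occurring together."""
    boosted = {}
    for c, s in scores.items():
        # Support from other candidate categories weighted by co-occurrence.
        support = sum(scores[o] * cooc.get(c, {}).get(o, 0.0)
                      for o in scores if o != c)
        boosted[c] = s + alpha * support
    return sorted(boosted, key=boosted.get, reverse=True)

# "politics" overtakes "sports" thanks to its co-occurrence with "economy".
print(rerank_with_cooccurrence(
    {"sports": 0.6, "politics": 0.5, "economy": 0.4},
    {"politics": {"economy": 0.8}, "economy": {"politics": 0.8}}))
```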

Identifying New Sequence Features for Exon-Intron Discrimination by Rescaled-Range Frameshift Analysis

To identify discriminative sequence features between exons and introns, a new paradigm, rescaled-range frameshift analysis (RRFA), is proposed. Using RRFA, two new sequence features were discovered, the frameshift sensitivity (FS) and the accumulative penta-mer complexity (APC), which were further integrated into a larger-scale feature, the persistency in anti-mutation (PAM). Feature-validation experiments were performed on six model organisms to test the power of discrimination. All the experimental results strongly support that FS, APC, and PAM are distinguishing features between exons and introns. These newly identified sequence features provide new insights into the sequence composition of genes, and they have great potential to form a new basis for recognizing exon-intron boundaries in gene sequences.
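
As a hedged illustration of the APC idea, the sketch below accumulates the number of distinct penta-mers seen along a sequence and normalizes by the number of windows; the paper's exact definition may differ.

```python
def accumulative_pentamer_complexity(seq, k=5):
    """Count distinct k-mers accumulated along the sequence and return
    the final count normalized by the number of windows."""
    seen, curve = set(), []
    for i in range(len(seq) - k + 1):
        seen.add(seq[i:i + k])
        curve.append(len(seen))           # accumulated distinct k-mers so far
    return (curve[-1] / len(curve)) if curve else 0.0

# A repetitive stretch lowers the normalized complexity.
print(accumulative_pentamer_complexity("ATGCGATATATATATGCCGT"))
```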

Searching for Similar Informational Articles in the Internet Channel

In terms of total online audience, newspapers are the most successful form of online content to date. The online audience for newspapers continues to demand higher-quality services, including personalized news services, so news providers should be able to offer appropriate content to the right users. In this paper, a news article recommender system is proposed based on a user's preferences, observed as he or she visits an Internet news site and reads the published articles. This system helps raise user satisfaction and increase customer loyalty toward the content provider.