Effective Sonar Target Classification via Parallel Structure of Minimal Resource Allocation Network

In this paper, sonar signals are processed using a Minimal Resource Allocation Network (MRAN) and a Probabilistic Neural Network (PNN) to differentiate features commonly encountered in indoor environments. The stability-plasticity behaviors of both networks are investigated. The experimental results show that MRAN has lower network complexity but suffers from higher plasticity than PNN. An enhanced version, called parallel MRAN (pMRAN), is proposed to solve this problem; it is shown to be stable in prediction and to outperform the original MRAN.

File Format of Flow Chart Simulation Software - CFlow

CFlow is flow chart software that provides facilities to draw and evaluate flow charts. Flow chart evaluation applies a simulation method to present the workflow described by a flow chart solution. CFlow's flow chart simulation is executed by manipulating the CFlow data file, which is saved in a graphical vector format. These text-based data are organised using a data classification technique based on a library classification scheme. This paper describes the file format of the CFlow flow chart simulation software.

A Software of Intrusion Detection Mechanism for Virtual Platforms

Security is an important issue for popular virtual platforms, such as virtualization clusters and cloud platforms. Virtualization is a powerful technology for cloud computing services. Virtual machine tools, called hypervisors, offer many benefits: they can quickly deploy many kinds of virtual operating systems on a single platform, control all virtual system resources effectively, reduce the cost of platform deployment, and provide customization, high elasticity, and high reliability. However, some important security problems must be addressed on virtual platforms, including viruses, malicious programs, illegal operations, and intrusion behavior. In this paper, we present Intrusion Detection Mechanism (IDM) software that can not only automatically analyze all system operations against an accounting journal database but also monitor the system state of virtual platforms.

Multi-Scale Gabor Feature Based Eye Localization

Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need improvement in precision and computational time for successful application. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors that is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), consisting of n Gabor jets and the average eye coordinates obtained from n model face images, and then localizes the eyes in an incoming face image by exploiting the fact that the true eye coordinates are most likely very close to the position whose Gabor jet best matches a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in Elastic Bunch Graph Matching (EBGM). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, which causes a much greater computational burden. In this paper, we propose a multi-scale approach with only a slightly increased computational burden: the eyes are first localized, based on Gabor feature vectors, in a coarse face image obtained by down-sampling the original face image, and then localized in the original-resolution face image using the eye coordinates found in the coarse image as initial points. Several experiments and comparisons with other eye localization methods reported in the literature show the efficiency of our proposed method.
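
A minimal Python sketch of the coarse-to-fine idea, assuming a simple jet of Gabor magnitude responses and cosine similarity between jets; the paper's exact filter bank, similarity measure, and search ranges are not specified here:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, freq):
    """Complex Gabor kernel: Gaussian envelope times complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(1j * 2.0 * np.pi * freq * xr)

def gabor_jet(image, cx, cy, kernels):
    """Magnitudes of the Gabor responses at (cx, cy): one jet."""
    half = kernels[0].shape[0] // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return np.array([np.abs(np.sum(patch * k)) for k in kernels])

def jet_similarity(j1, j2):
    """Cosine similarity between two jets of magnitudes."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

def localize(image, model_jets, init, search, kernels):
    """Exhaustive search around `init` for the best-matching jet position."""
    half = kernels[0].shape[0] // 2
    best, best_pos = -np.inf, init
    x0, y0 = init
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            if half <= x < image.shape[1] - half and half <= y < image.shape[0] - half:
                jet = gabor_jet(image, x, y, kernels)
                s = max(jet_similarity(jet, m) for m in model_jets)
                if s > best:
                    best, best_pos = s, (x, y)
    return best_pos

def multiscale_localize(image, model_jets_coarse, model_jets_fine, init, kernels):
    """Search widely on a 2x down-sampled image, then refine at full
    resolution around the up-scaled coarse estimate."""
    coarse = image[::2, ::2]
    cx, cy = localize(coarse, model_jets_coarse,
                      (init[0] // 2, init[1] // 2), search=8, kernels=kernels)
    return localize(image, model_jets_fine, (2 * cx, 2 * cy),
                    search=2, kernels=kernels)

# Example jet bank: 8 orientations x 3 frequencies of 21x21 kernels.
kernels = [gabor_kernel(21, 4.0, t * np.pi / 8, f)
           for t in range(8) for f in (0.1, 0.2, 0.3)]
```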

Design of an Intelligent Location Identification Scheme Based On LANDMARC and BPNs

Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement is that it is cost-effective and requires no extra hardware. Because the accuracy of many positioning schemes using RSSI values is limited by interference and the environment, designing RFID location techniques that integrate positioning algorithms is challenging. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines the factors that affect location accuracy by integrating a backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase uses the coordinates computed by the LANDMARC algorithm from RSSI values, together with the real coordinates of the reference tags, as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of the tracking tags, which are then used as BPN inputs to obtain the location estimates. The results show that the proposed scheme estimates locations more accurately than LANDMARC without requiring extra devices.
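
As an illustration of the two phases, the following is a minimal Python sketch, assuming a standard LANDMARC k-nearest-neighbour estimator, a synthetic log-distance RSSI model, and scikit-learn's MLPRegressor standing in for the paper's BPN (a real setup would use separate training points rather than the reference tags themselves):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stands in for the paper's BPN

def landmarc_estimate(rssi, ref_rssi, ref_xy, k=4):
    """LANDMARC: weight the k reference tags nearest in signal space."""
    e = np.linalg.norm(ref_rssi - rssi, axis=1)   # signal-space distances
    idx = np.argsort(e)[:k]
    w = 1.0 / (e[idx] ** 2 + 1e-9)
    w /= w.sum()
    return w @ ref_xy[idx]                        # weighted coordinates

# Synthetic 5x5 grid of reference tags read by four corner readers.
rng = np.random.default_rng(0)
ref_xy = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
readers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
def fake_rssi(xy):  # log-distance RSSI model, for illustration only
    d = np.linalg.norm(readers - xy, axis=1) + 0.1
    return -40 - 20 * np.log10(d) + rng.normal(0, 0.5, len(readers))
ref_rssi = np.array([fake_rssi(p) for p in ref_xy])

# Training phase: LANDMARC coordinates -> true reference coordinates.
coarse = np.array([landmarc_estimate(r, ref_rssi, ref_xy) for r in ref_rssi])
bpn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
bpn.fit(coarse, ref_xy)                           # learn the correction

# Online phase: LANDMARC coordinates of a tracking tag -> refined estimate.
tag_xy = np.array([2.3, 1.7])
tag_coarse = landmarc_estimate(fake_rssi(tag_xy), ref_rssi, ref_xy)
print("LANDMARC:", tag_coarse, "BPN-refined:", bpn.predict([tag_coarse])[0])
```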

Crash Severity Modeling in Urban Highways Using Backward Regression Method

Identifying and classifying intersections according to severity is very important for implementing safety-related countermeasures, and effective models are needed to compare and assess severity. Highway safety organizations consider intersection safety among their priorities. In spite of significant advances in highway safety, large numbers of severe crashes still occur on highways. Investigating the factors that influence crashes enables engineers to carry out calculations aimed at reducing crash severity. Previous studies lacked a model capable of simultaneously illustrating the influence of human factors, road, vehicle, weather conditions, and traffic features, including traffic volume and flow speed, on crash severity. This paper therefore develops models that illustrate the simultaneous influence of these variables on crash severity in urban highways. The models presented in this study were developed as binary logit models and calibrated in SPSS; the backward regression method in SPSS was used to identify the significant variables. According to the results, the main factors increasing crash severity in urban highways are driver age, movement in reverse gear, technical defects of the vehicle, collisions with motorcycles and bicycles, collisions with bridges, frontal impact collisions, frontal-lateral collisions, and multi-vehicle crashes.
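
The paper's models were built in SPSS; as a rough illustration of the same backward-elimination logic for a binary logit model, here is a minimal Python sketch using statsmodels on synthetic stand-in data (all variable names and coefficients below are invented for the example):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: 1 = severe crash, 0 = non-severe.
rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({
    "driver_age":   rng.integers(18, 80, n),
    "reverse_gear": rng.integers(0, 2, n),
    "tech_defect":  rng.integers(0, 2, n),
    "frontal":      rng.integers(0, 2, n),
    "noise_var":    rng.normal(0, 1, n),   # irrelevant, should be dropped
})
logit_p = 1 / (1 + np.exp(-(-3 + 0.03 * X.driver_age
                            + 1.2 * X.reverse_gear + 0.8 * X.tech_defect
                            + 0.9 * X.frontal)))
y = rng.binomial(1, logit_p)

# Backward elimination: refit, dropping the worst p-value until all < 0.05.
cols = list(X.columns)
while True:
    model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
    pvals = model.pvalues.drop("const")
    if pvals.max() < 0.05:
        break
    cols.remove(pvals.idxmax())
print(model.summary())
```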

Neural Network Based Predictive DTC Algorithm for Induction Motors

In this paper, a neural network based predictive direct torque control (DTC) algorithm is proposed as an alternative to classical approaches. An appropriate feed-forward network is chosen; based on its estimate of the derivative of the electromagnetic torque, the optimal stator voltage vector to be applied to the induction motor (by the inverter) is determined. Moreover, an appropriate torque and flux observer is proposed.
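
As a sketch of the predictive selection step, the following Python code evaluates the eight standard two-level inverter voltage vectors and picks the one whose predicted next-step torque is closest to the reference; `predict_dTe` is a placeholder for the trained feed-forward network, and the state encoding is an assumption:

```python
import numpy as np

def inverter_vectors(vdc):
    """The eight two-level inverter voltage vectors in the alpha-beta frame
    for DC link voltage vdc; the first and last entries are the zero vectors."""
    angles = np.arange(6) * np.pi / 3
    active = 2 / 3 * vdc * np.exp(1j * angles)
    return np.concatenate(([0], active, [0]))

def select_voltage_vector(predict_dTe, state, te_ref, te_now, ts, vdc):
    """Pick the vector whose predicted torque comes closest to the reference.

    predict_dTe(state, v) stands in for the trained feed-forward network
    that estimates the electromagnetic-torque derivative for candidate v.
    """
    best_v, best_err = None, np.inf
    for v in inverter_vectors(vdc):
        te_next = te_now + ts * predict_dTe(state, v)  # one-step prediction
        err = abs(te_ref - te_next)
        if err < best_err:
            best_v, best_err = v, err
    return best_v

# Example with a dummy predictor standing in for the trained network:
dummy = lambda state, v: -0.5 * state["te_err"] + 0.01 * v.real
v_opt = select_voltage_vector(dummy, {"te_err": 1.0}, te_ref=10.0,
                              te_now=9.0, ts=1e-4, vdc=540.0)
print(v_opt)
```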

Courses Pre-Required Visualization Using Force Directed Placement Technique

Visualizing the "Courses-Pre-Required-Architecture" on screen has proven useful and helpful for university actors, and especially for students. Students can easily identify courses and their prerequisites, perceive the courses to follow in the future, and then quickly choose the appropriate course to register for. Given a set of courses and their prerequisites, we present an algorithm for visualizing a graph, entitled the "Courses-Pre-Required-Graph", that presents courses and their prerequisites in order to help students recognize, on their own, which courses to take in the future and perceive the content of all the courses they will study. Our algorithm, using the force-directed placement technique, visualizes the "Courses-Pre-Required-Graph" in such a way that courses are easily identifiable. The time complexity of our drawing algorithm is O(n²), where n is the number of courses in the "Courses-Pre-Required-Graph".
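
As a sketch of the underlying layout step, here is a minimal Fruchterman-Reingold style force-directed placement in Python (the paper's exact force model and parameters are not specified here); the all-pairs repulsion gives the O(n²) cost per iteration:

```python
import numpy as np

def force_directed_layout(n, edges, iters=200, seed=0):
    """Force-directed placement: repulsion between all node pairs (O(n^2)
    per iteration), attraction along prerequisite edges, with cooling."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    k = 1.0 / np.sqrt(n)                              # ideal edge length
    for t in range(iters):
        diff = pos[:, None, :] - pos[None, :, :]      # all pairs: O(n^2)
        dist = np.linalg.norm(diff, axis=2) + 1e-9
        disp = (diff / dist[..., None] * (k**2 / dist)[..., None]).sum(axis=1)
        for u, v in edges:                            # attraction along edges
            d = pos[u] - pos[v]
            f = np.linalg.norm(d) ** 2 / k
            disp[u] -= d / (np.linalg.norm(d) + 1e-9) * f
            disp[v] += d / (np.linalg.norm(d) + 1e-9) * f
        step = 0.1 * (1 - t / iters)                  # cooling schedule
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, step)
    return pos

# e.g. course 2 requires courses 0 and 1; course 3 requires course 2
print(force_directed_layout(4, [(2, 0), (2, 1), (3, 2)]))
```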

High Capacity Data Hiding based on Predictor and Histogram Modification

In this paper, we propose a high-capacity image hiding technique based on pixel prediction and the modified difference histogram. The approach uses pixel prediction and the modified difference histogram to calculate the best embedding point; it improves prediction accuracy and increases the pixel differences to raise the hiding capacity. We also use histogram modification to prevent overflow and underflow. Experimental results demonstrate that, at the same average hiding capacity, our proposed method still maintains high image quality and low distortion.
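
The paper's exact predictor and embedding-point selection are specific to its method; as a minimal sketch of the general idea, the following Python code performs reversible histogram shifting of prediction errors with a simple left-neighbour predictor (overflow/underflow handling, which the paper addresses via histogram modification, is omitted):

```python
import numpy as np

PEAK = 0  # embed at the peak prediction-error bin (0 is typical for images)

def embed(img, bits):
    """Reversible embedding via histogram shifting of prediction errors.

    Each pixel is predicted by its (already processed) left neighbour.
    Errors > PEAK shift right by 1; errors == PEAK carry one payload bit
    (the payload is padded with zeros once exhausted)."""
    out = img.astype(int).copy()
    it = iter(bits)
    for y in range(out.shape[0]):
        for x in range(1, out.shape[1]):
            e = out[y, x] - out[y, x - 1]
            if e > PEAK:
                out[y, x] += 1
            elif e == PEAK:
                out[y, x] += next(it, 0)
    return out  # kept as int to sidestep uint8 wrap-around in this sketch

def extract(stego):
    """Inverse scan: recover both the payload bits and the original image.
    The left-neighbour context is read from the unmodified stego array,
    matching the context the embedder used."""
    s = np.asarray(stego, dtype=int)
    img, bits = s.copy(), []
    for y in range(s.shape[0]):
        for x in range(1, s.shape[1]):
            e = s[y, x] - s[y, x - 1]
            if e == PEAK + 1:
                bits.append(1); img[y, x] -= 1
            elif e == PEAK:
                bits.append(0)
            elif e > PEAK + 1:
                img[y, x] -= 1
    return bits, img
```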

Fast Complex Valued Time Delay Neural Networks

Here, a new idea for speeding up the operation of complex-valued time delay neural networks is presented. The whole data set is collected into a long vector and then tested as one input pattern. The proposed fast complex-valued time delay neural network uses cross correlation in the frequency domain between the tested data and the input weights of the network. It is proved mathematically that the number of computation steps required by the presented fast complex-valued time delay neural network is less than that needed by a classical time delay neural network. Simulation results using MATLAB confirm the theoretical computations.
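
The core identity, that the sliding dot products against the weights equal one cross-correlation computed in the frequency domain, can be checked in a few lines of Python (a sketch of the speed-up idea, not the paper's full network):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 8                       # long input vector, neuron window length
x = rng.normal(size=N) + 1j * rng.normal(size=N)   # complex test data
w = rng.normal(size=L) + 1j * rng.normal(size=L)   # complex input weights

# Classical TDNN: slide the window and take a dot product at each position.
# np.vdot conjugates its first argument: sum_n conj(w[n]) * x[k+n].
slow = np.array([np.vdot(w, x[k:k + L]) for k in range(N - L + 1)])

# Fast version: one circular cross-correlation in the frequency domain.
wp = np.zeros(N, dtype=complex); wp[:L] = w        # zero-pad the weights
fast = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(wp)))[:N - L + 1]

assert np.allclose(slow, fast)     # identical activations, O(N log N) work
```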

Reversible Watermarking on Stereo Image Sequences

In this paper, a new reversible watermarking method is presented that reduces the size of a stereoscopic image sequence while keeping its content visible. The proposed technique embeds the residuals of the right frames into the corresponding frames of the left sequence, halving the total capacity. The residual frames may result from a disparity compensation procedure between the two video streams or from joint motion and disparity compensation. The residuals are usually lossily compressed before embedding because of the limited embedding capacity of the left frames. The watermarked frames remain visible at high quality, and at any instant the stereoscopic video may be recovered by an inverse process. In fact, the left frames may be recovered exactly, whereas the right ones are slightly distorted, as the residuals are not embedded intact. The employed embedding method reorders each left frame into an array of consecutive pixel pairs and embeds a number of bits according to their intensity difference. In this way, it hides few bits in areas of smooth intensity and most of the data in textured areas, where the resulting distortions are less visible. The experimental evaluation demonstrates that the proposed scheme is quite effective.
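
As a sketch of the pixel-pair embedding idea, the following Python code uses classic one-bit difference expansion (Tian's method); the paper's scheme instead varies the number of embedded bits with the pair's intensity difference, but the expand/restore mechanics are analogous:

```python
import numpy as np

def embed_pairs(frame, bits):
    """One-bit difference expansion per consecutive pixel pair.

    Each pair (a, b) is replaced so that its difference becomes 2*d + bit
    while the pair average is preserved; the returned location map records
    which pairs were expanded (a real scheme must store it too)."""
    flat, it, changed = frame.astype(int).ravel(), iter(bits), []
    for i in range(0, len(flat) - 1, 2):
        l, d = (flat[i] + flat[i + 1]) // 2, flat[i] - flat[i + 1]
        if abs(2 * d) + 1 <= min(2 * (255 - l), 2 * l + 1):  # no over/underflow
            dn = 2 * d + next(it, 0)
            flat[i], flat[i + 1] = l + (dn + 1) // 2, l - dn // 2
            changed.append(i)
    return flat.reshape(frame.shape).astype(np.uint8), changed

def extract_pairs(stego, changed):
    """Invert the expansion: recover the bits and the exact original frame."""
    flat, bits = stego.astype(int).ravel(), []
    for i in changed:
        l, dn = (flat[i] + flat[i + 1]) // 2, flat[i] - flat[i + 1]
        bits.append(dn % 2)
        d = dn // 2                                  # floor works for dn < 0
        flat[i], flat[i + 1] = l + (d + 1) // 2, l - d // 2
    return bits, flat.reshape(stego.shape).astype(np.uint8)
```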

An Efficient Algorithm for Reliability Lower Bound of Distributed Systems

The reliability of distributed systems and computer networks has been modeled by a probabilistic network or a graph G. Computing the residual connectedness reliability (RCR), denoted by R(G), under the node fault model is very useful but is an NP-hard problem. Since computing the exact value of R(G) may require time exponential in the network size, it is important to calculate a tight approximation, especially a lower bound, at moderate computational cost. In this paper, we propose an efficient algorithm for a reliability lower bound of distributed systems with unreliable nodes. We also apply our algorithm to several typical classes of networks to evaluate the lower bounds and show its effectiveness.

Eye Location Based on Structure Feature for Driver Fatigue Monitoring

Eye location is one of the most important problems to solve in a driver fatigue monitoring system. This paper presents an efficient method for fast and accurate eye location in grey-level images obtained under real-world driving conditions. The structure of the eye region is used as a robust cue to find possible eye pairs. Eye pair candidates at different scales are selected by finding regions that roughly match a binary eye-pair template. All candidates are then verified using support vector machines to obtain the real pair. Finally, the eyes are precisely located in the eye pair images using binary vertical projection and an eye classifier. The proposed method is robust to illumination changes, moderate rotations, glasses, and different eye states. Experimental results demonstrate its effectiveness.
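
As an illustration of the final projection step only (the template matching and SVM stages are omitted), a minimal Python sketch of binary vertical projection over an eye-pair region might look as follows; the thresholding and peak-picking rules here are assumptions:

```python
import numpy as np

def vertical_projection_eyes(eye_pair_region, thresh=None):
    """Locate the two eyes in a binarized eye-pair region.

    Dark pixels (pupil/iris) accumulate into two column-sum peaks; the
    strongest peak in each half gives a candidate eye x-coordinate, and a
    horizontal projection around that column gives the y-coordinate."""
    g = eye_pair_region.astype(float)
    if thresh is None:
        thresh = g.mean()                      # simple global threshold
    binary = (g < thresh).astype(int)          # 1 = dark pixel
    proj = binary.sum(axis=0)                  # vertical (column) projection
    mid = len(proj) // 2
    left_x = int(np.argmax(proj[:mid]))
    right_x = int(np.argmax(proj[mid:]) + mid)
    ys = []
    for x in (left_x, right_x):
        band = binary[:, max(0, x - 3): x + 4]
        ys.append(int(np.argmax(band.sum(axis=1))))
    return (left_x, ys[0]), (right_x, ys[1])
```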

Evolved Strokes in Non Photo–Realistic Rendering

We describe work with an evolutionary computing algorithm for non-photo-realistic rendering of a target image. The renderings are produced by genetic programming. We use two types of strokes, "empty triangle" and "filled triangle", in color. We compare empty and filled triangular strokes to find which generates more aesthetically pleasing images. We found that filled triangular strokes achieve better fitness and generate more aesthetic images than empty triangular strokes.

Query Optimization Techniques for XML Databases

Over the past few years, XML (eXtensible Markup Language) has emerged as the standard for information representation and data exchange over the Internet. This paper provides a kick-start for new researchers venturing into the XML database field. We survey storage representations for XML documents and review XML query processing and optimization techniques with respect to each particular storage instance. Various optimization technologies have been developed to solve query retrieval and updating problems. In recent years, most researchers have proposed hybrid optimization techniques; a hybrid system opens the possibility of covering each technology's weaknesses with another's strengths. This paper reviews the advantages and limitations of these optimization techniques.

Meta-reasoning for Multi-agent Communication of Semantic Web Information

Meta-reasoning is essential for multi-agent communication. In this paper we propose a framework for multi-agent communication in which agents employ meta-reasoning about agent and ontology locations in order to communicate semantic information with other agents on the Semantic Web, and also to reason with multiple distributed ontologies. We argue that multi-agent communication of Semantic Web information cannot be realized without reasoning about agent and ontology locations: for an agent to communicate with another agent, it must know where and how to send a message to that agent, and likewise, for an agent to reason with an external Semantic Web ontology, it must know where and how to access that ontology. The agent framework and its communication mechanism are formulated entirely in meta-logic.

EML-Estimation of Multivariate t Copulas with Heuristic Optimization

In recent years, copulas have become very popular in financial research and actuarial science, as they are more flexible in modelling the co-movements and relationships of risk factors than the conventional linear (Pearson) correlation coefficient. However, precise estimation of the copula parameters is vital in order to correctly capture the (possibly nonlinear) dependence structure and joint tail events. In this study, we employ two optimization heuristics, namely Differential Evolution and Threshold Accepting, to tackle the parameter estimation of multivariate t copula models in the EML approach. Since these heuristic optimizers do not rely on gradient search, the EML approach can be applied to the estimation of more complicated copula models, such as high-dimensional copulas. Our experimental study shows that the proposed method provides more robust and more accurate estimates than the IFM approach.
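
As a rough illustration, the following Python sketch estimates a bivariate t copula by maximum likelihood with SciPy's Differential Evolution; it uses pseudo-observations from known margins and synthetic data, whereas the paper's EML setup and its Threshold Accepting variant are more involved:

```python
import numpy as np
from scipy import stats
from scipy.optimize import differential_evolution

def neg_loglik(params, u):
    """Negative log-likelihood of a bivariate t copula at (rho, nu):
    copula density = joint t density over the product of the t margins."""
    rho, nu = params
    x = stats.t.ppf(u, df=nu)                    # map uniforms to t scores
    mvt = stats.multivariate_t(shape=[[1, rho], [rho, 1]], df=nu)
    log_c = (mvt.logpdf(x)
             - stats.t.logpdf(x[:, 0], df=nu)
             - stats.t.logpdf(x[:, 1], df=nu))
    return -np.sum(log_c)

# Synthetic data from a known t copula (rho = 0.6, nu = 5), for illustration.
rng = np.random.default_rng(0)
true = stats.multivariate_t(shape=[[1, 0.6], [0.6, 1]], df=5, seed=rng)
x = true.rvs(size=1000)
u = stats.t.cdf(x, df=5)                         # pseudo-observations

# Differential Evolution needs only box bounds, no gradients.
res = differential_evolution(neg_loglik, bounds=[(-0.95, 0.95), (2.1, 30)],
                             args=(u,), seed=0, tol=1e-7)
print("estimated (rho, nu):", res.x)
```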

A Comparative Study of Page Ranking Algorithms for Information Retrieval

This paper gives an introduction to Web mining, describes Web structure mining in detail, and explores the data structures used by the Web. It also describes and compares different page ranking algorithms used for information retrieval. The basics of Web mining and the Web mining categories are explained. Different PageRank-based algorithms, such as PageRank (PR), Weighted PageRank (WPR), HITS (Hyperlink-Induced Topic Search), DistanceRank, and DirichletRank, are discussed and compared. PageRanks are calculated with the PageRank and Weighted PageRank algorithms for a given hyperlink structure. A simulation program was developed for the PageRank algorithm because PageRank is the only one of these ranking algorithms implemented in a search engine (Google). The outputs are shown in table and chart format.
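
For reference, a minimal power-iteration PageRank in Python, with the usual damping factor d = 0.85 and uniform handling of dangling pages (a textbook sketch, not the simulation program described above):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Power iteration for PR = (1-d)/n + d * M^T @ PR.

    adj[i][j] = 1 if page i links to page j; dangling pages (no out-links)
    distribute their rank uniformly over all pages."""
    A = np.asarray(adj, dtype=float)
    n = len(A)
    out = A.sum(axis=1)
    M = np.where(out[:, None] > 0, A / np.maximum(out, 1)[:, None], 1.0 / n)
    pr = np.full(n, 1.0 / n)
    while True:
        new = (1 - d) / n + d * M.T @ pr
        if np.abs(new - pr).sum() < tol:
            return new
        pr = new

# Tiny hyperlink structure: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
print(pagerank([[0, 1, 1], [0, 0, 1], [1, 0, 0]]))
```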

A Robust Watermarking using Blind Source Separation

In this paper, we present a robust and secure watermarking algorithm. The watermark is first transformed into the frequency domain using the discrete wavelet transform (DWT). All DWT coefficients except those of the LL band are then discarded, and the retained coefficients are permuted and encrypted by a specific mixing. The encrypted coefficients are inserted into the most significant spectral components of the stego-image using a chaotic system. This technique makes our watermark resistant to the attacks of an active intruder (such as compression and geometric distortion) and to noise in the transmission link.
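
A minimal sketch of this pipeline in Python with PyWavelets, assuming a logistic map as the chaotic system, an argsort of the chaotic sequence as the permutation, and additive embedding into the largest-magnitude detail coefficients; the paper's exact mixing and insertion rules may differ:

```python
import numpy as np
import pywt

def logistic_sequence(n, x0=0.7, r=3.99):
    """Chaotic logistic map, used here to derive a key-dependent permutation."""
    seq = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1 - x0)
        seq[i] = x0
    return seq

def embed(host, watermark, alpha=0.05):
    """Keep only the LL band of the watermark's DWT, permute it chaotically,
    and add it to the largest-magnitude detail coefficients of the host."""
    ll, _ = pywt.dwt2(watermark.astype(float), "haar")  # discard non-LL bands
    w = ll.ravel()
    perm = np.argsort(logistic_sequence(w.size))        # chaotic permutation
    w = w[perm]

    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), "haar")
    flat = cH.ravel()
    assert w.size <= flat.size   # watermark must be smaller than the host
    idx = np.argsort(-np.abs(flat))[: w.size]           # most significant coeffs
    flat[idx] += alpha * w                              # additive embedding
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")
```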

Face Recognition with PCA and KPCA using Elman Neural Network and SVM

In this paper, in order to classify face images from the ORL database, Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) are used together with Elman neural network and Support Vector Machine (SVM) classifiers. The Elman network, a recurrent neural network, is proposed for modeling storage systems; it is also used to examine the effect of the number of principal components on classification accuracy and on the time needed to classify the database images. Classification is conducted with various numbers of components, and the results of the Elman neural network and the support vector machine are compared. At best, a recognition accuracy of 97.41% is obtained.
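
As a rough illustration of the PCA/KPCA plus SVM branch (Elman networks are not available in scikit-learn, so only the SVM classifier is shown), a minimal Python sketch on the Olivetti faces, which is the ORL face database:

```python
from sklearn.datasets import fetch_olivetti_faces   # the ORL/AT&T face set
from sklearn.decomposition import PCA, KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

faces = fetch_olivetti_faces()                      # 400 images, 40 subjects
Xtr, Xte, ytr, yte = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target,
    random_state=0)

# Reduce dimensionality with PCA or KPCA, then classify with a linear SVM.
for name, reducer in [("PCA", PCA(n_components=50)),
                      ("KPCA", KernelPCA(n_components=50, kernel="rbf"))]:
    Ztr = reducer.fit_transform(Xtr)
    Zte = reducer.transform(Xte)
    clf = SVC(kernel="linear").fit(Ztr, ytr)
    print(name, "+ SVM accuracy:", accuracy_score(yte, clf.predict(Zte)))
```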