An Efficient Algorithm for Motion Detection Based Facial Expression Recognition using Optical Flow

One of the popular methods for recognizing facial expressions such as happiness, sadness, and surprise is based on the deformation of facial features. Motion vectors which show these deformations can be obtained by optical flow. In this method, for detecting emotions, the resulting set of motion vectors is compared with standard deformation templates caused by facial expressions. In this paper, a new method is introduced to compute the degree of similarity and to make the decision based on the importance of the vectors obtained from an optical flow approach. For finding the vectors, the efficient optical flow method developed by Gautama and Van Hulle [17] is used. The suggested method has been evaluated on the Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method correctly recognizes the facial expressions in 94% of the case studies. The results also show that only a small number of image frames (three frames) is sufficient to detect facial expressions with a success rate of about 83.3%. This is a significant improvement over the available methods.
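
The template-matching decision step can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes each expression is represented by a template of motion vectors on a fixed grid and scores the magnitude-weighted cosine agreement between the observed flow and each template; the template and flow arrays are hypothetical stand-ins for the optical-flow output.

import numpy as np

def expression_score(flow, template):
    """Magnitude-weighted cosine similarity between an observed optical-flow
    field and an expression deformation template (both shaped (N, 2))."""
    mag = np.linalg.norm(flow, axis=1)                      # importance of each observed vector
    f = flow / np.maximum(mag[:, None], 1e-9)               # unit observed directions
    t = template / np.maximum(np.linalg.norm(template, axis=1)[:, None], 1e-9)
    cos = np.sum(f * t, axis=1)                             # per-vector directional agreement
    return float(np.sum(mag * cos) / (np.sum(mag) + 1e-9))  # weight agreement by magnitude

def recognize(flow, templates):
    """Pick the expression whose deformation template best matches the observed flow."""
    return max(templates, key=lambda name: expression_score(flow, templates[name]))

# Hypothetical usage: in practice the templates and flow come from the optical-flow stage.
templates = {"happiness": np.random.randn(64, 2), "surprise": np.random.randn(64, 2)}
flow = np.random.randn(64, 2)
print(recognize(flow, templates))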

Deficiencies of Lung Segmentation Techniques using CT Scan Images for CAD

Segmentation is an important step in medical image analysis and classification for radiological evaluation or computer-aided diagnosis. This paper presents the problem of inaccurate lung segmentation as observed in algorithms presented by researchers working in the area of medical image analysis. The different lung segmentation techniques have been tested using a dataset of 19 patients consisting of a total of 917 images. We obtained the datasets of 11 patients from Akron University, USA, and of 8 patients from Aga Khan Medical University, Pakistan. After testing the algorithms against these datasets, the deficiencies of each algorithm have been highlighted.

Understanding and Designing Situation-Aware Mobile and Ubiquitous Computing Systems

Using spatial models as a shared common basis of information about the environment for different kinds of context-aware systems has been a heavily researched topic in recent years. This research has focused on how to create, update, and merge spatial models so as to enable highly dynamic, consistent, and coherent spatial models at large scale. In this paper, however, we concentrate on how context-aware applications can use this information to adapt their behavior according to the situation they are in. The main idea is to provide the spatial model infrastructure with a situation recognition component based on generic situation templates. A situation template, as part of a much larger situation template library, is an abstract, machine-readable description of a certain basic situation type which can be used by different applications to evaluate their situation. In this paper, different theoretical and practical issues (technical, ethical, and philosophical) that are important for understanding and developing situation-dependent systems based on situation templates are discussed. A basic system design is presented which allows for reasoning with uncertain data using an improved version of a learning algorithm for the automatic adaptation of situation templates. Finally, to support the development of adaptive applications, we present a new situation-aware adaptation concept based on workflows.

Comparing Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta Thinning

This paper compares the Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta (NWG) thinning algorithms for Javanese character image recognition. Thinning is an effective process when the focus is not on the size of the pattern, but rather on the relative position of the strokes in the pattern. The research analyzes the thinning of 60 Javanese characters. In terms of processing time, the Zhang-Suen algorithm gives the best results, with an average processing time of 0.00455188 seconds. However, looking at the percentage of pixels that meet one-pixel thickness, the Rosenfeld algorithm gives the best results, with a 99.98% success rate. In terms of the number of pixels erased, the NWG algorithm gives the best results, erasing 84.12% of the pixels on average. It can be concluded that the Hilditch algorithm performs the least successfully of the four algorithms.
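
For reference, one of the compared methods, Zhang-Suen thinning, can be sketched directly from its standard description: two alternating sub-iterations delete boundary pixels that satisfy neighbor-count, transition-count, and connectivity conditions. This is a minimal, unoptimized sketch of the textbook algorithm, not code from the paper.

import numpy as np

def zhang_suen(img):
    """Zhang-Suen thinning of a binary image (1 = foreground)."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                                  # the two sub-iterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p2, p3, p4, p5, p6, p7, p8, p9 = (
                        img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                        img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1])
                    n = [p2, p3, p4, p5, p6, p7, p8, p9]
                    b = sum(n)                               # foreground neighbours
                    a = sum(n[i] == 0 and n[(i+1) % 8] == 1 for i in range(8))  # 0->1 transitions
                    if step == 0:
                        cond = p2*p4*p6 == 0 and p4*p6*p8 == 0
                    else:
                        cond = p2*p4*p8 == 0 and p2*p6*p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed |= bool(to_delete)
    return img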

A Self-Adaptive Genetic-Based Algorithm for the Identification and Elimination of Bad Data

The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data corrupt the results of state estimation obtained with the popular weighted least squares method. However, this is a difficult problem to handle, especially when dealing with multiple interacting conforming errors. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators; instead of a randomized search throughout the whole search space, the search is therefore directed, and the optimum solution is obtained at a very early stage (a maximum of 5 generations). The algorithm also utilizes an accumulating database of already computed cases to reduce the computational burden to a minimum. Tests are conducted with reference to the standard IEEE test systems, and the test results are very promising.
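
The way residuals can direct a genetic search may be sketched as follows. This is only an illustration of the idea under assumed details, not the authors' operators: the normalized residuals from a WLS run bias both the initial population and the per-gene mutation rate, so chromosomes (binary flags marking measurements as bad) are perturbed mostly around suspect measurements.

import numpy as np

def suspicion(r_normalized, threshold=3.0):
    """Map normalized residuals to per-measurement suspicion levels in [0, 1]."""
    return np.clip(np.abs(r_normalized) / threshold, 0.0, 1.0)

def init_population(susp, pop_size, rng):
    """Each chromosome flags measurements as bad (1) or good (0), seeded by suspicion."""
    return (rng.random((pop_size, susp.size)) < susp).astype(int)

def mutate(chrom, susp, base_rate, rng):
    """Genes tied to suspicious measurements mutate more often (directed search)."""
    rate = base_rate * (0.1 + susp)
    flip = rng.random(chrom.size) < rate
    return np.where(flip, 1 - chrom, chrom)

rng = np.random.default_rng(0)
r_N = rng.normal(0, 1, 20)
r_N[[3, 7]] = [6.0, -5.0]                       # two hypothetical gross errors
pop = init_population(suspicion(r_N), pop_size=30, rng=rng)
child = mutate(pop[0], suspicion(r_N), base_rate=0.05, rng=rng)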

An Energy-Efficient Distributed Unequal Clustering Protocol for Wireless Sensor Networks

Wireless sensor networks have been extensively deployed and researched. One of the major issues in wireless sensor networks is developing an energy-efficient clustering protocol, since clustering provides an effective way to prolong the lifetime of a wireless sensor network. In this paper, we compare several clustering protocols which significantly affect the balancing of energy consumption, and we propose an Energy-Efficient Distributed Unequal Clustering (EEDUC) algorithm which provides a new way of creating distributed clusters. In EEDUC, each sensor node sets a waiting time, computed as a function of its residual energy and the number of neighboring nodes, and this waiting time is used to distribute the cluster heads. We also propose an unequal clustering mechanism to solve the hot-spot problem. Simulation results show that EEDUC distributes the cluster heads well, balances the energy consumption among the cluster heads, and increases the network lifetime.
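
The waiting-time idea can be sketched as follows. The exact EEDUC formulas are not given in the abstract, so the weighting and the competition-radius expression below are assumptions for illustration only: nodes with more residual energy and more neighbors wait less, announce themselves as cluster heads first, and suppress their neighbors, while clusters nearer the sink are kept smaller to ease the hot-spot problem.

import random

def waiting_time(residual_energy, max_energy, n_neighbors, max_neighbors,
                 t_max=1.0, jitter=0.05):
    """Shorter wait for nodes with more energy and more neighbours (assumed 50/50 weighting)."""
    energy_term = 1.0 - residual_energy / max_energy
    density_term = 1.0 - n_neighbors / max(max_neighbors, 1)
    return t_max * (0.5 * energy_term + 0.5 * density_term) + random.uniform(0, jitter)

def competition_radius(d_to_sink, d_max, r_max, alpha=0.5):
    """Unequal clustering: nodes closer to the sink compete over a smaller radius."""
    return r_max * (1.0 - alpha * (1.0 - d_to_sink / d_max))

print(waiting_time(0.9, 1.0, 8, 10), competition_radius(20.0, 100.0, 30.0))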

Topological Properties of an Exponential Random Geometric Graph Process

In this paper we consider a one-dimensional random geometric graph process with the inter-nodal gaps evolving according to an exponential AR(1) process. The transition probability matrix and stationary distribution are derived for the Markov chains concerning connectivity and the number of components. We also analyze the hitting time for disconnectivity. In addition to dynamical properties, we study topological properties of static snapshots. We obtain the degree distributions as well as asymptotically precise bounds and a strong law of large numbers for the connectivity threshold distance and the largest nearest-neighbor distance, among other results. Both exact results and limit theorems are provided.
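
A simulation of the model setup may help fix ideas. One standard exponential AR(1) construction is assumed here (it may differ in detail from the one used in the paper): each gap evolves as G_t = rho*G_{t-1} + B_t*E_t with B_t Bernoulli(1 - rho) and E_t exponential, which preserves the exponential marginal, and the snapshot graph is connected exactly when every gap is at most the threshold distance r.

import numpy as np

def simulate(n_gaps, steps, rho, lam, r, rng):
    """Evolve the inter-nodal gaps and record connectivity of each snapshot."""
    g = rng.exponential(1.0 / lam, n_gaps)                    # stationary start
    connected = []
    for _ in range(steps):
        innov = rng.exponential(1.0 / lam, n_gaps) * (rng.random(n_gaps) > rho)
        g = rho * g + innov                                    # exponential AR(1) update
        connected.append(bool(np.all(g <= r)))                 # connected iff no gap exceeds r
    return connected

rng = np.random.default_rng(1)
states = simulate(n_gaps=50, steps=1000, rho=0.7, lam=1.0, r=3.0, rng=rng)
print("fraction of time connected:", sum(states) / len(states))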

Multiobjective Optimization Solution for Shortest Path Routing Problem

The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. This problem has been addressed by considering the quality-of-service parameters, delay and cost, as separate objectives or as a weighted sum of both. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them attractive for solving problems with multiple and conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) for solving the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization. Elitism ensures that the best solution does not deteriorate in subsequent generations. Results for a sample test network are presented to demonstrate the capability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in a single run. The results obtained by NSGA are compared with those of a single-objective weighting-factor method solved with a Genetic Algorithm (GA).
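
One common form of priority-based decoding is sketched below; the authors' exact scheme may differ. A chromosome assigns a priority to every node, and a path is grown from the source by always moving to the unvisited neighbor with the highest priority; the decoded path is then scored on delay and cost by the multiobjective algorithm.

def decode_path(priorities, adjacency, src, dst):
    """Decode a node-priority chromosome into a source-destination path."""
    path, node, visited = [src], src, {src}
    while node != dst:
        candidates = [n for n in adjacency[node] if n not in visited]
        if not candidates:
            return None                      # dead end: infeasible chromosome
        node = max(candidates, key=lambda n: priorities[n])
        visited.add(node)
        path.append(node)
    return path

# Tiny hypothetical network: node 0 is the source, node 3 the destination.
adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(decode_path({0: 4, 1: 1, 2: 3, 3: 2}, adjacency, src=0, dst=3))   # -> [0, 2, 3]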

Optimal Allocation of DG Units for Power Loss Reduction and Voltage Profile Improvement of Distribution Networks using PSO Algorithm

This paper proposes a Particle Swarm Optimization (PSO) based technique for the optimal allocation of Distributed Generation (DG) units in power systems. Our aim is to determine the optimal number, type, size, and location of DG units for voltage profile improvement and power loss reduction in a distribution network. Two types of DGs are considered, and a distribution load flow is used to calculate the exact losses. The load flow algorithm is combined with PSO and iterated until acceptable results are obtained. The suggested method is implemented in MATLAB. Test results indicate that the PSO method obtains better results than a simple heuristic search method on the 30-bus and 33-bus radial distribution systems. It obtains the maximum loss reduction for each of the two types of optimally placed multiple DG units. Moreover, voltage profile improvement is achieved.
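
The PSO loop that such an approach wraps around the load flow can be sketched generically. Here power_loss() is only a placeholder for the distribution load flow that evaluates each candidate allocation, and a particle is assumed to encode the DG sizes at the candidate buses; the parameter values are illustrative.

import numpy as np

def power_loss(x):                    # placeholder objective; real code runs a load flow
    return float(np.sum((x - 1.5) ** 2))

def pso(dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=3.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))               # candidate DG sizes
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([power_loss(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                  # keep sizes within limits
        f = np.array([power_loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, power_loss(gbest)

print(pso(dim=3))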

A New Heuristic Statistical Methodology for Optimizing Queuing Networks Using Discrete Event Simulation

Most real queuing systems include special properties and constraints which cannot be analyzed directly using the results of classical queuing models. The lack of Markovian properties, non-exponential patterns, and service constraints are examples of such conditions. This paper presents an applied general algorithm for analyzing and optimizing queuing systems. The stages of the algorithm are described through a real case study, which consists of an almost completely non-Markovian system with a limited number of customers and limited capacities, as well as many of the common exceptions found in real queuing networks. Simulation is used to optimize this system. The stages introduced in this article therefore include preliminary modeling, determining the type of queuing system, defining performance indices, statistical analysis and goodness-of-fit testing, model validation, and simulation-based optimization of the system.

The Design of Self-evolving Artificial Immune System II for Permutation Flow-shop Problem

The Artificial Immune System (AIS) has been adopted as a heuristic algorithm for solving combinatorial problems for decades. Nevertheless, many of these studies simply applied the algorithm and seldom proposed approaches for enhancing its efficiency. In this paper, we continue our previous research and develop a Self-evolving Artificial Immune System II (SEAIS II) by coordinating the T and B cells of the immune system and building a block-based artificial chromosome to speed up computation and achieve better performance on problems of different complexities. The design of the plasma cell and clonal selection mechanisms, which emulate the immune response, gives the AIS both global and local search ability and prevents it from being trapped in local optima. The experimental results validate that SEAIS II is effective in solving permutation flow-shop problems.
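
A basic clonal-selection step on a permutation (job sequence) is sketched below for orientation. The block-based chromosome and the T/B-cell coordination of SEAIS II are beyond this minimal example; the makespan recurrence is the standard flow-shop one, and the clone counts and mutation rates are illustrative assumptions.

import random

def makespan(seq, p):                         # p[job][machine] processing times
    m = len(p[0]); c = [0.0] * m
    for j in seq:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k else 0.0) + p[j][k]
    return c[-1]

def clonal_step(pop, p, n_clones=5, swap_rate=0.2):
    """Clone the fittest antibodies, mutate the clones by swaps, keep the best."""
    pop = sorted(pop, key=lambda s: makespan(s, p))
    new = list(pop)
    for rank, ab in enumerate(pop[:3]):                        # better antibodies get more clones
        for _ in range(n_clones - rank):
            clone = ab[:]
            if random.random() < swap_rate + 0.2 * rank:       # worse rank -> heavier mutation
                i, j = random.sample(range(len(clone)), 2)
                clone[i], clone[j] = clone[j], clone[i]
            new.append(clone)
    return sorted(new, key=lambda s: makespan(s, p))[:len(pop)]

p = [[3, 2, 4], [1, 4, 2], [2, 1, 3], [4, 3, 1]]               # 4 jobs x 3 machines (toy data)
pop = [random.sample(range(4), 4) for _ in range(6)]
for _ in range(50):
    pop = clonal_step(pop, p)
print(pop[0], makespan(pop[0], p))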

2-Dimensional Finger Gesture Based Mobile Robot Control Using Touch Screen

The purpose of this study was to present a reliable means for human-computer interfacing based on finger gestures made in two dimensions, which can be interpreted and used to control a remote robot's movement. The gestures were captured and interpreted using an algorithm based on trigonometric functions, calculating the angular displacement from one point of touch to the next as the user's finger moved within a time interval, thereby allowing pattern spotting of the captured gesture. In this paper, the design and implementation of such a gesture-based user interface utilizing the aforementioned algorithm is presented. These techniques were then used to control a remote mobile robot's movement. A resistive touch screen was selected as the gesture sensor, and a programmed microcontroller was used to interpret the gestures.
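
The angular-displacement idea is easy to illustrate. The sketch below assumes a conventional x-right, y-up coordinate frame and a purely hypothetical mapping from stroke angle to robot command; the original system runs an equivalent trigonometric computation on a microcontroller.

import math

def angular_displacement(p0, p1):
    """Angle (degrees) of the segment from touch point p0 to p1, measured from the +x axis."""
    return math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0]))

def classify_stroke(points):
    """Map a short stroke to a coarse robot command (hypothetical mapping)."""
    angle = angular_displacement(points[0], points[-1]) % 360
    if angle < 45 or angle >= 315:  return "RIGHT"
    if angle < 135:                 return "FORWARD"
    if angle < 225:                 return "LEFT"
    return "BACKWARD"

print(classify_stroke([(10, 10), (12, 14), (15, 40)]))   # mostly upward drag -> FORWARD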

A Novel VLSI Architecture for Image Compression Model Using Low-Power Discrete Cosine Transform

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a low-complexity Discrete Cosine Transform (DCT) hardware architecture for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation, and vector processing is used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is modeled in MATLAB, and the VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture was synthesized using RTL Compiler and mapped to 180nm standard cells, and simulation was done using ModelSim. The simulation results from MATLAB and Verilog HDL are compared. Detailed power and area analysis was done using RTL Compiler from Cadence; the power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
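
The row-column decomposition that the two 1-D DCT blocks and the transposition memory implement in hardware corresponds to Y = C X C^T: a 1-D DCT on the rows, a transpose, and a second 1-D DCT. The sketch below is a software reference for that arithmetic only, not the hardware architecture.

import numpy as np

N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                    # DC row scaling makes C orthonormal

def dct2(block):   return C @ block @ C.T     # first 1-D pass, transpose, second 1-D pass
def idct2(coeff):  return C.T @ coeff @ C     # IDCT reconstructs the original block

block = np.random.rand(8, 8) * 255
coeff = dct2(block)
print(np.allclose(idct2(coeff), block))       # True: exact up to floating-point rounding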

The Tag Authentication Scheme using Self-Shrinking Generator on RFID System

Since communication between the tag and the reader in an RFID system takes place over radio, anyone can access the tag and obtain its information, and because a tag always replies with the same ID, it is hard to distinguish between a real and a fake tag. Thus, there are many security problems in today's RFID systems. First, an unauthorized reader can easily read the ID information of any tag. Second, an adversary can easily deceive a legitimate reader using collected tag ID information, impersonating a legitimate tag. These security problems can typically be solved by encrypting the messages transmitted between tag and reader and by authenticating the tag. In this paper, to solve these security problems in RFID systems, we propose a tag authentication scheme based on the self-shrinking generator (SSG). The SSG algorithm used in our scheme was proposed by W. Meier and O. Staffelbach at EUROCRYPT '94. The algorithm consists of only one LFSR and selection logic to generate a random stream; thus it is well suited to hardware implementation on devices with extremely limited resources, and the output generated by the SSG at each step serves as a random stream, allowing us to design a lightweight authentication scheme secure against several network attacks. We therefore propose a novel tag authentication scheme which uses the SSG to encrypt the tag ID transmitted from tag to reader and thereby achieves tag authentication.
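
The self-shrinking rule itself is simple to sketch: the LFSR output is taken in pairs (a, b); when a = 1 the bit b is emitted, otherwise the pair is discarded. The feedback taps, register size, and keystream use below are illustrative only, not the parameters or protocol of the paper.

def lfsr(state, taps, nbits):
    """Fibonacci LFSR: output the low bit, feed the XOR of the tapped bits back at the top."""
    while True:
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

def self_shrinking(state, taps=(0, 2, 3, 5), nbits=16):
    """Self-shrinking generator: keep b from each pair (a, b) only when a == 1."""
    bits = lfsr(state, taps, nbits)
    while True:
        a, b = next(bits), next(bits)
        if a == 1:
            yield b

ks = self_shrinking(0xACE1)                          # hypothetical seed shared by tag and reader
keystream = [next(ks) for _ in range(16)]
tag_id = 0b1011001110001111
cipher = tag_id ^ int("".join(map(str, keystream)), 2)   # stream-cipher style masking of the tag ID
print(bin(cipher))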

Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition

In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two other multi-resolution transforms, Wavelet (DWT) and Contourlet, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE, and FERET face databases convince us that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
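
The feature-construction step can be sketched independently of the transform: block-based statistics of subband coefficients are concatenated into a compact vector and classified with LDA. In the sketch below, random arrays stand in for the Curvelet subbands (which would come from an external Curvelet implementation), and the particular statistics are assumptions for illustration.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def block_features(subband, block=8):
    """Mean, standard deviation, and energy of each block of a coefficient subband."""
    h, w = subband.shape
    feats = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            b = subband[i:i+block, j:j+block]
            feats += [b.mean(), b.std(), np.abs(b).sum()]
    return feats

def face_vector(subbands):
    return np.concatenate([block_features(s) for s in subbands])

rng = np.random.default_rng(0)
X = np.array([face_vector([rng.normal(size=(32, 32)) for _ in range(3)]) for _ in range(40)])
y = np.repeat(np.arange(4), 10)                               # 4 dummy subjects, 10 samples each
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))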

Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and increasing classification efficiency, it does not necessarily reduce the number of features that must be measured since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A k-nearest-neighbor (kNN) classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be substantially reduced.
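
The fitness evaluation described above can be sketched as follows, under assumed details: a chromosome is a vector of feature weights, features whose weights fall below a small threshold are effectively dropped, and kNN cross-validation accuracy (minus a small per-feature penalty) is the fitness. The GA machinery itself (selection, crossover, mutation) is omitted for brevity.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(weights, X, y, drop_below=0.05, k=3):
    w = np.where(weights < drop_below, 0.0, weights)    # masked features need not be measured
    score = cross_val_score(KNeighborsClassifier(k), X * w, y, cv=3).mean()
    return score - 0.01 * np.count_nonzero(w)           # small penalty per retained feature

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)           # toy labels driven by two features
weights = rng.random(10)                                # one candidate chromosome
print(fitness(weights, X, y))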

Statistics over Lyapunov Exponents for Feature Extraction: Electroencephalographic Changes Detection Case

A new approach based on the consideration that electroencephalogram (EEG) signals are chaotic is presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for the detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs to the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
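
The dimensionality-reduction step can be sketched simply: instead of feeding the whole set of Lyapunov exponents to the network, a few summary statistics per segment form the feature vector passed to the MLPNN. The particular statistics chosen below (max, min, mean, standard deviation) are an assumption for illustration, and random values stand in for exponents computed by a nonlinear dynamics routine.

import numpy as np

def lyapunov_statistics(exponents):
    """Summarize a set of Lyapunov exponents as a compact 4-dimensional feature vector."""
    e = np.asarray(exponents, dtype=float)
    return np.array([e.max(), e.min(), e.mean(), e.std()])

segments = [np.random.randn(128) * 0.1 for _ in range(3)]     # exponent sets per EEG segment
features = np.vstack([lyapunov_statistics(e) for e in segments])
print(features.shape)    # (3, 4): one small feature vector per segment, ready for the MLPNN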

Adaptive Hierarchical Key Structure Generation for Key Management in Wireless Sensor Networks using A*

Wireless sensor networks have a wide spectrum of civil and military applications that call for secure communication, such as terrorist tracking and target surveillance in hostile environments. For secure communication in these application areas, we propose a method for generating a hierarchical key structure for efficient group key management. In this paper, we apply the A* algorithm to generate a hierarchical key structure by considering historical data on the ratio of addition and eviction of sensor nodes in the location where they are deployed. The generated key tree structure provides an efficient way of managing the group key in terms of energy consumption when addition and eviction events occur; the A* algorithm uses the historical data to minimize the number of messages needed for group key management. Experiments with the generated tree show the efficiency of the proposed method.

Self-tuned LMS Algorithm for Sinusoidal Time Delay Tracking

In this paper, the problem of estimating the time delay between two spatially separated noisy sinusoidal signals by system identification modeling is addressed. The system is assumed to be perturbed by both input and output additive white Gaussian noise. The presence of input noise introduces bias in the time delay estimates, and normally the solution requires a priori knowledge of the input-output noise variance ratio. We utilize the cascade of a self-tuned filter with the time delay estimator, thus making the delay estimates robust to input noise. Simulation results are presented to confirm the superiority of the proposed approach at low input signal-to-noise ratios.
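
For orientation, the adaptive-filter view of time delay estimation can be sketched as follows. This is a plain LMS system-identification sketch, not the authors' self-tuned structure: an adaptive FIR filter models the path from x(n) to y(n) = x(n - D) + noise, and the delay is read off the filter's phase response at the known sinusoid frequency (so it is identifiable modulo one period). Signal parameters below are illustrative.

import numpy as np

def lms_delay(x, y, f0, n_taps=20, mu=0.005):
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        xv = x[n - n_taps:n][::-1]            # most recent sample first
        e = y[n] - w @ xv                     # estimation error
        w += mu * e * xv                      # LMS coefficient update
    W = np.sum(w * np.exp(-2j * np.pi * f0 * np.arange(n_taps)))   # response at the tone
    period = 1.0 / f0
    return (-np.angle(W) / (2 * np.pi * f0)) % period              # delay estimate in samples

rng = np.random.default_rng(0)
n = np.arange(4000)
f0, D = 0.025, 7
s = np.sin(2 * np.pi * f0 * n)
x = s + 0.1 * rng.normal(size=n.size)         # reference sensor with input noise
y = np.roll(s, D) + 0.1 * rng.normal(size=n.size)   # delayed sensor with output noise
print(lms_delay(x, y, f0))                    # close to D = 7 (modulo one period)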

Using Memetic Algorithms for the Solution of Technical Problems

The intention of this paper is to help users of evolutionary algorithms adapt them more easily to the problem at hand. For many problems in the technical field it is not necessary to reach an optimal solution, but rather to reach a good solution in an acceptable time. In many cases the solution is undetermined, or there exists no method to determine it. For these cases an evolutionary algorithm can be useful. This paper intends to give the user rules of thumb that make it easier to decide whether a problem is suitable for an evolutionary algorithm and how to design one.