Facial Expression Recognition from a Complex Background using Face Context and Adaptively Weighted Sub-Pattern PCA

A new approach to facial expression recognition based on face context and adaptively weighted sub-pattern PCA (Aw-SpPCA) is presented in this paper. The facial region and other parts of the body are segmented from the complex environment using a skin color model. An algorithm is proposed to accurately detect the face region in the segmented image based on the constant height-to-width ratio of the face (δ = 1.618). The paper also discusses a new concept for detecting the eye and mouth positions. The desired part of the face is cropped to analyze the expression of a person. Unlike PCA, which operates on a whole image pattern, Aw-SpPCA operates directly on sub-patterns partitioned from the original whole pattern and extracts features from them separately. Aw-SpPCA can adaptively compute the contribution of each part to the classification task in order to enhance robustness to both expression and illumination variations. Experiments on a standard face database with five types of facial expressions show that the proposed method is competitive.
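
A minimal sketch of the two tests described above, assuming an RGB image; the skin-colour thresholds, the tolerance tol and the function names are illustrative assumptions rather than the authors' exact rules:

    import numpy as np

    def skin_mask(img_rgb):
        """Rough RGB skin-colour rule; thresholds are illustrative, not the paper's."""
        r = img_rgb[..., 0].astype(int)
        g = img_rgb[..., 1].astype(int)
        b = img_rgb[..., 2].astype(int)
        return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

    def looks_like_face(height, width, delta=1.618, tol=0.15):
        """Accept a segmented skin region as a face if its height/width ratio is close to delta."""
        return abs(height / float(width) - delta) / delta <= tol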

Mathematical Modeling of SISO based Timoshenko Structures – A Case Study

This paper presents the mathematical modeling of a single-input single-output (SISO) Timoshenko smart beam. This mathematical model is then used to design a multirate output feedback based discrete sliding mode controller using Bartoszewicz's law to suppress the flexural vibrations. The first two dominant vibratory modes are retained. Here, an application of discrete sliding mode control in smart systems is presented. The algorithm uses a fast output sampling based sliding mode control strategy that avoids the use of switching in the control input and hence avoids chattering. This method does not need measurement of the system states for feedback, as it makes use of only the output samples for designing the controller. Thus, the methodology is more practical and easy to implement.
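
As a rough illustration of the chattering-free reaching-law idea only (a full state-feedback toy, not the paper's multirate fast-output-sampling design; the plant matrices A, B, c and the horizon k_star are arbitrary assumptions, not the Timoshenko beam model):

    import numpy as np

    # Arbitrary discrete-time plant x[k+1] = A x[k] + B u[k]; this is NOT the smart-beam model.
    A = np.array([[1.0, 0.1], [-0.2, 0.95]])
    B = np.array([[0.0], [0.1]])
    c = np.array([[2.0, 1.0]])            # sliding variable s = c x

    def control(x, k, s0, k_star):
        """Steer s along an a priori trajectory that reaches zero in k_star steps,
        with no switching term in the control input (hence no chattering)."""
        s_d_next = s0 * max(0.0, 1.0 - (k + 1) / k_star)   # linearly decreasing demand on s
        return ((s_d_next - c @ A @ x) / (c @ B)).item()

    x = np.array([[1.0], [0.0]])
    s0 = (c @ x).item()
    for k in range(30):
        u = control(x, k, s0, k_star=10)
        x = A @ x + B * u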

Surface Flattening based on Linear-Elastic Finite Element Method

This paper presents a flattening algorithm for three-dimensional triangular surfaces based on the linear-elastic finite element method. First, an intrinsic characteristic preserving method is used to obtain the initial developing graph, which preserves the angles and the length ratios between adjacent edges. Then, an iterative equation is established based on the linear-elastic finite element method, and the flattening result with an equilibrium state of internal forces is obtained by solving this iterative equation. The results show that complex surfaces can be handled by the proposed method, making it an efficient tool for computer-aided design applications such as mould design.

Clustering based Voltage Control Areas for Localized Reactive Power Management in Deregulated Power System

In this paper, a new K-means clustering based approach for the identification of voltage control areas is developed. Voltage control areas are important for efficient reactive power management in power systems operating under a deregulated environment. Although voltage control areas are conventionally formed using hierarchical clustering based methods, the present paper investigates the capability of K-means clustering for forming voltage control areas. The proposed method is tested and compared on the IEEE 14-bus and IEEE 30-bus systems. The results show that the K-means based method is competitive with the conventional hierarchical approach.
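
A minimal sketch of the clustering step, assuming each bus is described by a feature vector such as a row of a V-Q sensitivity or electrical-distance matrix; the feature construction and the use of scikit-learn are illustrative assumptions, not the paper's exact procedure:

    import numpy as np
    from sklearn.cluster import KMeans

    def voltage_control_areas(bus_features, n_areas):
        """Group buses into voltage control areas by K-means on per-bus feature vectors
        (e.g. rows of a V-Q sensitivity or electrical-distance matrix)."""
        labels = KMeans(n_clusters=n_areas, n_init=10, random_state=0).fit_predict(bus_features)
        return {area: np.where(labels == area)[0].tolist() for area in range(n_areas)}

    # Toy example: 14 buses with 3 arbitrary features each (placeholder for real sensitivities).
    rng = np.random.default_rng(0)
    areas = voltage_control_areas(rng.random((14, 3)), n_areas=4)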

Universal Method for Timetable Construction based on Evolutionary Approach

Timetabling problems are often hard and time-consuming to solve. Most solution methods address only one problem instance or class. This paper describes a universal method for solving large, highly constrained timetabling problems from different domains. The solution is based on an evolutionary algorithm framework and operates on two levels: the first-level evolutionary algorithm tries to find a solution based on a given set of operating parameters, while the second-level algorithm is used to establish those parameters. Tabu search is employed to speed up the solution-finding process on the first level. The method has been used to solve three different timetabling problems with promising results.
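
A schematic sketch of the two-level structure, with a toy "timetable" fitness and arbitrary parameter ranges standing in for the real problem; the tabu-search acceleration and the actual timetabling constraints are omitted:

    import random

    def inner_ea(mutation_rate, pop_size, generations=50):
        """First level: evolve a toy 'timetable' (a bit string whose ones count constraint violations)."""
        pop = [[random.random() < 0.5 for _ in range(40)] for _ in range(pop_size)]
        def cost(ind):
            return sum(ind)                        # fewer violations is better
        for _ in range(generations):
            pop.sort(key=cost)
            parents = pop[: pop_size // 2]
            children = [[(not g) if random.random() < mutation_rate else g
                         for g in random.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(cost(ind) for ind in pop)

    def outer_ea(candidates=10, rounds=5):
        """Second level: search for operating parameters (mutation rate, population size) for the first level."""
        params = [(random.uniform(0.01, 0.3), random.choice([20, 40, 60])) for _ in range(candidates)]
        for _ in range(rounds):
            params.sort(key=lambda p: inner_ea(*p))
            survivors = params[: candidates // 2]
            params = survivors + [(max(0.005, random.gauss(mr, 0.02)), ps) for mr, ps in survivors]
        return min(params, key=lambda p: inner_ea(*p))

    best_mutation_rate, best_pop_size = outer_ea()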

Integrated Approaches to Enhance Aggregate Production Planning with Inventory Uncertainty Based On Improved Harmony Search Algorithm

This work presents a multiple objective linear programming (MOLP) model based on the desirability function approach for solving the aggregate production planning (APP) decision problem built upon Masud and Hwang's model. The proposed model minimises total production costs, carrying or backordering costs, and rates of change in labor levels. An industrial case demonstrates the feasibility of applying the proposed model to APP problems with three scenarios of inventory levels. The proposed model yields an efficient compromise solution and overall levels of decision-maker (DM) satisfaction with the multiple combined response levels. There has been a trend towards solving complex planning problems using various metaheuristics. Therefore, in this paper, the multi-objective APP problem is also solved by hybrid metaheuristics based on the improved harmony search algorithm with hunting search (HuSIHSA) and firefly (FAIHSA) mechanisms, and the resulting solutions are compared. It is observed that the FAIHSA can be used as a successful alternative solution mechanism for solving APP problems over the three scenarios. Furthermore, with proper selection of control parameters, the FAIHSA provides a systematic framework for facilitating the decision-making process, enabling the decision maker to interactively modify the desirability function approach and related model parameters until a good optimal solution is obtained.
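
For readers unfamiliar with the desirability function approach mentioned above, a minimal sketch of a smaller-is-better desirability and its combination into an overall score; the target and upper-bound values are placeholders, not the case-study data:

    def desirability_min(y, target, upper, shape=1.0):
        """Smaller-is-better desirability: 1 at/below the target, 0 at/above the upper bound."""
        if y <= target:
            return 1.0
        if y >= upper:
            return 0.0
        return ((upper - y) / (upper - target)) ** shape

    def overall_desirability(d_values):
        """Geometric mean of the individual desirabilities."""
        prod = 1.0
        for d in d_values:
            prod *= d
        return prod ** (1.0 / len(d_values))

    # Three APP objectives: total production cost, carrying/backordering cost, labour-level change.
    d = [desirability_min(1.2e6, 1.0e6, 2.0e6),
         desirability_min(8.0e4, 5.0e4, 2.0e5),
         desirability_min(12, 0, 50)]
    score = overall_desirability(d)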

Application of Artificial Intelligence for Tuning the Parameters of an AGC

This paper deals with the tuning of parameters for Automatic Generation Control (AGC). A two-area interconnected hydrothermal system with a PI controller is considered. Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms have been applied to optimize the controller parameters. Two objective functions, namely the Integral Square Error (ISE) and the Integral of Time-multiplied Absolute value of the Error (ITAE), are considered for optimization. The effectiveness of an objective function is assessed based on the variation in tie-line power and the change in frequency in both areas. MATLAB/SIMULINK was used as the simulation tool. Simulation results reveal that ITAE is a better objective function than ISE. The performances of the optimization algorithms are also compared, and it was found that the genetic algorithm gives better results than the particle swarm optimization algorithm for the AGC problem.
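
The two objective functions, sketched for a sampled error signal (assumed here to be evaluated on the tie-line power and frequency deviations); the rectangle-rule integration and the toy error signal are illustrative:

    import numpy as np

    def ise(e, dt):
        """Integral of Squared Error, approximated by a rectangle rule on samples e with step dt."""
        return float(np.sum(e ** 2) * dt)

    def itae(t, e, dt):
        """Integral of Time-multiplied Absolute Error: sum of t * |e(t)| * dt."""
        return float(np.sum(t * np.abs(e)) * dt)

    # Toy example: a decaying frequency-deviation error sampled every 10 ms.
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    e = 0.02 * np.exp(-0.5 * t) * np.cos(3 * t)
    print(ise(e, dt), itae(t, e, dt))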

A Modular On-line Profit Sharing Approach in Multiagent Domains

How to coordinate the behaviors of agents through learning is a challenging problem in multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, which usually cannot be satisfied when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other agents. However, as the number of agents in a multiagent environment increases, the state space of each agent grows exponentially, which causes a combinatorial explosion problem. Profit sharing is one of the reinforcement learning methods that allows agents to learn effective behaviors from their experiences even in non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which needs much memory to store each state-action pair during the learning process, we first present an on-line rational profit sharing algorithm. We then integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated using the pursuit problem.
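
A minimal sketch of the basic profit-sharing credit-assignment step; the geometric credit function and the toy episode are illustrative choices, and the on-line and modular extensions proposed in the paper are not reproduced:

    from collections import defaultdict

    def profit_sharing_update(weights, episode, reward, decay=0.3):
        """Reinforce every (state, action) pair on the episode that led to a reward,
        with geometrically decreasing credit for earlier steps."""
        credit = reward
        for state, action in reversed(episode):   # the last step gets the largest share
            weights[(state, action)] += credit
            credit *= decay
        return weights

    weights = defaultdict(float)
    episode = [("s0", "right"), ("s1", "right"), ("s2", "up")]   # toy trajectory to the goal
    profit_sharing_update(weights, episode, reward=1.0)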

Inheritance Growth: A Biology-Inspired Method to Build Structures in P2P

IT infrastructures are becoming more and more complex. In the first industrial IT systems, the P2P paradigm has therefore replaced the traditional client-server model, and methods of self-organization are gaining more and more importance. It is known from the past that especially regular structures such as grids may significantly improve system behavior and performance. This contribution introduces a new algorithm based on a biological analogue, which enables the growth of several regular structures on top of anarchically grown P2P or social network structures.

Analysis of Relation between Unlabeled and Labeled Data to Self-Taught Learning Performance

Obtaining labeled data for supervised learning is often difficult and expensive, and thus the trained learning algorithm tends to overfit due to the small number of training data. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features for improving performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different unlabeled data sets, which have different degrees of relation to the labeled data, to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a generative distribution completely different from that of the labeled data yield the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms through the use of unsupervised learning.

FPGA Based Parallel Architecture for the Computation of Third-Order Cross Moments

Higher-Order Statistics (HOS), such as cumulants and cross moments, and their frequency-domain counterparts, known as polyspectra, have emerged as a powerful signal processing tool for the synthesis and analysis of signals and systems. Algorithms used for the computation of cross moments are computationally intensive and require high computational speed for real-time applications. For efficiency and high speed, it is often advantageous to realize computation-intensive algorithms in hardware. A promising solution that combines high flexibility with the speed of traditional hardware is the Field Programmable Gate Array (FPGA). In this paper, we present an FPGA-based parallel architecture for the computation of third-order cross moments. The proposed design is coded in Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) and functionally verified by implementing it on a Xilinx Spartan-3 XC3S2000FG900-4 FPGA. Implementation results are presented and show that the proposed design can operate at a maximum frequency of 86.618 MHz.
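
A software reference model of the quantity being computed (the FPGA architecture itself is not reproduced here); for three real sequences, the third-order cross moment at lags (tau1, tau2) can be estimated as the sample mean of x(n)·y(n+tau1)·z(n+tau2):

    import numpy as np

    def third_order_cross_moment(x, y, z, tau1, tau2):
        """Estimate c_xyz(tau1, tau2) = E[x(n) y(n+tau1) z(n+tau2)] over all valid n."""
        x, y, z = map(np.asarray, (x, y, z))
        n_start = max(0, -tau1, -tau2)
        n_stop = min(len(x), len(y) - tau1, len(z) - tau2)
        n = np.arange(n_start, n_stop)
        return float(np.mean(x[n] * y[n + tau1] * z[n + tau2]))

    # Toy usage on a random sequence (x = y = z gives a third-order auto-moment).
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)
    print(third_order_cross_moment(x, x, x, 1, 2))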

Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification

In this article, a modification of the fuzzy ART network algorithm aimed at making it supervised is carried out. It consists of searching for the comparison, training and vigilance parameters that give the minimum quadratic distances between the outputs of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not only once, as is usual, but as many times as its architecture keeps evolving or as long as the objective error is not reached. In this way, we do not need to worry about the values to impose on the eight parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we carried out a classification of an image of the Bay of Algiers taken by SPOT XS. We use as evaluation criteria the training duration, the mean square error (MSE) at the control stage, and the rate of correct classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP presents the best quality/computing-time compromise.

Modified Levenberg-Marquardt Method for Neural Networks Training

In this paper, a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method, and a simulation verifies the results of the proposed method.
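
For context, a sketch of the baseline Levenberg-Marquardt weight update that the paper modifies; in the standard scheme the damping factor mu is increased or decreased after each step depending on whether the sum of squared errors improved (the oscillation-reducing modification itself is not reproduced):

    import numpy as np

    def lm_step(w, J, e, mu):
        """One Levenberg-Marquardt step: w_new = w - (J^T J + mu I)^{-1} J^T e,
        where J is the Jacobian of the network errors e with respect to the weights w."""
        H = J.T @ J + mu * np.eye(J.shape[1])
        return w - np.linalg.solve(H, J.T @ e)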

Study on Performance of Wigner-Ville Distribution for Linear FM and Transient Signal Analysis

This paper presents methods to assess the performance of the Wigner-Ville Distribution (WVD) for the time-frequency representation of non-stationary signals, in comparison with other representations such as the STFT and the spectrogram. The simultaneous time-frequency resolution of the WVD is one of the important properties that make it preferable for the analysis and detection of linear FM and transient signals. Two algorithms are proposed here to assess the resolution and to compare signal-detection performance. The first method is based on measuring the area under the time-frequency plot and is used for linear FM signal analysis. The second method is based on instantaneous power calculation and is used for transient, non-stationary signals. The implementation of both methods is explained briefly with suitable diagrams. The accuracy of the measurements is validated to show the better performance of the WVD representation in comparison with the STFT and spectrograms.
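
A compact sketch of a discrete pseudo Wigner-Ville distribution of the kind used for such comparisons; this plain implementation uses no smoothing window, and for real signals the analytic signal (e.g. via a Hilbert transform) should be used to reduce cross-terms:

    import numpy as np

    def wigner_ville(x):
        """Pseudo Wigner-Ville distribution W[k, n] of a (preferably analytic) signal x.
        Rows are frequency bins (bin k corresponds to normalized frequency k / (2N)),
        columns are time samples."""
        x = np.asarray(x, dtype=complex)
        N = len(x)
        W = np.zeros((N, N))
        for n in range(N):
            tau_max = min(n, N - 1 - n)
            tau = np.arange(-tau_max, tau_max + 1)
            kernel = np.zeros(N, dtype=complex)
            kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
            W[:, n] = np.real(np.fft.fft(kernel))   # DFT over the lag variable
        return W

    # Toy linear-FM (chirp) example on an analytic signal.
    t = np.arange(256)
    chirp = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0005 * t ** 2))
    tfr = wigner_ville(chirp)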

Learning to Order Terms: Supervised Interestingness Measures in Terminology Extraction

Term extraction, a key data preparation step in text mining, extracts terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic-algorithms and decision-trees are terms associated with the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting or not interesting. From these examples, the ROGER algorithm learns a numerical function inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (Area Under the ROC Curve). This approach uses a particular representation for the word collocations, namely the vector of values of the standard statistical interestingness measures attached to each collocation. As this representation is general (over corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing good robustness of the approach compared to the state-of-the-art Support Vector Machine (SVM).
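
The fitness being maximised, the Area Under the ROC Curve, can be estimated directly from the learned ranking as a Mann-Whitney statistic; a minimal sketch with toy scores:

    def auc(pos_scores, neg_scores):
        """P(score of a random interesting collocation > score of an uninteresting one); ties count 1/2."""
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos_scores for n in neg_scores)
        return wins / (len(pos_scores) * len(neg_scores))

    print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1]))   # 0.9166...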

Optimal Placement of FACTS Devices by Genetic Algorithm for the Increased Loadability of a Power System

This paper presents a Genetic Algorithm (GA) based approach for the allocation of FACTS (Flexible AC Transmission System) devices for the improvement of power transfer capacity in an interconnected power system. The GA based approach is applied to the IEEE 30-bus system. The system is reactively loaded starting from the base case to 200% of the base load. FACTS devices are installed at different locations of the power system, and system performance is observed with and without FACTS devices. First, the locations where the FACTS devices are to be placed are determined by calculating the active and reactive power flows in the lines. A Genetic Algorithm is then applied to find the magnitudes of the FACTS devices. The results obtained clearly show that this GA based placement of FACTS devices is highly beneficial in terms of both performance and economy.

Design and Implementation of a Hybrid Fuzzy Controller for a High-Performance Induction Motor

This paper proposes an effective algorithmic approach to hybrid control systems combining fuzzy logic and conventional control techniques for controlling the speed of an induction motor assumed to operate in a high-performance drives environment. Introducing fuzzy logic into the control system helps to achieve good dynamic response, disturbance rejection and low sensitivity to parameter variations and external influences. Some fundamentals of fuzzy logic control are first illustrated. The developed control algorithm is robust, efficient and simple. It also assures precise trajectory tracking with the prescribed dynamics. Experimental results have shown excellent tracking performance of the proposed control system, and have convincingly demonstrated the validity and usefulness of the hybrid fuzzy controller in high-performance drives with parameter and load uncertainties. Satisfactory performance was observed for most reference tracks.

Modeling and Optimization of Aggregate Production Planning - A Genetic Algorithm Approach

The Aggregate Production Plan (APP) is a schedule of the organization's overall operations over a planning horizon that satisfies demand while minimizing costs. It is the baseline for any further planning and for formulating the master production schedule and the resource, capacity and raw-material plans. This paper presents a methodology for modeling the Aggregate Production Planning problem, which is combinatorial in nature, and optimizing it with Genetic Algorithms. This is done considering a multitude of constraints of contradictory nature and an optimization criterion of overall cost, made up of production, workforce, inventory and subcontracting costs. A case study of substantial size, used to develop the model, is presented along with the genetic operators.
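
A sketch of the kind of cost function such a GA would minimise; the cost coefficients, the penalty for unmet demand and the toy plan are illustrative assumptions, not the case-study data:

    def app_cost(plan, demand,
                 c_prod=10.0, c_hold=2.0, c_labor=5.0, c_sub=14.0, penalty=1000.0):
        """Total cost of an aggregate production plan.
        plan: list of (production, workforce, subcontracted) per period."""
        cost, inventory, prev_workforce = 0.0, 0.0, None
        for (produced, workforce, subcontracted), dem in zip(plan, demand):
            inventory += produced + subcontracted - dem
            cost += c_prod * produced + c_sub * subcontracted + c_labor * workforce
            if prev_workforce is not None:
                cost += c_labor * abs(workforce - prev_workforce)   # hiring/firing proxy
            cost += c_hold * max(inventory, 0.0) + penalty * max(-inventory, 0.0)
            prev_workforce = workforce
        return cost

    # Two-period toy plan: (production, workforce, subcontracting) against demand [100, 120].
    print(app_cost([(110, 8, 0), (115, 9, 10)], [100, 120]))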

Low Energy Method for Data Delivery in Ubiquitous Network

Recent advances in wireless sensor networks have led to many routing methods designed for energy efficiency in wireless sensor networks. Although many routing methods have been proposed for ubiquitous sensor networks (USNs), a single routing method cannot remain energy-efficient if the environment of the ubiquitous sensor network varies. We also consider controlling network access to the various hosts and the services they offer, rather than securing them one by one, within a network security model. When ubiquitous sensor networks are deployed in hostile environments, an adversary may compromise some sensor nodes and use them to inject false sensing reports. False reports can lead not only to false alarms but also to the depletion of the limited energy resources of battery-powered networks. The interleaved hop-by-hop authentication scheme detects such false reports through interleaved authentication. This paper presents the LMDD (Low energy Method for Data Delivery) algorithm, which provides energy efficiency by dynamically changing the protocols installed at the sensor nodes. The algorithm changes protocols based on the output of fuzzy logic, which represents the fitness level of the protocols for the environment.

Low Cost Chip Set Selection Algorithm for Multi-way Partitioning of Digital System

This paper considers the problem of finding a low-cost chip set for a minimum-cost partitioning of large logic circuits. Chip sets are selected from a given library, in which each chip has a different price, area, and I/O pin count. We propose a low-cost chip set selection algorithm. The inputs to the algorithm are a netlist and the chip information from the library. The output is a list of chip sets that satisfy the area and maximum partition-count constraints, sorted by cost from minimum to maximum. We used MCNC benchmark circuits for the experiments. The experimental results show that all of the chip sets found satisfy the multi-way partitioning constraints.
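
A simplified sketch of the enumeration idea; the chip library, the single aggregate area figure and the exhaustive search are illustrative assumptions, since the real algorithm works on a netlist and partitions it across the chips:

    from itertools import combinations_with_replacement

    # Toy chip library: name -> (cost, usable area, I/O pins).
    library = {"A": (4.0, 120, 64), "B": (7.0, 250, 96), "C": (12.0, 500, 160)}

    def feasible_chip_sets(required_area, max_parts):
        """List chip sets (multisets of chips) whose total area covers the design,
        using at most max_parts chips, sorted from cheapest to most expensive."""
        results = []
        for n_parts in range(1, max_parts + 1):
            for combo in combinations_with_replacement(library, n_parts):
                area = sum(library[c][1] for c in combo)
                cost = sum(library[c][0] for c in combo)
                if area >= required_area:
                    results.append((cost, combo))
        return sorted(results)

    print(feasible_chip_sets(required_area=600, max_parts=3)[:5])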