Abstract: This paper presents a face recognition system that applies a neural-network classifier to low-resolution images. The proposed system has two parts: preprocessing and face classification. The preprocessing part blurs the original image with an average filter and equalizes its histogram (lighting normalization). Bicubic interpolation is then applied to the equalized image to produce a resized, low-resolution image, which allows faster training and testing. The preprocessed image is the input to the neural-network classifier, which uses the back-propagation algorithm to recognize familiar faces. The strength of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function used in our work is gradient descent with momentum and adaptive learning rate back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 s per image.
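The preprocessing pipeline described in this abstract (average-filter blur, histogram equalization, bicubic downsampling) can be sketched as follows; the 3x3 filter size and the 23x28 target resolution are illustrative assumptions, not values taken from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def preprocess(img, out_shape=(23, 28)):
    """Blur, equalize and downsample one grayscale face image.

    img: 2-D uint8 array; out_shape is the target low resolution
    (an illustrative value, not taken from the paper).
    """
    # 1. Average (mean) filter: 3x3 neighborhood blur.
    blurred = uniform_filter(img.astype(float), size=3)

    # 2. Histogram equalization for lighting normalization.
    hist, bins = np.histogram(blurred.ravel(), bins=256, range=(0, 255))
    cdf = hist.cumsum()
    cdf = 255.0 * cdf / cdf[-1]          # normalize the CDF to [0, 255]
    equalized = np.interp(blurred.ravel(), bins[:-1], cdf).reshape(img.shape)

    # 3. Bicubic (order-3 spline) interpolation down to low resolution.
    factors = (out_shape[0] / img.shape[0], out_shape[1] / img.shape[1])
    return zoom(equalized, factors, order=3)

# ORL images are 112x92 pixels; a random image stands in here.
face = np.random.default_rng(0).integers(0, 256, (112, 92)).astype(np.uint8)
low_res = preprocess(face)
print(low_res.shape)   # → (23, 28)
```

The small output array is what would be flattened and fed to the three-layer network; any other target resolution could be substituted.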
Abstract: With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques for mining data regions from web pages remain unsatisfactory because of their poor performance and tag dependence. In this paper a novel method to extract data items from web pages automatically is proposed. It comprises two steps: (1) identification and extraction of data regions based on visual-clue information; (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method based on visual clues is proposed, which finds the data regions formed by all types of tags. For step 2, a more effective method, Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. EDIP is a list-based approach in which the list is a linear data structure. The proposed technique can mine non-contiguous data records and correctly identifies data regions irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than existing techniques.
Abstract: Over the years, many implementations have been
proposed for solving interval algebra (IA) networks. These implementations are
concerned with finding a solution efficiently. The primary goal of
our implementation is simplicity and ease of use.
We present an IA network implementation based on finite-domain
non-binary CSPs and constraint logic programming. The
implementation has a GUI which permits the drawing of arbitrary IA
networks. We then show how the implementation can be extended to
find all the solutions to an IA network. One application of finding all
the solutions is solving probabilistic IA networks.
Abstract: Interactions among proteins are the basis of various
life events, so it is important to recognize and study protein
interaction sites. A control set containing 149 protein molecules
was used here. Ten features were then extracted, and 4 sample
sets, each containing 9 sliding windows, were made according to
those features. The 4 sample sets were processed by radial basis
function neural networks, each optimized by particle swarm
optimization. The 4 groups of results thus obtained were finally
integrated by decision fusion (DF) and Genetic Algorithm based
Selective Ensemble (GASEN). Better accuracy was obtained with DF
and GASEN, so the integrated methods proved to be effective.
Abstract: Because of increasing demands for security in today's
society, and because of the growing attention paid to machine
vision, biometric research, pattern recognition and data retrieval
in color images, face detection has found more applications. In
this article we present a scientific approach to modeling human
skin color, and also offer an algorithm that tries to detect faces
within color images by combining skin features with a threshold
determined in the model. The proposed model is based on
statistical data in different color spaces. The offered algorithm,
using a specified color threshold, first divides the image pixels
into two groups, skin pixels and non-skin pixels, and then, based
on some geometric features of the face, decides which area belongs
to the face.
The two main results of this research are as follows: first, the
proposed model can easily be applied to different databases and
color spaces to establish a proper threshold. Second, our
algorithm can adapt itself to runtime conditions, and its results
demonstrate desirable progress in comparison with similar work.
Abstract: In this paper a unified approach via block-pulse functions (BPFs) or shifted Legendre polynomials (SLPs) is presented to solve the linear-quadratic-Gaussian (LQG) control problem. A recursive algorithm is also proposed to solve the above problem via BPFs. These computationally attractive algorithms are developed by using the elegant operational properties of orthogonal functions (BPFs or SLPs). To demonstrate the validity of the proposed approaches, a numerical example is included.
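The kind of operational property such methods exploit can be illustrated by the standard BPF operational matrix of integration (a textbook result, not specific to this paper): with m block-pulse functions φ(t) = (φ₁(t), …, φ_m(t))ᵀ on [0, T),

```latex
\int_0^t \phi(\tau)\,\mathrm{d}\tau \;\approx\; P\,\phi(t),
\qquad
P \;=\; \frac{T}{m}
\begin{pmatrix}
\tfrac{1}{2} & 1            & \cdots & 1 \\
0            & \tfrac{1}{2} & \cdots & 1 \\
\vdots       &              & \ddots & \vdots \\
0            & 0            & \cdots & \tfrac{1}{2}
\end{pmatrix},
```

so integration of a BPF expansion reduces to multiplication by a known triangular matrix, which is what makes the resulting algorithms recursive and computationally attractive.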
Abstract: Many studies have shown that parallelization decreases efficiency [1], [2]. There are many reasons for these decrements. This paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be divided into packages of identical size (i.e., tasks of identical complexity), because unknown, heterogeneous input data result in variable task lengths. Process delay is determined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study shows that while process delay initially increases with the introduction of more nodes, it ultimately decreases again after a certain point. The example uses the cloud-computing platform Hadoop and is run inside Amazon's EC2 compute cloud. A stochastic model is set up which can explain this effect.
Abstract: The security of power systems against malicious
cyber-physical data attacks has become an important issue. The
adversary attempts to manipulate the information structure of the
power system and inject malicious data to deviate state variables
while evading existing detection techniques based on the residual
test. The solutions proposed in the literature can immunize the
power system against false data injection, but they may be too
costly and physically impractical in an expansive distribution
network. To this end, we define an algebraic condition for a
trustworthy power system to evade malicious data injection. The
proposed protection scheme secures the power system by
deterministically reconfiguring the information structure and the
corresponding residual test. More importantly, it requires no
physical effort at either the microgrid or the network level. An
identification scheme for finding the attacked meters is proposed
as well. Finally, the well-known IEEE 30-bus system is adopted to
demonstrate the effectiveness of the proposed schemes.
Abstract: Artificial immune system algorithms are metaheuristic
optimization methods that are widely used for clustering and
pattern recognition applications. For multimodal optimization
problems they are more efficient than genetic algorithms. Their
major drawbacks are slow convergence to the global optimum and
weak stability across repeated runs. In this paper, an improved
artificial immune system algorithm is introduced for the first
time to overcome these problems: a small local search around the
memory antibodies is used to improve the algorithm's efficiency.
The credibility of the proposed approach is evaluated by
simulations, and it is shown that it achieves better results than
the standard artificial immune system algorithms.
Abstract: The paper discusses the complexity of component-based
development (CBD) of embedded systems. Although CBD has its
merits, it must be augmented with methods to control the
complexities that arise due to resource constraints, timeliness,
and run-time deployment of components in embedded system
development. Software component specification, system-level
testing, and run-time reliability measurement are some ways to
control this complexity.
Abstract: An additive fuzzy system comprising m rules with
n inputs and p outputs in each rule has at least m(2n + 2p + 1)
parameters that need to be tuned. The system consists of a large
number of if-then fuzzy rules and takes a long time to tune its
parameters, especially when the amount of training data is large.
In this paper, a new learning strategy is investigated to cope
with this obstacle: parameters that tend toward constant values
during learning are fixed early and are not tuned for the rest of
the learning time. Experiments applying the additive fuzzy system
to function approximation demonstrate that the proposed approach
reduces the learning time and hence improves convergence speed
considerably.
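The parameter count above, and the idea of freezing parameters that stop changing, can be sketched with a toy example (the freezing rule and all numbers here are our own illustrative choices, not the paper's):

```python
import numpy as np

def param_count(m, n, p):
    """Tunable parameters in an additive fuzzy system with m rules,
    n inputs and p outputs: m(2n + 2p + 1), as stated above."""
    return m * (2 * n + 2 * p + 1)

print(param_count(10, 4, 2))  # → 130

# Freezing heuristic: stop tuning any parameter whose most recent
# update falls below a tolerance, so later iterations touch fewer
# parameters. Toy gradient descent with a decaying gradient.
rng = np.random.default_rng(1)
theta = rng.normal(size=20)              # toy parameter vector
frozen = np.zeros(theta.size, dtype=bool)
tol = 1e-3
for step in range(100):
    grad = 0.9 ** step * rng.normal(size=theta.size)  # shrinks over time
    delta = 0.1 * grad
    delta[frozen] = 0.0                  # frozen parameters stay fixed
    theta -= delta
    frozen |= np.abs(delta) < tol        # freeze near-constant parameters
print(int(frozen.sum()), "of", theta.size, "parameters frozen")
```

Once a parameter is frozen its update cost disappears for all remaining iterations, which is the source of the claimed learning-time reduction.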
Abstract: This paper studies the dependability of component-based
applications, especially embedded ones, from the diagnosis point
of view. The principle of the diagnosis technique is to implement
inter-component tests in order to detect and locate faulty
components without redundancy. The proposed approach for
diagnosing faulty components has two main aspects. The first,
which is the subject of this paper, concerns the execution of the
inter-component tests and requires integrating test functionality
within a component. The second is the diagnosis process itself,
which analyzes the inter-component test results to determine the
fault state of the whole system. The advantages of this diagnosis
method over classical redundancy-based fault-tolerant techniques
are application autonomy, cost-effectiveness and better usage of
system resources. These advantages are very important for many
systems, and especially for embedded ones.
Abstract: Sensor relocation repairs coverage holes caused by node failures. One way to repair coverage holes is to find redundant nodes to replace the faulty ones. Most studies take a long time to find redundant nodes because they randomly scatter the redundant nodes around the sensing field. To record the precise positions of sensor nodes, most studies also assume that GPS is installed in the sensor nodes; however, the high cost and power consumption of GPS are heavy burdens for sensor nodes. Thus, we propose a fast sensor relocation algorithm that arranges redundant nodes into redundant walls without GPS. A redundant wall is constructed at the position whose average distance to each sensor node is shortest, so it can guide sensor nodes to redundant nodes in minimum time. Simulation results show that our algorithm finds the proper redundant node in minimum time and reduces the relocation time with low message complexity.
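The position with the shortest average distance to all sensor nodes is the geometric median of the node positions; a standard way to compute it is Weiszfeld's algorithm (our illustrative choice — the paper's own construction may differ):

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld iteration: the point minimizing the mean Euclidean
    distance to `points` (an (N, 2) array of node positions)."""
    y = points.mean(axis=0)              # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)           # guard against division by zero
        w = 1.0 / d                      # inverse-distance weights
        y = (points * w[:, None]).sum(axis=0) / w.sum()
    return y

# Four nodes at the corners of a square: the median is the center.
nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
print(geometric_median(nodes))   # → [5. 5.]
```

Each iteration is a weighted average, so the computation needs only node coordinates, with no GPS-grade precision required.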
Abstract: Considering payload, reliability, security and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest transmitting halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the Turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then present the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the Turbo-based stego system are analyzed. Using some entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that of one-time pads (OTPs).
Abstract: A robust wheel slip controller for electric vehicles is
introduced. The proposed wheel slip controller exploits the dynamics
of electric traction drives and conventional hydraulic brakes for
achieving maximum energy efficiency and driving safety. Due to
the control of single-wheel traction motors in combination with a
hydraulic braking system, it can be shown that energy recuperation
and vehicle stability control can be realized simultaneously. The
derivation of a sliding mode wheel slip controller accessing two
drivetrain actuators is outlined and a comparison to a conventionally
braked vehicle is shown by means of simulation.
Abstract: The Petri net (PN) has proven to be an effective graphical, mathematical, simulation and control tool for discrete event systems (DES). But with the growth in the complexity of modern industrial and communication systems, PNs have proven inadequate to address problems of uncertainty and imprecision in data. This gave rise to the amalgamation of fuzzy logic with Petri nets, and a new tool emerged under the name of fuzzy Petri nets (FPN). Although much research has been done on FPNs and a number of their applications have been proposed, their basic types and structure are still ambiguous. Therefore, in this work an effort is made to categorize FPNs according to their structure and algorithms. Further, the literature on applications of FPNs is reviewed in the light of this classification.
Abstract: We discuss the application of matching in the area of resource discovery and resource allocation in grid computing. We present a formal definition of matchmaking, overview algorithms to evaluate different matchmaking expressions, and develop a matchmaking service for an intelligent grid environment.
Abstract: The problem of ranking (rank regression) has become popular in the machine learning community. This theory relates to problems in which one has to predict (guess) the order between objects on the basis of vectors describing their observed features. Many ranking algorithms use a convex loss function instead of the 0-1 loss, which makes these procedures computationally efficient. Hence, convex risk minimizers and their statistical properties are investigated in this paper. Fast rates of convergence are obtained under conditions similar to those from classification theory. The methods used in this paper come from the theory of U-processes as well as empirical processes.
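The substitution of a convex surrogate for the 0-1 loss can be made concrete for pairwise ranking: for a pair where object i should rank above object j, the 0-1 loss 1[w·(x_i − x_j) ≤ 0] is replaced by the convex logistic loss log(1 + exp(−w·(x_i − x_j))), which gradient descent can minimize. The synthetic data, logistic surrogate and step size below are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])      # hidden scoring direction
X = rng.normal(size=(50, 3))
scores = X @ w_true                      # latent "relevance" scores

# All ordered pairs (i ranked above j) and their difference vectors.
pairs = [(i, j) for i in range(50) for j in range(50) if scores[i] > scores[j]]
D = np.array([X[i] - X[j] for i, j in pairs])

# Gradient descent on the mean logistic loss over pairs:
# d/dm log(1 + exp(-m)) = -1 / (1 + exp(m)).
w = np.zeros(3)
for _ in range(200):
    margins = D @ w
    grad = -(D * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.5 * grad

# Pairwise 0-1 risk of the learned scorer on the training pairs.
err = np.mean((D @ w) <= 0)
print(f"pairwise error: {err:.3f}")
```

The convexity of the surrogate is what makes this minimization tractable; the statistical question studied in the paper is how fast the risk of such minimizers converges.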
Abstract: In many applications, data has a graph structure, which
can be naturally represented as graph-structured XML. Existing
queries defined on tree-structured and graph-structured XML data
mainly focus on subgraph matching, which cannot cover all the
requirements of querying on graphs. In this paper, a new kind of
query, the topological query on graph-structured XML, is
presented. This kind of query considers not only the structure of
a subgraph but also the topological relationships between
subgraphs. Building on existing subgraph query processing
algorithms, efficient algorithms for topological query processing
are designed. Experimental results show the efficiency of the
implemented algorithms.
Abstract: In this paper, we present a novel technique called the Self-Learning Expert System (SLES). Unlike an expert system, where an expert is needed to impart experiences and knowledge to create the knowledge base, this technique tries to acquire the experience and knowledge automatically. To display this technique at work, a simulation of a mobile robot navigating through an environment with obstacles is implemented in Visual Basic. The mobile robot moves through this area without colliding with any obstacle and saves the path that it took. If the mobile robot has to go through a similar environment again, it applies this experience to move through more quickly without having to check for collisions.
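The core "save a successful path, reuse it in a similar environment" behavior can be sketched as follows (the grid world, BFS planner and exact-match cache key are our simplifications; the paper's simulation is in Visual Basic and richer than this):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a grid; cells equal to 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                 # visited set + parent pointers
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:               # reconstruct path back to start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None

knowledge_base = {}   # learned experience: environment -> known-good path

def navigate(grid, start, goal):
    key = (tuple(map(tuple, grid)), start, goal)
    if key in knowledge_base:             # environment seen before:
        return knowledge_base[key], True  # reuse the path, skip the search
    path = bfs_path(grid, start, goal)
    knowledge_base[key] = path            # save the experience
    return path, False

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
p1, cached1 = navigate(grid, (0, 0), (2, 0))  # first visit: plans a path
p2, cached2 = navigate(grid, (0, 0), (2, 0))  # repeat visit: cache hit
print(len(p1), cached1, cached2)   # → 7 False True
```

The second call skips collision checking and search entirely, which is the speed-up the abstract describes for revisited environments.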