Abstract: This paper presents a face recognition system that applies a neural-network classifier to low-resolution images. The proposed system consists of two parts: preprocessing and face classification. The preprocessing part blurs the original image with an average filter and equalizes its histogram (lighting normalization). A bi-cubic interpolation function is then applied to the equalized image to produce a resized, low-resolution image, which speeds up both training and testing. The preprocessed image becomes the input to the neural-network classifier, which uses the back-propagation algorithm to recognize familiar faces. The strength of the proposed algorithm lies in its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. This network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions, respectively. The training function used in our work is gradient descent with momentum and adaptive learning rate back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results give accuracies of 94.50%, 93.00%, and 90.25% for 20, 30, and 40 subjects respectively, with a time delay of 0.0934 sec per image.
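The preprocessing chain described above (average filtering, histogram equalization, bicubic downsampling) can be sketched as follows. The 3x3 filter size and the 112x92-to-28x23 resizing are illustrative assumptions, not figures from the paper, and a cubic-spline zoom stands in for bicubic interpolation:

```python
import numpy as np
from scipy import ndimage

def preprocess(img, out_shape=(28, 23)):
    """Blur -> histogram equalization -> low-resolution resize."""
    img = img.astype(np.float64)
    # 1. Average (box) filter produces the blurry image.
    blurred = ndimage.uniform_filter(img, size=3)
    # 2. Histogram equalization for lighting normalization.
    hist, bins = np.histogram(blurred, bins=256, range=(0, 255))
    cdf = hist.cumsum() / blurred.size
    eq = np.interp(blurred.ravel(), bins[:-1], cdf * 255.0).reshape(img.shape)
    # 3. Cubic-spline zoom as a stand-in for bicubic interpolation.
    zoom = (out_shape[0] / img.shape[0], out_shape[1] / img.shape[1])
    return ndimage.zoom(eq, zoom, order=3)

# An ORL-sized (112x92) random stand-in for a face image.
face = np.random.default_rng(0).integers(0, 256, (112, 92))
small = preprocess(face)
print(small.shape)  # (28, 23)
```

The low-resolution output would then be flattened into the input vector of the three-layer network.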
Abstract: X-ray mammography is the most effective method for
the early detection of breast diseases. However, the typical diagnostic
signs such as microcalcifications and masses are difficult to detect
because mammograms have low contrast and are noisy. In this paper, a
new algorithm for image denoising and enhancement in Orthogonal
Polynomials Transformation (OPT) is proposed for radiologists to
screen mammograms. In this method, a set of OPT edge coefficients
is scaled to a new set by a scale factor called the OPT scale
factor. The new set of coefficients is then inverse transformed,
resulting in a contrast-improved image. Applications of the
proposed method to
mammograms with subtle lesions are shown. To validate the
effectiveness of the proposed method, we compare the results to
those obtained by the Histogram Equalization (HE) and the Unsharp
Masking (UM) methods. Our preliminary results strongly suggest
that the proposed method offers considerably improved enhancement
capability over the HE and UM methods.
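The enhancement step, scaling transform-domain coefficients and inverse-transforming the result, can be sketched as below. Since the OPT basis itself is not given in the abstract, a 2-D DCT stands in for it, and the scale factor of 1.8 is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.fft import dctn, idctn

def enhance(img, scale=1.8):
    coeffs = dctn(img.astype(np.float64), norm="ortho")
    dc = coeffs[0, 0]        # keep overall brightness unchanged
    coeffs *= scale          # boost edge/detail coefficients
    coeffs[0, 0] = dc
    return idctn(coeffs, norm="ortho")

img = np.outer(np.linspace(0, 255, 64), np.ones(64))  # smooth test ramp
out = enhance(img)
print(abs(img.mean() - out.mean()) < 1e-6, out.std() > img.std())  # True True
```

Because the non-DC basis functions have zero mean, scaling them raises contrast (standard deviation) while leaving mean brightness untouched, which is the behavior the paper exploits.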
Abstract: Because of increasing demands for security in today's
society, and growing attention to machine vision, biometric
research, pattern recognition, and data retrieval in color images,
face detection has found ever more applications. In this article we
present a scientific approach to modeling human skin color, and
offer an algorithm that detects faces within color images by
combining skin features with thresholds determined in the model.
The proposed model is based on statistical data in different color
spaces. Using specified color thresholds, the algorithm first
divides image pixels into two groups, skin pixels and non-skin
pixels, and then, based on geometric features of the face, decides
which areas belong to a face.
Two main results emerged from this research: first, the proposed
model can easily be applied to different databases and color spaces
to establish a proper threshold. Second, our algorithm can adapt
itself to runtime conditions, and its results show desirable
progress in comparison with similar work.
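The first stage of such an algorithm, thresholding pixels into skin and non-skin groups, might look like the following sketch. The RGB rule used here is a common illustrative heuristic, not the statistically derived thresholds of the paper:

```python
import numpy as np

def skin_mask(rgb):
    """Label each pixel as skin (True) or non-skin (False)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (r - np.minimum(g, b) > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

pixels = np.array([[[200, 120, 90],     # skin-like tone
                    [30, 60, 200]]],    # blue background
                  dtype=np.uint8)
print(skin_mask(pixels)[0])  # [ True False]
```

Connected regions of the resulting mask would then be screened by the geometric face criteria described above.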
Abstract: Based on fuzzy set theory, this work develops two
adaptations of iterative methods that solve mathematical
programming problems with uncertainties in the objective function and in
the set of constraints. The first one uses the approach proposed by
Zimmermann to fuzzy linear programming problems as a basis and
the second one obtains cut levels and later maximizes the membership
function of fuzzy decision making using the bound search method.
We outline similarities between the two iterative methods studied.
Selected examples from the literature are presented to validate the
efficiency of the methods addressed.
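Zimmermann's approach, which the first method builds on, turns a fuzzy LP into a crisp problem that maximizes a satisfaction level lambda. A toy instance (numbers invented for illustration) can be solved with an off-the-shelf LP solver:

```python
from scipy.optimize import linprog

# Fuzzy problem: max x1 + x2
#   s.t. x1 + 2*x2 <~ 4 (tolerance 2), 3*x1 + x2 <~ 6 (tolerance 2).
# z_min = 2.8 is the optimum under strict constraints, z_max = 4.0 under
# fully relaxed ones; the crisp equivalent maximizes lambda subject to
#   x1 + x2 >= z_min + lambda*(z_max - z_min),
#   a_i . x <= b_i + (1 - lambda)*p_i,   0 <= lambda <= 1.
z_min, z_max = 2.8, 4.0
res = linprog(
    c=[0, 0, -1],                       # variables [x1, x2, lam]; max lam
    A_ub=[[-1, -1, z_max - z_min],      # objective membership constraint
          [1, 2, 2],                    # x1 + 2*x2 + 2*lam <= 4 + 2
          [3, 1, 2]],                   # 3*x1 + x2 + 2*lam <= 6 + 2
    b_ub=[-z_min, 6, 8],
    bounds=[(0, None), (0, None), (0, 1)],
)
print(round(res.x[2], 4))  # 0.5, attained at x = (1.8, 1.6)
```

The second method's cut-level variant would instead fix lambda at successive alpha-cuts and re-solve, searching for the highest feasible membership.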
Abstract: Adaptive power control of Code Division Multiple Access
(CDMA) communications using Remote Radio Heads (RRHs) between
multiple Unmanned Aerial Vehicles (UAVs), with a link-budget based
Signal-to-Interference Ratio (SIR) estimate, is applied to four
inner-loop power control algorithms. It is concluded that the Base
Station (BS) can calculate not only the UAV distance, using the
linearity between speed and the Consecutive Transmit-Power-Control
Ratio (CTR) of Adaptive Step-size Closed Loop Power Control
(ASCLPC), Consecutive TPC Ratio Step-size Closed Loop Power Control
(CS-CLPC), and Fixed Step-size Power Control (FSPC), but also the
UAV position, from the Received Signal Strength Indicator (RSSI)
ratio of the RRHs.
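The simplest of the inner-loop schemes, FSPC, commands a fixed up/down power step each TPC interval. A toy sketch with an invented static link budget (no interference or fading) illustrates the idea:

```python
def fspc(sir_target_db, channel_gain_db, steps=200, step_db=1.0, tx_db=0.0):
    """Fixed Step-size Power Control: +/- step_db per TPC command."""
    trace = []
    for _ in range(steps):
        sir_db = tx_db + channel_gain_db      # toy link budget
        tx_db += step_db if sir_db < sir_target_db else -step_db
        trace.append(sir_db)
    return trace

trace = fspc(sir_target_db=7.0, channel_gain_db=-80.0)
print(abs(trace[-1] - 7.0) <= 1.0)  # True: settles within one step of target
```

The adaptive-step variants (ASCLPC, CS-CLPC) differ in how step_db is adjusted over consecutive TPC commands, which is what makes the CTR speed-linearity usable for distance estimation.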
Abstract: In this paper, a new learning approach for network
intrusion detection using a naïve Bayesian classifier and the ID3
algorithm is presented, which identifies effective attributes from
the training dataset, calculates the conditional probabilities for
the best attribute values, and then correctly classifies the
examples of the training and testing datasets. Most current
intrusion detection datasets are dynamic, complex, and contain a
large number of attributes, some of which may be redundant or
contribute little to detection. It has been shown that selecting
significant attributes is important in designing a real-world
intrusion detection system (IDS). The purpose of this study is to
identify effective attributes from the training dataset in order to
build a classifier for network intrusion detection using data
mining algorithms. The experimental results on the KDD99 benchmark
intrusion detection dataset demonstrate that this new approach
achieves high classification rates and reduces false positives
using limited computational resources.
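The classification half of the approach rests on naïve Bayes over categorical attributes. A minimal sketch on invented connection records (not KDD99 data, and with a simple smoothing choice of our own) looks like this:

```python
from collections import Counter, defaultdict

def train(records, labels):
    classes = Counter(labels)
    cond = defaultdict(Counter)      # (class, attr index) -> value counts
    for rec, y in zip(records, labels):
        for i, v in enumerate(rec):
            cond[(y, i)][v] += 1
    return classes, cond

def classify(rec, classes, cond, total):
    def score(y):
        p = classes[y] / total                   # class prior
        for i, v in enumerate(rec):
            seen = cond[(y, i)]
            # Laplace-style smoothing so unseen values get nonzero mass.
            p *= (seen[v] + 1) / (sum(seen.values()) + len(seen) + 1)
        return p
    return max(classes, key=score)

X = [("tcp", "http"), ("tcp", "http"), ("icmp", "ecr_i"), ("icmp", "ecr_i")]
y = ["normal", "normal", "attack", "attack"]
model = train(X, y)
print(classify(("icmp", "ecr_i"), *model, total=len(y)))  # attack
```

In the paper, the ID3 information-gain ranking would first prune the attribute set before these conditional probabilities are estimated.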
Abstract: Supply Chain Management (SCM) is the integration of
manufacturer, transporter, and customer to form one seamless chain
that allows a smooth flow of raw materials, information, and
products throughout the entire network, helping to minimize all
related efforts and costs. The main objective of this paper is to
develop a model that can accept a specified number of spare parts
within the supply chain and simulate its inventory operations
throughout all stages, in order to minimize the inventory holding
costs, base stock, and safety stock, and to find the optimum
inventory levels, thereby suggesting a way to adapt some factors of
Just-In-Time to minimize inventory costs throughout the entire
supply chain. The model has been developed using Microsoft Excel
and Visual Basic in order to study inventory allocation in any
network of the supply chain. The applicability and reproducibility
of this model were tested by comparing the actual system
implemented in the case study with the results of the developed
model. The findings showed that the total inventory costs of the
developed model are about 50% less than the actual costs of the
inventory items within the case study.
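The kind of period-by-period inventory simulation the model performs can be sketched as an order-up-to (base-stock) loop. The demand stream, cost rate, and stock levels below are illustrative, not the case-study data:

```python
def simulate(base_stock, demands, holding_cost=1.0):
    """Order-up-to policy: replenish to base_stock every period."""
    total_holding, backorders = 0.0, 0
    on_hand = base_stock
    for d in demands:
        on_hand -= d
        if on_hand < 0:                  # unmet demand is backordered
            backorders += -on_hand
            on_hand = 0
        total_holding += holding_cost * on_hand
        on_hand = base_stock             # replenishment arrives
    return total_holding, backorders

demands = [5, 8, 3, 12, 6]
cost_hi, _ = simulate(base_stock=20, demands=demands)
cost_lo, short = simulate(base_stock=12, demands=demands)
print(cost_hi, cost_lo, short)  # 66.0 26.0 0
```

Searching for the smallest base stock that keeps backorders at zero is one simple way to trade holding cost against service level, the trade-off the spreadsheet model explores across the whole chain.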
Abstract: Using spatial models as a shared common basis of
information about the environment for different kinds of
context-aware systems has been a heavily researched topic in recent
years. That research has focused on how to create, update, and
merge spatial models so as to enable highly dynamic, consistent,
and coherent spatial models at large scale. In this paper, however,
we concentrate on how context-aware applications could use this
information to adapt their behavior to the situation they are in.
The main idea is to provide the spatial model infrastructure with a
situation recognition component based on generic situation
templates. A situation template is, as part of a much larger
situation template library, an abstract, machine-readable
description of a certain basic situation type, which can be used by
different applications to evaluate their situation. In this paper,
different theoretical and practical issues (technical, ethical, and
philosophical ones) that are important for understanding and
developing situation-dependent systems based on situation templates
are discussed. A basic system design is presented which allows for
reasoning with uncertain data using an improved version of a
learning algorithm for the automatic adaptation of situation
templates. Finally, to support the development of adaptive
applications, we present a new situation-aware adaptation concept
based on workflows.
Abstract: The identification and elimination of bad measurements
is one of the basic functions of a robust state estimator, since
bad data corrupt the results of state estimation under the popular
weighted least squares method. However, this is a difficult problem
to handle, especially when dealing with multiple interacting errors
of the conforming type. In this paper, a self-adaptive genetic
algorithm is proposed. The algorithm uses the results of the
classical linearized normal residuals approach to tune the genetic
operators; thus, instead of a randomized search throughout the
whole search space, the search is directed, and the optimum
solution is obtained at very early stages (a maximum of 5
generations). The algorithm also uses accumulated databases of
already computed cases to reduce the computational burden to a
minimum. Tests are conducted on the standard IEEE test systems, and
the results are very promising.
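The directed-search idea, seeding the genetic population around the classical normal-residuals solution instead of sampling the whole space, can be sketched with a toy fitness function (the actual fitness in the paper comes from the state-estimation residuals):

```python
import random

def ga(fitness, seed_ind, pop_size=20, gens=5, sigma=0.2, rng=None):
    rng = rng or random.Random(1)
    # Seed the population around the classical (normal-residuals) estimate.
    pop = [list(seed_ind)] + [
        [g + rng.gauss(0, sigma) for g in seed_ind]
        for _ in range(pop_size - 1)
    ]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # elitist selection
        children = [
            [(x + y) / 2 + rng.gauss(0, sigma / 2)   # blend crossover + mutation
             for x, y in zip(*rng.sample(parents, 2))]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

target = [1.0, -2.0]
fit = lambda ind: -sum((a - t) ** 2 for a, t in zip(ind, target))
seed = [0.8, -1.7]                 # plays the role of the WLS estimate
best = ga(fit, seed)
print(fit(best) >= fit(seed))  # True: elitism never loses the seeded start
```

Because the search starts near a good estimate and keeps the best individuals, convergence within very few generations, as the abstract reports, is plausible.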
Abstract: The growth of the aquaculture industry has been
associated with negative environmental impacts through the
discharge of raw effluents into the adjacent receiving water bodies.
Macrophytes from natural saline lakes, which are adapted to high
salinity, can be suitable for saline effluent treatment. Eight
emergent species from a natural saline area were planted in an
experimental gravel bed hydroponic (GBH) mesocosm, which was
treated with effluent water from an intensive fish farm using
geothermal water. In order to examine the applicability of the
halophytes in treatment processes, we tested the relative efficacy of
total nitrogen (TN), total phosphorus (TP), potassium (K), sodium
(Na), magnesium (Mg) and calcium (Ca) removal for the saline
wastewater treatment. Four of the eight species, namely Phragmites
australis, Typha angustifolia, Glyceria maxima, and Scirpus
lacustris spp. tabernaemontani, could survive and contribute to the
experimental treatment.
Abstract: Cosmic showers, during their transit through space,
produce sub-products as a result of interactions with the
intergalactic or interstellar medium, which, after entering the
Earth's atmosphere, generate secondary particles called Extensive
Air Showers (EAS). Detection and analysis of high-energy particle
showers involve a plethora of theoretical and experimental work
with a host of constraints, resulting in measurement inaccuracies.
There is therefore a need for a readily available system based on
soft-computational approaches that can be used for EAS analysis,
since soft-computational tools such as Artificial Neural Networks
(ANNs) can be trained as classifiers that adapt to and learn the
surrounding variations. However, single classifiers fail to reach
optimal decisions in many situations, for which a Multiple
Classifier System (MCS) is preferred, enhancing the ability of the
system to adjust its decisions to finer variations. This work
describes the formation of an MCS using a Multi-Layer Perceptron
(MLP), a Recurrent Neural Network (RNN), and a Probabilistic Neural
Network (PNN), with data inputs from correlation-mapping
Self-Organizing Map (SOM) blocks and the output optimized by
another SOM. The results show that the setup can be adopted in
real-time practical applications for predicting the primary energy
and location of EAS from density values captured by detectors in a
circular grid.
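The benefit of a multiple-classifier system can be illustrated with plain majority voting, which stands in here for the paper's SOM-based fusion of MLP, RNN, and PNN outputs; the threshold rules below are invented stand-ins for the trained networks:

```python
from collections import Counter

def mcs_predict(classifiers, x):
    """Combine member decisions by majority vote."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy density-threshold rules for "high" vs "low" energy EAS.
clf_a = lambda d: "high" if d > 10 else "low"
clf_b = lambda d: "high" if d > 12 else "low"
clf_c = lambda d: "high" if d > 30 else "low"   # a poorly tuned member

print(mcs_predict([clf_a, clf_b, clf_c], 20))  # high: outvotes the bad rule
```

The point, as in the abstract, is that the ensemble decision tolerates an individual classifier that fails on finer variations of the input.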
Abstract: Wireless sensor networks have a wide spectrum of civil and military applications that call for secure communication, such as terrorist tracking and target surveillance in hostile environments. For secure communication in these application areas, we propose a method for generating a hierarchical key structure for efficient group key management. In this paper, we apply the A* algorithm to generate a hierarchical key structure, considering historical data on the rates of addition and eviction of sensor nodes in the location where they are deployed. The generated key tree structure provides an efficient way of managing the group key, in terms of energy consumption, when addition and eviction events occur. The A* algorithm uses the historical data to minimize the number of messages needed for group key management. Experimentation with the tree shows the efficiency of the proposed method.
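The A* search at the heart of the method can be sketched generically as below; the toy graph and zero heuristic are illustrative, whereas the paper's cost model counts rekeying messages weighted by the addition/eviction history:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A*: expands nodes in order of g + h, h admissible."""
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_set, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy weighted graph with a zero heuristic (reduces to Dijkstra).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
cost, path = a_star("A", "D", lambda n: graph[n], lambda n: 0)
print(cost, path)  # 3 ['A', 'B', 'C', 'D']
```

In the paper's setting, each search state would be a partial key tree and the edge costs the expected rekeying messages, so the returned path is the cheapest tree construction.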
Abstract: This paper addresses functional projective lag synchronization of the Lorenz system with four unknown parameters, where the output of the master system lags behind the output of the slave system proportionally. For this purpose, an adaptive control law is proposed to make the states of two identical Lorenz systems asymptotically synchronize. Based on Lyapunov stability theory, a novel criterion is given for the asymptotic stability of the null solution of the error dynamics. Finally, numerical examples are provided to show the effectiveness of our results.
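The flavor of master-slave synchronization can be reproduced numerically. The sketch below uses a simple full-state feedback coupling rather than the paper's adaptive law with unknown parameters and proportional lag, just to show the error dynamics collapsing:

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(master, slave, dt=0.001, k=100.0):
    dm, ds = lorenz(master), lorenz(slave)
    new_master = tuple(m + dt * d for m, d in zip(master, dm))
    # Full-state feedback coupling u = k*(master - slave) drives the
    # slave onto the master trajectory (Euler integration).
    new_slave = tuple(s + dt * (d + k * (m - s))
                      for s, d, m in zip(slave, ds, master))
    return new_master, new_slave

m, s = (1.0, 1.0, 1.0), (6.0, -3.0, 2.0)
for _ in range(20000):
    m, s = step(m, s)
err = sum((a - b) ** 2 for a, b in zip(m, s)) ** 0.5
print(err < 1e-3)  # True: the synchronization error collapses
```

The paper's adaptive controller additionally estimates the four unknown Lorenz parameters online, with a Lyapunov function guaranteeing that both the state error and the parameter estimates converge.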
Abstract: In traditional architecture, buildings were designed to
achieve human comfort by using locally available building materials
and construction technology that were more responsive to the local
climatic and geographic conditions. This paper tries to bring out
the wisdom of the local masons and builders, often the inhabitants
themselves, about their way of living and of shaping their built
environment, indoor and outdoor spaces, as a response to the local
climatic conditions, based on the findings of a field study of a
settlement.
Abstract: Background: Tissue Doppler Echocardiography (TDE)
assesses diastolic function more accurately than routine pulsed
Doppler echocardiography. Assessing the effects of dynamic and
static exercise on the heart by using TDE can provide new
information about the athlete's heart syndrome. Methods: This study
was conducted on 20 elite wrestlers, 14 national-level endurance
runners, and 21 non-athletes as the control group. Participants
underwent two-dimensional echocardiography, standard Doppler, and
TDE. Results: Wrestlers had the highest left ventricular mass
index, end-diastolic inter-ventricular septum thickness, and left
ventricular posterior wall thickness. Runners had the highest left
ventricular end-diastolic volume, LV ejection fraction, stroke
volume, and cardiac output. In TDE, the ratio of the early to late
diastolic velocity of the mitral annulus was greater in the
athletic groups than in the controls, though the difference was not
significant. Conclusion: In spite of cardiac morphological changes
in athletes, TDE shows that cardiac diastolic function is not
adversely affected.
Abstract: An adaptive neural network controller for autonomous
underwater vehicles (AUVs) is presented in this paper. The AUV
model is highly nonlinear because of many factors, such as
hydrodynamic drag, damping and lift forces, Coriolis and
centripetal forces, gravity and buoyancy forces, as well as
thruster forces. In this regard, a nonlinear neural network is used
to approximate the nonlinear uncertainties of the AUV dynamics,
thus overcoming some limitations of conventional controllers and
ensuring good performance. The uniform ultimate boundedness of the
AUV tracking errors and the stability of the proposed control
system are guaranteed based on Lyapunov theory. Numerical
simulation studies of the motion control of an AUV are performed to
demonstrate the effectiveness of the proposed controller.
Abstract: We consider the problem of bandwidth allocation in a
substrate network as an optimization problem for the aggregate utility
of multiple applications with diverse requirements and describe a
simulation scheme for dynamically adaptive bandwidth allocation
protocols. The proposed simulation model, based on Coloured Petri
Nets (CPN), is realized using CPN Tools.
Abstract: The intention of this paper is to help users of evolutionary algorithms adapt them more easily to the problem at hand. For many problems in the technical field it is not necessary to reach an optimum solution, only a good solution in time. In many cases the solution is undetermined, or no method exists to determine it. For these cases an evolutionary algorithm can be useful. This paper aims to give the user rules of thumb that make it easier to decide whether a problem is suitable for an evolutionary algorithm and how to design one.
Abstract: Due to today's turbulent environment, manufacturing resources, particularly in assembly, must be reconfigured frequently. These reconfigurations are caused by various, partly cyclic, influencing factors. Hence, it is important to evaluate the innovation ability of manufacturing resources, that is, their capability to implement innovations quickly and efficiently without large expense. For this purpose, a new methodology is presented in this article. Within the methodology, design structure matrices and graph theory are used. The results of the methodology include different indices for evaluating the innovation ability of the manufacturing resources. Due to the cyclicity of the influencing factors, the methodology can also be used to synchronize the realization of adaptations.
Abstract: This paper presents a model of case-based corporate
memory named ReCaRo (REsource, CAse, ROle). The approach suggested
in ReCaRo decomposes the domain to be modeled into a set of
components. These components represent the objects developed by the
company during its activity. They are reused, sometimes with
adaptations, and are enriched with knowledge after each reuse.
ReCaRo builds the corporate memory on the basis of these
components. It models two types of knowledge: 1) business
knowledge, which constitutes the main knowledge capital of the
company and refers to its basic skills, and thus directly to the
components, and 2) experience knowledge, a specialised knowledge
that represents the experience gained during the handling of
business knowledge. ReCaRo builds corporate memories which are made
up of five communicating memories.