Abstract: The paper describes a new approach for fingerprint
classification, based on the distribution of local features (minute
details or minutiae) of the fingerprints. The main advantage is that
fingerprint classification provides an indexing scheme to facilitate
efficient matching in a large fingerprint database. A set of rules
based on a heuristic approach is proposed. The area around the core
point is treated as the region of interest for extracting minutiae
features, since there are substantial variations around the core
point compared with areas farther away. The core point of a
fingerprint is located at the point of maximum curvature. The
experimental results report an overall average accuracy of 86.57%
in fingerprint classification.
Abstract: Throughput is an important measure of the performance of a production system. Analyzing and modeling production throughput is complex in today's dynamic production systems because of uncertainty: uncertainties materialize when the production line faces changes in setup time, machinery breakdowns, manufacturing lead time, and scrap. In addition, demand fluctuates over time for each product type. These uncertainties affect production performance. This paper proposes Bayesian inference for throughput modeling under five production uncertainties. The Bayesian model uses prior distributions that encode previous information about the uncertainties, while likelihood distributions are associated with the observed data. The Gibbs sampling algorithm, a robust Markov chain Monte Carlo procedure, was employed to sample the unknown parameters and estimate the posterior means of the uncertainties. The Bayesian model was validated with respect to the convergence and efficiency of its outputs. The results show that the proposed Bayesian models can predict production throughput with an accuracy of 98.3%.
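As a hedged illustration of the Gibbs sampling machinery described in this abstract (this is not the paper's five-uncertainty model), the following sketch samples the posterior mean of a single uncertainty source, modelled as Normal data with unknown mean and precision under assumed vague semi-conjugate priors:

```python
import numpy as np

# Minimal Gibbs sampler for one uncertainty source (e.g. setup time),
# modelled as Normal data with unknown mean mu and precision tau.
# The priors, hyperparameters, and data are illustrative assumptions.
rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=1.0, size=200)   # synthetic observations
n, xbar = len(data), data.mean()

mu0, kappa0 = 0.0, 1e-3                           # vague Normal prior on mu
a0, b0 = 1e-3, 1e-3                               # vague Gamma prior on tau
mu, tau = 0.0, 1.0                                # initial chain state
mu_draws = []
for _ in range(2000):
    # Full conditional of mu given tau: Normal
    prec = kappa0 + n * tau
    mean = (kappa0 * mu0 + tau * n * xbar) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    # Full conditional of tau given mu: Gamma (scale = 1/rate)
    tau = rng.gamma(a0 + n / 2.0,
                    1.0 / (b0 + 0.5 * np.sum((data - mu) ** 2)))
    mu_draws.append(mu)

posterior_mean = np.mean(mu_draws[500:])          # discard burn-in
```

With vague priors the posterior mean tracks the sample mean; the paper's models would replace the synthetic data with observed setup times, breakdowns, lead times, scrap, and demand.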
Abstract: In wireless sensor networks, mobile agent technology is used for data fusion. We design a node clustering algorithm based on node residual energy and the results of partial fusion, and we optimize the mobile agent's intra-cluster routing strategy to further reduce the amount of data transferred. Experiments show that using mobile agents in the intra-cluster fusion process reduces path loss to some extent.
Abstract: The greenhouse effect and limitations on carbon
dioxide emissions concern engine makers, and future internal
combustion engines must move toward substantially improved
thermal efficiency. Homogeneous charge compression ignition
(HCCI) is an alternative high-efficiency combustion technology
that reduces exhaust emissions and fuel consumption. However,
tough challenges remain in the successful operation of HCCI
engines, such as controlling the combustion phasing, extending
the operating range, and high unburned hydrocarbon and CO
emissions. HCCI and the exploitation of ethanol as an alternative
fuel are one way to explore new frontiers of internal combustion
engines with an eye towards maintaining their sustainability.
This study was done to extend the knowledge base on HCCI with
ethanol as a fuel.
Abstract: The charge-exchange xenon (CEX) ions generated by an ion thruster can flow back to the surface of a spacecraft and threaten the safety of spacecraft operation. To evaluate the effects of the induced plasma environment in the backflow region on the spacecraft, we designed a spherical single Langmuir probe, 5.8 cm in diameter, for measuring low-density plasma parameters in the backflow region of an ion thruster. In practice, the measurements are performed over a two-dimensional array (40 cm × 60 cm) composed of 20 sites. The experimental results show that the electron temperature ranges from 3.71 eV to 3.96 eV, with a mean value of 3.82 eV and a standard deviation of 0.064 eV. The electron density ranges from 8.30×10¹² m⁻³ to 1.66×10¹³ m⁻³, with a mean value of 1.30×10¹³ m⁻³ and a standard deviation of 2.15×10¹² m⁻³. All data are analyzed under the "ideal" plasma assumption of Maxwellian distributions.
Abstract: A new approach to facial expression recognition based on face context and adaptively weighted sub-pattern PCA (Aw-SpPCA) is presented in this paper. The facial region and other parts of the body are segmented from the complex environment using a skin color model. An algorithm is proposed for accurate detection of the face region in the segmented image, based on the constant ratio of the height and width of the face (δ = 1.618). The paper also presents a new method for detecting the eye and mouth positions. The desired part of the face is cropped to analyze the person's expression. Unlike PCA, which operates on a whole image pattern, Aw-SpPCA operates directly on sub-patterns partitioned from the original whole pattern and extracts features from them separately. Aw-SpPCA can adaptively compute the contribution of each part to the classification task, in order to enhance robustness to both expression and illumination variations. Experiments on a standard face database with five types of facial expressions show that the proposed method is competitive.
Abstract: Smoothing or filtering of data is the first preprocessing
step for noise suppression in many applications involving data
analysis. The moving average is the most popular method of
smoothing data; its generalization led to the development of the
Savitzky-Golay filter. Many window-smoothing methods were
developed by convolving the data with different window functions
for different applications; the most widely used window functions
are the Gaussian and Kaiser windows. Function approximation of
the data by polynomial regression, Fourier expansion, or wavelet
expansion also yields smoothed data. Wavelets also smooth the
data to a great extent by thresholding the wavelet coefficients.
Almost all smoothing methods destroy peaks, flattening them as
the support of the window is increased. In certain applications it
is desirable to retain the peaks while smoothing the data as much
as possible. In this paper we present a methodology, called
peak-wise smoothing, that smooths the data to any desired level
without losing the major peak features.
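As a minimal illustration of the window-smoothing family discussed in this abstract (not the proposed peak-wise method), a moving average can be implemented as convolution with a uniform window:

```python
import numpy as np

def moving_average(x, window=5):
    """Smooth a 1-D signal by convolving it with a uniform window."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# A noisy ramp: smoothing suppresses noise, but a wider window would
# also flatten any sharp peaks, which motivates peak-wise smoothing.
rng = np.random.default_rng(0)
signal = np.linspace(0.0, 1.0, 100) + 0.05 * rng.standard_normal(100)
smooth = moving_average(signal, window=9)
```

Replacing the uniform kernel with Savitzky-Golay, Gaussian, or Kaiser coefficients gives the other window methods the abstract lists.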
Abstract: The population structure of Tor tambroides was
investigated with morphometric data (i.e. morphometric
measurements and truss measurements). A morphometric analysis was
conducted to compare specimens from three waterfalls: Sunanta, Nan
Chong Fa and Wang Muang waterfalls at Khao Nan National Park,
Nakhon Si Thammarat, Southern Thailand. The results of stepwise
discriminant analysis on seven morphometric variables and 21 truss
variables per individual were the same as from a neural network. Fish
from three waterfalls were separated into three groups based on their
morphometric measurements. The morphometric data show that the
neural network model performed better than the stepwise
discriminant analysis.
Abstract: Human activity is a major concern in a wide variety of
applications, such as video surveillance, human computer interface
and face image database management. Detecting and recognizing
faces is a crucial step in these applications. Furthermore, major
advancements and initiatives in security applications in the past years
have propelled face recognition technology into the spotlight. The
performance of existing face recognition systems declines significantly
if the resolution of the face image falls below a certain level.
This is especially critical in surveillance imagery where often, due to
many reasons, only low-resolution video of faces is available. If these
low-resolution images are passed to a face recognition system, the
performance is usually unacceptable. Hence, resolution plays a key
role in face recognition systems. In this paper we introduce a new
low-resolution face recognition system based on a mixture of
expert neural networks. To produce the low-resolution input
images, we down-sampled the 48 × 48 ORL images to 12 × 12 using
nearest neighbor interpolation; applying bicubic interpolation
then yields enhanced images, which are given to the Principal
Component Analysis feature extractor. Comparison with some of the
most closely related methods indicates that the proposed model
yields excellent recognition rates in low-resolution face
recognition: 100% for the training set and 96.5% for the test set.
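The degradation-and-enhancement pipeline described in this abstract can be sketched as follows; note that `scipy.ndimage.zoom` with a cubic spline is used here as a stand-in for bicubic interpolation, and the random array stands in for a 48 × 48 ORL image:

```python
import numpy as np
from scipy import ndimage

def degrade_and_enhance(img, factor=4):
    """Nearest-neighbour downsample by `factor`, then cubic-spline
    upsample back to the original size (a stand-in for bicubic)."""
    low = img[::factor, ::factor]              # nearest-neighbour decimation
    return ndimage.zoom(low, factor, order=3)  # cubic-spline enhancement

face = np.random.default_rng(1).random((48, 48))  # placeholder image
enhanced = degrade_and_enhance(face, factor=4)    # 48x48 -> 12x12 -> 48x48
```

The enhanced images would then be flattened and passed to a PCA feature extractor; the mixture-of-experts classifier itself is beyond this sketch.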
Abstract: The mineral with the chemical composition MgAl2O4 is called "spinel". Ferrites that crystallize in the spinel structure are known as spinel ferrites or ferrospinels. The spinel structure has an fcc cage of oxygen ions, with the metallic cations distributed among the tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations in the interstitial voids of the spinel lattice. This leads to a method for determining the cation distribution in spinel oxides through XRD intensity analysis. A computer program for XRD intensity analysis was developed in the C language and tested against a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) ferrite compositions were prepared by the ceramic method and powder X-ray diffraction patterns were recorded. The authenticity of the program was thus checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data with a localized canting of spins approach, to explain the "recovery" of the collinear spin structure on Al3+ substitution in Mg-Zn ferrites, which otherwise exhibit A-site magnetic dilution and a non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role in their electrical and magnetic properties, determining the cation distribution in the spinel lattice is essential.
Abstract: This paper proposes a prototype of a lower-limb
rehabilitation system for recovering and strengthening patients'
injured lower limbs. The system is composed of traction motors for
each leg position, a treadmill as a walking base, tension sensors,
microcontrollers controlling the motor functions, and a main
system with a graphical user interface. To derive reference
(normal) velocity profiles of the body segment points, a kinematic
method is applied, based on a humanoid robot model using reference
joint angle data from normal walking.
Abstract: Long Term Evolution (LTE) is a 4G wireless broadband
technology developed in Third Generation Partnership Project
(3GPP) Release 8, and it ensures the competitiveness of the
Universal Mobile Telecommunications System (UMTS) for the next 10
years and beyond. The concepts for LTE systems were introduced in
3GPP Release 8 with the objective of a high-data-rate, low-latency,
packet-optimized radio access technology. In this paper, the
performance of different TCP variants over an LTE network is
investigated. The performance of TCP over LTE is affected mostly
by the links of the wired network and the total bandwidth
available at the serving base station. This paper describes an
NS-2 based simulation analysis of TCP-Vegas, TCP-Tahoe, TCP-Reno,
TCP-Newreno, TCP-SACK, and TCP-FACK, with full modeling of all
traffic in the LTE system. The evaluation of network performance
with all TCP variants is based mainly on throughput, average
delay, and packet loss. The analysis shows that all TCP variants
achieve similar throughput over LTE, with TCP-Vegas performing
best among the variants.
Abstract: The prediction of financial time series is a very
complicated process. If the efficient market hypothesis holds,
then the predictability of most financial time series is a rather
controversial issue, since the current price already contains all
information available in the market. This paper extends the
Adaptive Neuro Fuzzy Inference System (ANFIS) for high-frequency
trading, an expert system capable of combining fuzzy reasoning
with the pattern recognition capability of neural networks for
financial forecasting and trading at high frequency. To eliminate
unnecessary input in the training phase, a new event-based
volatility model is proposed. Taking volatility and the scaling
laws of financial time series into consideration led to the
development of the Intraday Seasonality Observation Model. This
new model allows the observation of specific events and
seasonalities in the data and subsequently removes any unnecessary
data. The event-based volatility model provides the ANFIS system
with more accurate input and has increased the overall performance
of the system.
Abstract: Obtaining labeled data in supervised learning is often
difficult and expensive, so the trained learning algorithm tends
to overfit the small number of training examples. As a result,
some researchers have focused on using unlabeled data, which need
not follow the same generative distribution as the labeled data,
to construct high-level features that improve performance on
supervised learning tasks. In this paper, we investigate the
impact of the relationship between unlabeled and labeled data on
classification performance. Specifically, we apply different
unlabeled datasets, with varying degrees of relation to the
labeled data, to a handwritten digit classification task based on
the MNIST dataset. Our experimental results show that the higher
the degree of relation between the unlabeled and labeled data, the
better the classification performance. Although unlabeled data
drawn from a generative distribution completely different from
that of the labeled data yields the lowest classification
performance, we still achieve high classification performance.
This expands the applicability of supervised learning algorithms
through unsupervised learning.
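One simple instance of the idea in this abstract, building a feature extractor from unlabeled data and then training a supervised classifier on a small labeled set, can be sketched with PCA and a nearest-centroid rule; the synthetic Gaussian data below is an illustrative assumption, not MNIST:

```python
import numpy as np

rng = np.random.default_rng(7)
blob = lambda center, n: rng.normal(center, 1.0, size=(n, 20))

c0, c1 = np.zeros(20), np.full(20, 3.0)
unlabeled = np.vstack([blob(c0, 500), blob(c1, 500)])  # related unlabeled pool
X = np.vstack([blob(c0, 10), blob(c1, 10)])            # small labeled set
y = np.array([0] * 10 + [1] * 10)

# High-level features learned from the unlabeled data only (PCA via SVD).
mean = unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabeled - mean, full_matrices=False)
project = lambda a: (a - mean) @ vt[:2].T

# Nearest-centroid classifier trained on the projected labeled data.
Z = project(X)
centroids = np.array([Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)])

test_x = np.vstack([blob(c0, 50), blob(c1, 50)])
test_y = np.array([0] * 50 + [1] * 50)
pred = np.argmin(((project(test_x)[:, None, :] - centroids) ** 2).sum(-1),
                 axis=1)
accuracy = (pred == test_y).mean()
```

Swapping in unlabeled pools drawn from shifted or unrelated distributions would reproduce the degradation the paper measures.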
Abstract: An analysis of the elastic scattering of protons on
6,7Li nuclei has been performed in the framework of the optical
model at beam energies up to 50 MeV. Differential cross sections
for 6,7Li + p scattering were measured over the proton laboratory
energy range from 400 to 1050 keV. The 6,7Li + p elastic
scattering data at different proton incident energies have been
analyzed using a single-folding model. In each case the real
potential obtained from the folding model was supplemented by a
phenomenological imaginary potential; during the fitting process
the real potential was normalized and the imaginary potential
optimized. The normalization factor NR lies in the range 0.70 to
0.84.
Abstract: Banishing hunger from the face of the earth has been a
frequently expressed goal in various international, national, and
regional conferences since 1974. Providing food security has
become an important issue across the world, particularly in
developing countries. In a developing country like India, where
the population growth rate exceeds that of food grain production,
food security is a question of great concern. According to the
International Food Policy Research Institute's Global Hunger
Index, 2011, India ranks 67th of the 81 countries of the world
with the worst food security status. After the Green Revolution,
India became a food surplus country: its production increased from
74.23 million tonnes in 1966-67 to 257.44 million tonnes in
2011-12. But after achieving self-sufficiency in food during the
last three decades, the country now faces new challenges due to
increasing population, climate change, and stagnation in farm
productivity. Therefore, the main objective of the present paper
is to examine the food security situation at the national level
and further to explain the paradox of food insecurity in a food
surplus state of India, i.e. Punjab, at the micro level.
In order to achieve the said objectives, secondary data collected from
the Ministry of Agriculture and the Agriculture department of Punjab
State was analyzed. The result of the study showed that despite
having surplus food production the country is still facing food
insecurity problem at micro level. Within the Kandi belt of Punjab
state, the area adjacent to plains is food secure while the area along
the hills falls in food insecure zone.
The present paper is divided into the following three sections:
(i) Introduction; (ii) Analysis of the food security situation at
the national level as well as the micro level (Kandi belt of
Punjab State); (iii) Concluding Observations.
Abstract: Gene expression profiling is rapidly evolving into a
powerful technique for investigating tumor malignancies. The
researchers are overwhelmed with the microarray-based platforms
and methods that confer them the freedom to conduct large-scale
gene expression profiling measurements. Simultaneously,
investigations into cross-platform integration methods have started
gaining momentum due to their underlying potential to help
comprehend a myriad of broad biological issues in tumor diagnosis,
prognosis, and therapy. However, comparing results from different
platforms remains a challenging task, as various inherent
technical differences exist between the microarray platforms. In
this paper, we describe a simple ratio-transformation method that
can provide common ground between the cDNA and Affymetrix
platforms for cross-platform integration. The method is based on
the characteristic data attributes of the Affymetrix and cDNA
platforms. In this work, we considered seven childhood leukemia
patients and their gene expression levels on each platform. With a
dataset of 822 differentially expressed genes from both platforms,
we applied a specific ratio transformation to the Affymetrix data,
which subsequently showed an improved relationship with the cDNA
data.
Abstract: Efficient preprocessing is essential for the automatic
recognition of handwritten documents. In this paper, techniques
for segmenting words in handwritten Arabic text are presented.
First, connected components (CCs) are extracted and the distances
among the different components are analyzed. The statistical
distribution of these distances is then obtained to determine an
optimal threshold for word segmentation. Meanwhile, an improved
projection-based method is employed for baseline detection. The
proposed method has been successfully tested on the IFN/ENIT
database, consisting of 26,459 Arabic words handwritten by 411
different writers, and the results were promising, with more
accurate detection of the baseline and segmentation of words for
further recognition.
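The abstract does not specify how the optimal threshold is derived from the distance distribution; as an assumed illustration, an Otsu-style criterion can pick the distance that best separates intra-word gaps from inter-word gaps:

```python
import numpy as np

def gap_threshold(gaps, bins=64):
    """Choose a distance threshold separating intra-word from inter-word
    gaps by maximizing between-class variance over a gap histogram
    (an Otsu-style criterion, assumed here for illustration)."""
    hist, edges = np.histogram(gaps, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()     # class weights below/above cut
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0  # class means
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Synthetic gap distances: intra-word spacing near 2 px, word gaps near 12 px.
rng = np.random.default_rng(3)
gaps = np.concatenate([rng.normal(2, 0.5, 300), rng.normal(12, 1.5, 100)])
threshold = gap_threshold(gaps)
```

Components farther apart than the threshold would then be assigned to different words.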
Abstract: In recent years, IT convergence technology has been developed to produce creative solutions by combining robotics and sports science technologies. Object detection and recognition have mainly been applied in sports science to recognize faces and track the human body. However, object detection and recognition with a vision sensor is a challenging task in the real world because of illumination. In this paper, object detection and recognition using a vision sensor, applied to a sports simulator, is introduced. Face recognition is used to identify the user and to automatically update that person's athletic record. The human body is tracked to provide a more accurate way of riding the horse simulator. Combined image processing is used to reduce adverse illumination effects, because illumination causes poor detection and recognition performance in real-world applications. The face is recognized using a standard face graph, and the human body is tracked using a pose model composed of feature nodes generated from diverse face and pose images. Face recognition using Gabor wavelets and pose recognition using a pose graph are robust in real applications. We ran simulations using the ETRI database, which was constructed on the horse riding simulator.
Abstract: Term Extraction, a key data preparation step in Text
Mining, extracts the terms, i.e. relevant collocations of words,
attached to specific concepts (e.g. genetic-algorithms and
decision-trees are terms associated with the concept "Machine
Learning"). In
this paper, the task of extracting interesting collocations is achieved
through a supervised learning algorithm, exploiting a few
collocations manually labelled as interesting/not interesting. From
these examples, the ROGER algorithm learns a numerical function,
inducing some ranking on the collocations. This ranking is optimized
using genetic algorithms, maximizing the trade-off between the false
positive and true positive rates (Area Under the ROC curve). This
approach uses a particular representation for the word collocations,
namely the vector of values corresponding to the standard statistical
interestingness measures attached to this collocation. As this
representation is general (across corpora and natural languages),
generality tests were performed by applying the ranking function
learned from an English corpus in biology to a French corpus of
curricula vitae, and vice versa, showing good robustness of the
approach compared with the state-of-the-art Support Vector
Machine (SVM).
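The Area Under the ROC Curve that ROGER maximizes can be computed directly from pairwise comparisons; a minimal sketch (the scores and labels below are illustrative, not from the paper's corpora):

```python
def auc(scores, labels):
    """AUC: the probability that a randomly chosen interesting collocation
    is ranked above a randomly chosen uninteresting one (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical ranking-function outputs; label 1 = interesting collocation.
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0]
```

A genetic algorithm would then mutate the ranking function's parameters, keeping candidates whose AUC on the labelled examples improves.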