Abstract: The evaluation and measurement of human body
dimensions are achieved by physical anthropometry. This research
was conducted in view of the importance of anthropometric indices
of the face in forensic medicine, surgery, and medical imaging. The
main goal of this research is to optimize facial feature points by
establishing mathematical relationships among facial features, and
to use the optimized feature points for age classification. Since
the selected facial feature points are located in the mouth, nose,
eye, and eyebrow regions of facial images, all desired facial
feature points can be extracted accurately. In the proposed method,
sixteen Euclidean distances, both vertical and horizontal, are
calculated from the eighteen selected facial feature points. The mathematical
relationships among horizontal and vertical distances are established.
Moreover, it is found that the distances between the facial features
follow a constant ratio throughout age progression. The distances
between the specified feature points increase with a person's age
from childhood onward, but the ratio of the distances does not
change (d = 1.618). Finally, according to the proposed mathematical
relationship, four independent feature distances involving eight
feature points are selected from the sixteen distances and eighteen
feature points, respectively. These four feature distances are used
for age classification using the Support Vector Machine (SVM)
Sequential Minimal Optimization (SMO) algorithm, achieving around
96% accuracy. The experimental results show that the proposed
system is effective and accurate for age classification.
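As a rough illustration of the pipeline this abstract describes (landmark distances, a ratio check, and SVM-based age classification), a minimal Python sketch follows. The landmark coordinates, distance pairs, and training data are hypothetical placeholders, and scikit-learn's SVC stands in for the SVM-SMO classifier.

```python
# Minimal sketch: Euclidean feature distances, ratio check, SVM.
# All coordinates, pairs, and labels below are hypothetical.
import numpy as np
from sklearn.svm import SVC

def feature_distances(landmarks, pairs):
    """Euclidean distances between selected landmark pairs (x, y)."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in pairs])

# Hypothetical 18 landmarks for one face; in practice these come from
# a landmark detector run on the eye, eyebrow, nose, and mouth regions.
landmarks = np.random.rand(18, 2) * 100

# Four illustrative pairs standing in for the paper's four selected
# distances (eight feature points).
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
x = feature_distances(landmarks, pairs)

# The abstract reports that certain distance ratios stay near the
# golden ratio (d = 1.618) across ages; a check might look like:
ratio = x[0] / x[1]
print(f"distance ratio: {ratio:.3f} (expected ~1.618 per the paper)")

# Age classification with an SVM on hypothetical training data.
X_train = np.random.rand(100, 4) * 100   # distance vectors
y_train = np.random.randint(0, 3, 100)   # age classes
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("predicted age class:", clf.predict(x.reshape(1, -1))[0])
```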
Abstract: Influence Diagrams (IDs) are a kind of Probabilistic Belief Network for graphical modeling. The use of IDs can improve communication among field experts, modelers, and decision makers by presenting the frame of the issue under discussion from a high-level point of view. This paper enhances the Time-Sliced Influence Diagrams (TSIDs, also called Dynamic IDs) based formalism from a Discrete Event Systems Modeling and Simulation (DES M&S) perspective, for Exploring Analysis (EA) modeling. The enhancements enable a modeler to specify the occurrence times of endogenous events dynamically, through stochastic sampling as the model runs, and to describe the inter-influences among them with variable nodes in dynamic situations that existing TSIDs fail to capture. The new class of model is named Dynamic-Stochastic Influence Diagrams (DSIDs). The paper includes a description of the modeling formalism and the hierarchical simulators implementing its simulation algorithm, and presents a case study to illustrate the enhancements.
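A minimal sketch of the scheduling idea the enhancement rests on, under stated assumptions: the event names, rates, and follow-up rules are hypothetical, and this is ordinary discrete-event simulation with stochastically sampled event times, not the DSIDs formalism itself.

```python
# Sketch: occurrence times of endogenous events sampled stochastically
# while the model runs, instead of being fixed to time slices.
import heapq
import random

random.seed(0)
future_events = []                    # priority queue ordered by time
heapq.heappush(future_events, (0.0, "start"))

t_end = 10.0
while future_events:
    t, event = heapq.heappop(future_events)
    if t > t_end:
        break
    print(f"t={t:6.3f}  {event}")
    if event in ("start", "demand"):
        # An endogenous follow-up event whose occurrence time is
        # sampled dynamically as the model runs.
        delay = random.expovariate(1.0)        # stochastic sampling
        heapq.heappush(future_events, (t + delay, "demand"))
```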
Abstract: Some of students' problems in writing stem from
inadequate preparation for the writing assignment. Students
should be taught how to write well when they arrive in language
classes. Having selected a topic, the students examine and explore the
theme from as large a variety of viewpoints as their background and
imagination make possible. Another strategy is that the students
prepare an outline before writing the paper. The comparison between
the two thought-provoking techniques mentioned was carried out
between two class groups of students at the Islamic Azad University
of Dezful who were studying "Writing 2" as their main course. Each
class group was assigned to write five compositions separately over
different periods of time. A t-test for each pair of exams between
the two class groups then showed that the t-observed in each pair
was greater than the t-critical. Consequently, the first hypothesis,
which states that those who use brainstorming as a thought-provoking
technique in the prewriting phase are more successful than those who
outline their papers before writing, was verified.
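For concreteness, a minimal sketch of the pairwise comparison described here, using an independent two-sample t-test; the composition scores below are hypothetical placeholders.

```python
# Sketch: independent two-sample t-test on composition scores from
# the two class groups (hypothetical data).
from scipy import stats

brainstorm_scores = [15.5, 16.0, 14.5, 17.0, 15.0, 16.5, 15.5, 16.0]
outline_scores    = [13.0, 14.0, 12.5, 13.5, 14.5, 13.0, 12.0, 13.5]

t_observed, p_value = stats.ttest_ind(brainstorm_scores, outline_scores)
t_critical = stats.t.ppf(0.975, df=len(brainstorm_scores)
                                + len(outline_scores) - 2)

print(f"t-observed = {t_observed:.3f}, t-critical = {t_critical:.3f}")
if abs(t_observed) > t_critical:
    print("Reject the null hypothesis: the group means differ.")
```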
Abstract: Large volumes of fingerprints are collected and stored
every day in a wide range of applications, including forensics, access
control, etc. This is evident from the database of the Federal
Bureau of Investigation (FBI), which contains more than 70 million
fingerprints. Compression of this database is very important because
of its high volume. The performance of existing image coding
standards generally degrades at low bit-rates because of the
underlying block-based Discrete Cosine Transform (DCT) scheme. Over
the past decade, the success of wavelets in solving many different
problems has contributed to their unprecedented popularity. Due to
implementation constraints, scalar wavelets do not possess all the
properties needed for better compression performance. A new class of
wavelets called 'multiwavelets', which possess more than one scaling
filter, overcomes this problem. The objective of this paper is to
develop an efficient compression scheme that obtains better quality
and a higher compression ratio through the multiwavelet transform
and embedded coding of multiwavelet coefficients with the Set
Partitioning In Hierarchical Trees (SPIHT) algorithm.
A comparison of the best known multiwavelets is made to the best
known scalar wavelets. Both quantitative and qualitative measures of
performance are examined for fingerprints.
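A rough sketch of the wavelet compression idea follows, under stated assumptions: PyWavelets provides scalar wavelets only, so a scalar transform with hard coefficient thresholding stands in here for the multiwavelet transform and SPIHT embedded coding the paper develops; the image and threshold are placeholders.

```python
# Sketch: wavelet-based compression of a fingerprint image with
# simple thresholding as a crude stand-in for SPIHT embedded coding.
import numpy as np
import pywt

image = np.random.rand(256, 256)              # placeholder fingerprint

# Multi-level 2-D wavelet decomposition (scalar wavelet stand-in).
coeffs = pywt.wavedec2(image, wavelet="db4", level=4)

# Keep only significant coefficients.
arr, slices = pywt.coeffs_to_array(coeffs)
threshold = 0.1
arr_compressed = np.where(np.abs(arr) > threshold, arr, 0.0)
kept = np.count_nonzero(arr_compressed) / arr.size
print(f"coefficients kept: {kept:.1%}")

# Reconstruct and measure quality (PSNR).
coeffs_c = pywt.array_to_coeffs(arr_compressed, slices,
                                output_format="wavedec2")
recon = pywt.waverec2(coeffs_c, wavelet="db4")[:256, :256]
mse = np.mean((image - recon) ** 2)
print(f"PSNR: {10 * np.log10(1.0 / mse):.2f} dB")
```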
Abstract: When acid is pumped into damaged reservoirs for
damage removal/stimulation, distorted inflow of acid into the
formation occurs, caused by the acid preferentially traveling into
highly permeable regions over less permeable regions, or (in general) into
the path of least resistance. This can lead to poor zonal coverage and
hence warrants diversion to carry out an effective placement of acid.
Diversion is desirably a reversible technique of temporarily reducing
the permeability of high perm zones, thereby forcing the acid into
lower perm zones.
The uniqueness of each reservoir can pose several challenges to
engineers attempting to devise optimum and effective diversion
strategies. Diversion techniques include mechanical placement and/or
chemical diversion of treatment fluids, further sub-classified into ball
sealers, bridge plugs, packers, particulate diverters, viscous gels,
crosslinked gels, relative permeability modifiers (RPMs), foams,
and/or the use of placement techniques, such as coiled tubing (CT)
and the maximum pressure difference and injection rate (MAPDIR)
methodology.
It is not always realized that the effectiveness of diverters greatly
depends on reservoir properties, such as formation type, temperature,
reservoir permeability, heterogeneity, and physical well
characteristics (e.g., completion type, well deviation, length of
treatment interval, multiple intervals, etc.). This paper reviews the
mechanisms by which each variety of diverter functions and
discusses the effect of various reservoir properties on the efficiency
of diversion techniques. Guidelines are recommended to help
enhance productivity from zones of interest by choosing the best
methods of diversion while pumping an optimized amount of
treatment fluid. The success of an overall acid treatment often
depends on the effectiveness of the diverting agents.
Abstract: This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts the original images into blurred images using an average filter and equalizes the histograms of those images (lighting normalization). A bicubic interpolation function is applied to each equalized image to obtain a resized image. The resized image is a low-resolution image that provides faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the backpropagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid, and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum and adaptive learning rate backpropagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results provide accuracies of 94.50%, 93.00%, and 90.25% for 20, 30, and 40 subjects, respectively, with a time delay of 0.0934 sec per image.
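A minimal sketch of the preprocessing chain this abstract describes (average blur, histogram equalization, bicubic downsampling), using OpenCV for illustration; the filter size, target resolution, and input image are assumptions rather than the paper's settings.

```python
# Sketch: average filter -> histogram equalization -> bicubic resize.
import cv2
import numpy as np

image = (np.random.rand(112, 92) * 255).astype(np.uint8)  # ORL-sized placeholder

blurred   = cv2.blur(image, (3, 3))            # average filter
equalized = cv2.equalizeHist(blurred)          # lighting normalization
resized   = cv2.resize(equalized, (23, 28),    # low-resolution input
                       interpolation=cv2.INTER_CUBIC)

# 'resized' would then be flattened and fed to the three-layer
# backpropagation network the paper describes.
nn_input = resized.flatten().astype(np.float32) / 255.0
print(nn_input.shape)
```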
Abstract: There are two types of drought: conceptual drought and
operational drought. Three parameters, the beginning, the end, and
the degree of severity of the drought, can be identified in
operational drought from the average precipitation over the whole
region. One of the methods used to measure drought is the
Reconnaissance Drought Index (RDI). Evapotranspiration is calculated
using the Penman-Monteith method by analyzing thirty-nine years of
climatic data. The evapotranspiration is then used in the RDI to
compute the normalized and standardized RDI. These RDI
classifications indicate what kind of drought the Bhavnagar region
faced on a 12-month time scale. A comparison between actual drought
conditions and the drought identified by the RDI method is also
illustrated. It can be concluded that both methods identify drought
in the same years, although with different index values, while the
severity remains the same.
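For concreteness, a minimal sketch of the standard RDI computations on hypothetical annual series: the initial index is the ratio of cumulative precipitation to cumulative potential evapotranspiration over the chosen time scale.

```python
# Sketch: initial, normalized, and standardized RDI (12-month scale)
# on hypothetical annual precipitation and PET series.
import numpy as np

rng = np.random.default_rng(0)
P   = rng.uniform(300, 900, 39)    # annual precipitation (mm), 39 years
PET = rng.uniform(1200, 1800, 39)  # annual Penman-Monteith PET (mm)

alpha = P / PET                                   # initial RDI
rdi_normalized   = alpha / alpha.mean() - 1.0     # normalized RDI
y = np.log(alpha)
rdi_standardized = (y - y.mean()) / y.std()       # standardized RDI

# Negative standardized values indicate drier-than-normal years.
print(rdi_standardized.round(2))
```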
Abstract: SDMA (Space-Division Multiple Access) is a MIMO
(Multiple-Input and Multiple-Output) based wireless communication
network architecture which has the potential to significantly increase
the spectral efficiency and the system performance. The maximum
likelihood (ML) detection provides the optimal performance, but its
complexity increases exponentially with the constellation size of
modulation and number of users. The QR decomposition (QRD)
MUD can be a substitute for ML detection due to its low complexity
and near-optimal performance. The minimum mean-squared-error (MMSE)
multiuser detection (MUD) minimises the mean square error (MSE),
which does not guarantee that the BER of the system is also minimal.
The minimum bit error rate (MBER) MUD, however, performs better than
the classic MMSE MUD in terms of probability of error by directly
minimising the BER cost function. The MBER MUD is also able to
support more users than the number of receiving antennas, whereas
the other MUDs fail in this scenario. In this paper the performance
of various MUD
techniques is verified for the correlated MIMO channel models based
on IEEE 802.16n standard.
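A minimal numpy sketch of the linear MMSE MUD mentioned above, for the model y = Hs + n; the channel, noise level, and QPSK symbols are hypothetical, and the ML, QRD, and MBER detectors are not shown.

```python
# Sketch: linear MMSE multiuser detection for an SDMA uplink.
import numpy as np

rng = np.random.default_rng(1)
users, rx_antennas, sigma2 = 4, 4, 0.1

H = (rng.standard_normal((rx_antennas, users))
     + 1j * rng.standard_normal((rx_antennas, users))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], users) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(rx_antennas)
                           + 1j * rng.standard_normal(rx_antennas))
y = H @ s + n

# MMSE weight matrix: W = (H^H H + sigma^2 I)^(-1) H^H
W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(users)) @ H.conj().T
s_hat = W @ y

# Hard decisions on the QPSK constellation.
detected = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)
print("symbol errors:", np.count_nonzero(detected != s))
```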
Abstract: This paper studies the dependability of component-based
applications, especially embedded ones, from the diagnosis
point of view. The principle of the diagnosis technique is to
implement inter-component tests in order to detect and locate the
faulty components without redundancy. The proposed approach for
diagnosing faulty components consists of two main aspects. The first
one concerns the execution of the inter-component tests which
requires integrating test functionality within a component. This is the
subject of this paper. The second one is the diagnosis process itself
which consists of the analysis of inter-component test results to
determine the fault-state of the whole system. Advantages of this
diagnosis method compared to classical redundancy-based
fault-tolerant techniques are application autonomy,
cost-effectiveness, and better usage of system resources. Such
advantages are very important
for many systems and especially for embedded ones.
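An illustrative sketch of the inter-component test idea, under assumptions: the component names and pass/fail logic are hypothetical, and the simple majority-based analysis stands in for the paper's diagnosis process.

```python
# Sketch: components embed test functionality, peers exercise it,
# and the collected results (a syndrome) locate the faulty component.
class Component:
    def __init__(self, name, faulty=False):
        self.name, self.faulty = name, faulty

    def self_check(self):
        """Built-in test functionality exposed to peer components."""
        return not self.faulty

    def test_peer(self, peer):
        """Inter-component test; a faulty tester reports unreliably."""
        result = peer.self_check()
        return result if not self.faulty else not result

components = [Component("A"), Component("B", faulty=True), Component("C")]

# Syndrome: every component tests every other component.
syndrome = {(t.name, u.name): t.test_peer(u)
            for t in components for u in components if t is not u}

# Simple diagnosis: flag components that a majority of testers reject.
for u in components:
    fails = sum(not syndrome[(t.name, u.name)]
                for t in components if t is not u)
    if fails > (len(components) - 1) / 2:
        print(f"component {u.name} diagnosed as faulty")
```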
Abstract: This paper presents a methodology to harvest the kinetic energy of raindrops using piezoelectric devices. In the study, a 1 m × 1 m PVDF (polyvinylidene fluoride) piezoelectric membrane, fixed along its four edges, is considered for the numerical simulation of the deformation of the membrane due to the impact of raindrops. Then, according to the raindrop size, the simulation is performed by classifying the rainfall into three categories: light stratiform rain, moderate stratiform rain, and heavy thundershower. The impact force of a raindrop depends on its terminal velocity, which is a function of the raindrop diameter. The results were then analyzed to calculate the harvestable energy from the deformation of the piezoelectric membrane.
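A minimal sketch of the per-drop energy estimate underlying such a simulation, assuming the commonly used empirical terminal-velocity fit v(D) = 9.65 - 10.3*exp(-0.6*D) (D in mm, v in m/s); the representative drop diameters per rainfall category are illustrative assumptions, not the paper's values.

```python
# Sketch: raindrop kinetic energy at terminal velocity.
import math

RHO_WATER = 1000.0  # kg/m^3

def terminal_velocity(d_mm):
    """Empirical terminal velocity (m/s) for drop diameter d_mm."""
    return 9.65 - 10.3 * math.exp(-0.6 * d_mm)

def drop_kinetic_energy(d_mm):
    """Kinetic energy (J) of a single raindrop at terminal velocity."""
    radius_m = d_mm / 2 / 1000.0
    mass = RHO_WATER * (4.0 / 3.0) * math.pi * radius_m ** 3
    return 0.5 * mass * terminal_velocity(d_mm) ** 2

for label, d in [("light stratiform", 1.0),
                 ("moderate stratiform", 2.0),
                 ("heavy thundershower", 4.0)]:
    print(f"{label:20s} D={d} mm  KE = {drop_kinetic_energy(d):.2e} J")
```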
Abstract: Many studies have focused on the nonlinear analysis
of electroencephalography (EEG) mainly for the characterization of
epileptic brain states. It is assumed that at least two states of the
epileptic brain are possible: the interictal state characterized by a
normal, apparently random, steady-state ongoing EEG activity; and
the ictal state, characterized by the paroxysmal occurrence of
synchronous oscillations, generally called in neurology a
seizure.
The spatial and temporal dynamics of the epileptogenic process are
still not completely clear, especially regarding the most
challenging aspect of epileptology, which is the anticipation of the
seizure. Despite all the efforts, we still do not know how, when,
and why a seizure occurs. However, current studies provide strong
evidence that the interictal-ictal state transition is not an abrupt
phenomenon. Findings
also indicate that it is possible to detect a preseizure phase.
Our approach is to use the neural network tool to detect interictal
states and to predict from those states the upcoming seizure (ictal
state). Analysis of the EEG signal based on neural networks is used
for the classification of EEG as either seizure or non-seizure. By
applying prediction methods it will be possible to predict the
upcoming seizure from non-seizure EEG.
We will study patients admitted to the epilepsy monitoring unit
for the purpose of recording their seizures. Preictal, ictal, and
postictal EEG recordings are available for such patients for
analysis. The system will be trained on one body of samples and then
validated using another. A third body of samples, distinct from the
first two, is used to test the network for the achievement of
optimum prediction. Several methods will be tried: 'Backpropagation
ANN' and 'RBF'.
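A minimal sketch of the three-way train/validate/test scheme described above, with scikit-learn's backpropagation MLP as a stand-in for the 'Backpropagation ANN'; the EEG feature matrix and seizure/non-seizure labels are hypothetical.

```python
# Sketch: train / validate / test split with an MLP classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 16))       # hypothetical EEG features
y = rng.integers(0, 2, 300)              # 0 = non-seizure, 1 = seizure

# Three distinct bodies of samples: train, validation, test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print("validation accuracy:", net.score(X_val, y_val))
print("test accuracy:      ", net.score(X_test, y_test))
```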
Abstract: Petri Nets (PNs) have proven to be an effective graphical, mathematical, simulation, and control tool for Discrete Event Systems (DES). However, with the growth in the complexity of modern industrial and communication systems, PNs have been found inadequate to address the problems of uncertainty and imprecision in data. This gave rise to the amalgamation of fuzzy logic with Petri nets, and a new tool emerged under the name of Fuzzy Petri Nets (FPNs). Although a great deal of research has been done on FPNs and a number of their applications have been proposed, their basic types and structure are still ambiguous. Therefore, in this research, an effort is made to categorize FPNs according to their structure and algorithms. Further, a literature review of the applications of FPNs in the light of this classification has been carried out.
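For concreteness, a small sketch of one widely used FPN execution semantics (threshold-gated transitions with certainty factors); the place names, threshold, and certainty factor are hypothetical, and the FPN variants the survey classifies differ precisely in rules like these.

```python
# Sketch: one common FPN firing rule. Each place carries a token with
# a truth degree; a transition with threshold lam and certainty
# factor mu fires when all input degrees reach lam, producing
# min(inputs) * mu at the output place.
tokens = {"p1": 0.8, "p2": 0.9}          # truth degrees of input places

def fire(inputs, output, lam=0.5, mu=0.9):
    """Fire a fuzzy transition if every input degree >= lam."""
    degrees = [tokens[p] for p in inputs]
    if all(d >= lam for d in degrees):
        tokens[output] = min(degrees) * mu
        return True
    return False

if fire(["p1", "p2"], "p3"):
    print("p3 truth degree:", tokens["p3"])   # 0.8 * 0.9 = 0.72
```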
Abstract: The problem of ranking (rank regression) has become popular in the machine learning community. This theory relates to problems in which one has to predict (guess) the order between objects on the basis of vectors describing their observed features. In many ranking algorithms a convex loss function is used instead of the 0-1 loss, which makes these procedures computationally efficient. Hence, convex risk minimizers and their statistical properties are investigated in this paper. Fast rates of convergence are obtained under conditions similar to those known from classification theory. The methods used in this paper come from the theory of U-processes as well as empirical processes.
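For concreteness, one standard formulation of the pairwise setting, written in generic notation rather than the paper's own: (X, Y) and (X', Y') are i.i.d. object/label pairs, f is a scoring function, and phi is a convex loss majorizing the 0-1 loss.

```latex
% 0-1 ranking risk and its convex surrogate (generic notation).
\[
  R(f) \;=\; \Pr\bigl\{ (Y - Y')\,\bigl(f(X) - f(X')\bigr) < 0 \bigr\}
  \qquad \text{(0-1 ranking risk)}
\]
\[
  A(f) \;=\; \mathbb{E}\,\phi\bigl( -\operatorname{sign}(Y - Y')\,
             \bigl(f(X) - f(X')\bigr) \bigr),
  \qquad \text{e.g. } \phi(u) = \max(0,\, 1 + u).
\]
```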
Abstract: This paper presents experimental studies on vibration
suppression for a cantilever beam using an Electro-Rheological (ER)
sandwich shock absorber. ER fluid (ERF) is a class of smart
materials whose rheological and mechanical properties undergo
significant, immediate, and reversible changes under the influence
of an applied electric field. First, an ER sandwich beam is
fabricated by inserting a starch-based ERF into a hollow composite
beam. At the same time, experimental investigations are focused on the
frequency response of the ERF sandwich beam. Second, the ERF
sandwich beam is attached to a cantilever beam to serve as a shock
absorber. Finally, a fuzzy semi-active vibration controller is
designed to suppress the vibration of the cantilever beam via the
ERF sandwich shock absorber. To check the consistency of the
proposed fuzzy controller, its performance is validated through a
real-time implementation.
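An illustrative sketch of a fuzzy semi-active control law of the kind described, with hypothetical membership breakpoints, singleton rule outputs, and field scale; the paper's actual rule base and inputs may differ.

```python
# Sketch: fuzzify |tip velocity| with triangular memberships, then
# defuzzify (weighted average) to an ER field command.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_field_command(velocity):
    """Map |tip velocity| (m/s) to an ER field command (kV/mm)."""
    small  = tri(abs(velocity), -0.1, 0.0, 0.5)
    medium = tri(abs(velocity),  0.0, 0.5, 1.0)
    large  = tri(abs(velocity),  0.5, 1.0, 2.0)
    # Singleton rule outputs: small -> 0, medium -> 1.5, large -> 3.
    weights, outputs = [small, medium, large], [0.0, 1.5, 3.0]
    total = sum(weights)
    return (sum(w * o for w, o in zip(weights, outputs)) / total
            if total else 0.0)

for v in (0.1, 0.5, 1.2):
    print(f"tip velocity {v} m/s -> field {fuzzy_field_command(v):.2f} kV/mm")
```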
Abstract: A Petrol Fuel Station (PFS) poses potential hazards to
the people, assets, environment, and reputation of an operating
company. Fire hazards, static electricity, and air pollution caused
by aliphatic and aromatic organic compounds are major causes of
accident/incident occurrence at fuel stations. Factors such as
carelessness, maintenance, housekeeping, slips, trips and falls,
transportation hazards, major and minor injuries, robbery, and snake
bites have the potential to create unsafe conditions. The level of
risk of these hazards varies according to location and country. The
emphasis on safety considerations by governments varies around the
world. Safety records in developed countries are much better than
safety statistics in developing countries. There is no significant
approach available to highlight the unsafe acts and unsafe
conditions during the operation and maintenance of a fuel station.
Fuel stations are among the most commonly available facilities that
contain flammable and hazardous materials. Due to their continuous
operation, fuel stations pose various hazards to the people,
environment, and assets of an organization. To control these
hazards, a specific approach is needed. PFS operation is unique
compared to other businesses. For smooth operations it demands the
involvement of the operating company, contractor, and operator
groups. This study focuses on the hazard-contributing factors that
have the potential to make PFS operation risky. One year of data was
collected, 902 activities were analyzed, and comparisons were made
to highlight significant contributing factors. The study will
provide help and assistance to PFS outlet marketing companies in
making their fuel station operations safer. It will help health,
safety, and environment (HSE) professionals to close the gaps
related to safety matters at PFSs.
Abstract: In this work, a new offline signature recognition system
based on the Radon transform, Fractal Dimension (FD), and Support
Vector Machine (SVM) is presented. In the first step, projections of
the original signatures along four specified directions are
performed using the Radon transform. Then, the FDs of the four
obtained vectors are calculated to construct a feature vector for
each signature. These vectors are then fed into an SVM classifier
for recognition of the signatures. In order to evaluate the
effectiveness of the system, several experiments are carried out.
The offline signature database from the Signature Verification
Competition (SVC) 2004 is used during all of the tests. Experimental
results indicate that the proposed method achieves a high accuracy
rate in signature recognition.
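A minimal sketch of the feature pipeline described here: Radon projections along four directions, a fractal dimension per projection, and an SVM on the resulting 4-vector. Katz's estimator stands in for the paper's FD computation, and the signature images and labels are hypothetical.

```python
# Sketch: Radon projections -> fractal dimensions -> SVM.
import numpy as np
from skimage.transform import radon
from sklearn.svm import SVC

def katz_fd(x):
    """Katz fractal dimension of a 1-D sequence."""
    n = len(x) - 1
    L = np.sum(np.sqrt(1.0 + np.diff(x) ** 2))          # curve length
    d = np.max(np.sqrt(np.arange(len(x)) ** 2 + (x - x[0]) ** 2))
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def signature_features(image, angles=(0, 45, 90, 135)):
    sinogram = radon(image, theta=list(angles), circle=False)
    return np.array([katz_fd(sinogram[:, i]) for i in range(len(angles))])

# Hypothetical signature images: 20 samples from 2 writers.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))
labels = np.repeat([0, 1], 10)

X = np.array([signature_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```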
Abstract: In this paper, a new learning approach for network
intrusion detection using the naïve Bayesian classifier and the ID3
algorithm is presented, which identifies effective attributes from
the training dataset, calculates the conditional probabilities for
the best attribute values, and then correctly classifies all the
examples of the training and testing datasets. Most current
intrusion detection datasets are dynamic and complex, and contain a
large number of attributes. Some of the attributes may be redundant
or contribute little to detection. It has been shown that
significant attribute selection is important in designing a
real-world intrusion detection system (IDS). The purpose of this
study is to identify effective
attributes from the training dataset to build a classifier for network
intrusion detection using data mining algorithms. The experimental
results on KDD99 benchmark intrusion detection dataset demonstrate
that this new approach achieves high classification rates and
reduces false positives using limited computational resources.
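A minimal sketch of the two stages described above, with mutual information as the ID3-style information-gain ranking and a Gaussian naive Bayes classifier; the feature matrix and attack labels are hypothetical stand-ins for KDD99.

```python
# Sketch: rank attributes by information gain, keep the best, then
# train a naive Bayesian classifier on the reduced attribute set.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 41))                 # 41 attributes, as in KDD99
y = rng.integers(0, 2, 500)               # 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Information gain ~ mutual information between attribute and class.
gain = mutual_info_classif(X_tr, y_tr, random_state=0)
best = np.argsort(gain)[::-1][:10]        # keep the 10 most informative
print("selected attributes:", best)

nb = GaussianNB().fit(X_tr[:, best], y_tr)
print("test accuracy:", nb.score(X_te[:, best], y_te))
```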
Abstract: Segmentation is an important step in medical image
analysis and classification for radiological evaluation or
computer-aided diagnosis. This paper presents the problem of inaccurate lung
segmentation as observed in algorithms presented by researchers
working in the area of medical image analysis. The different lung
segmentation techniques have been tested using the dataset of 19
patients consisting of a total of 917 images. We obtained datasets of
11 patients from Akron University, USA, and of 8 patients from
Aga Khan Medical University, Pakistan. After testing the algorithms
against datasets, the deficiencies of each algorithm have been
highlighted.
Abstract: This paper presents an approach based on the
adoption of a distributed cognition framework and a non-parametric
multicriteria evaluation methodology (DEA) designed specifically to
compare e-commerce websites from the consumer/user viewpoint. In
particular, the framework considers a website's relative efficiency
as a measure of its quality and usability. A website is modelled as
a black box capable of providing the consumer/user with a set of
functionalities. When the consumer/user interacts with the website to
perform a task, he/she is involved in a cognitive activity, sustaining a
cognitive cost to search, interpret and process information, and
experiencing a sense of satisfaction. The degree of ambiguity and
uncertainty he/she perceives and the required search time determine
the size of the effort, and hence the amount of cognitive cost,
he/she has to sustain to perform the task. By contrast, performing
the task and achieving the result induce a sense of gratification,
satisfaction, and usefulness. In total, 9 variables are measured,
classified into a set of 3 website macro-dimensions (user
experience, site navigability, and structure). The framework is implemented to
compare 40 websites of businesses performing electronic commerce
in the information technology market. A questionnaire to collect
subjective judgements for the websites in the sample was purposely
designed and administered to 85 university students enrolled in
computer science and information systems engineering
undergraduate courses.
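For concreteness, a minimal sketch of an input-oriented CCR DEA model of the kind such a methodology relies on, solved per website (DMU) as a linear program; the input/output data and their interpretation are hypothetical.

```python
# Sketch: CCR DEA efficiency per DMU via linear programming.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_dmu = 40
inputs  = rng.uniform(1, 10, (n_dmu, 2))   # e.g. cognitive cost, search time
outputs = rng.uniform(1, 10, (n_dmu, 1))   # e.g. satisfaction score

def ccr_efficiency(o):
    """Efficiency of DMU o; variables are [u (outputs), v (inputs)]."""
    n_out, n_in = outputs.shape[1], inputs.shape[1]
    c = np.concatenate([-outputs[o], np.zeros(n_in)])  # maximize u . y_o
    # No DMU may exceed efficiency 1: u . y_j - v . x_j <= 0 for all j.
    A_ub = np.hstack([outputs, -inputs])
    b_ub = np.zeros(n_dmu)
    # Normalization: v . x_o = 1.
    A_eq = np.concatenate([np.zeros(n_out), inputs[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun

scores = [ccr_efficiency(o) for o in range(n_dmu)]
print("efficient websites:", [o for o, s in enumerate(scores) if s > 0.999])
```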
Abstract: The identification and elimination of bad
measurements is one of the basic functions of a robust state
estimator, as bad data corrupt the results of state estimation
performed with the popular weighted least squares method.
However, this is a difficult problem to handle, especially when
dealing with multiple errors of the interacting and conforming type.
In this paper, a self-adaptive genetic-based algorithm is proposed.
The algorithm utilizes the results of the classical linearized
normal-residuals approach to tune the genetic operators; thus,
instead of making a randomized search throughout the whole search
space, it performs a directed search, and the optimum solution is
obtained at very early stages (a maximum of 5 generations). The
algorithm utilizes accumulating databases of already computed cases
to reduce the computational burden to a minimum. Tests are
conducted with reference to the standard IEEE test systems. Test
results are very promising.
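An illustrative sketch of such a residual-guided genetic search on a linear measurement model z = Hx + e; the system matrix, measurements, removal penalty, and GA settings are all hypothetical, and the residual-based tuning of operators is reduced here to seeding the population from the largest residuals.

```python
# Sketch: a chromosome is a binary mask of suspect measurements;
# fitness is the least-squares objective after removing the flagged
# measurements, plus a penalty per removal.
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_state = 12, 4
H = rng.standard_normal((n_meas, n_state))
x_true = rng.standard_normal(n_state)
z = H @ x_true + 0.01 * rng.standard_normal(n_meas)
z[3] += 5.0                                  # inject one bad measurement

def objective(mask):
    keep = ~mask
    x_hat, *_ = np.linalg.lstsq(H[keep], z[keep], rcond=None)
    r = z[keep] - H[keep] @ x_hat
    return np.sum(r ** 2) + 0.05 * mask.sum()   # penalize removals

# Seed the population from the largest residuals (directed search).
x0, *_ = np.linalg.lstsq(H, z, rcond=None)
suspects = np.argsort(np.abs(z - H @ x0))[::-1][:3]
pop = np.zeros((20, n_meas), dtype=bool)
for chrom in pop:
    chrom[rng.choice(suspects, rng.integers(0, 3), replace=False)] = True

for _ in range(5):                           # a maximum of 5 generations
    fitness = np.array([objective(c) for c in pop])
    parents = pop[np.argsort(fitness)[:10]]  # truncation selection
    children = parents.copy()
    children ^= rng.random(children.shape) < 0.05   # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([objective(c) for c in pop])]
print("measurements flagged as bad:", np.flatnonzero(best))
```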