Abstract: The objective of this research is to select the most
accurate model, using neural network techniques, to screen
prospective students enrolling in an IT course by electronic learning
at Suan Sunandha Rajabhat University. It is designed to help students
select appropriate courses by themselves. The results showed
that the most accurate model was 100-fold cross-validation, with an
accuracy of 73.58%.
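The cross-validation used for model selection can be sketched minimally as below; the toy data and the nearest-class-mean classifier standing in for the neural network are assumptions, not the study's data or model.

```python
import random

def k_fold_accuracy(data, labels, k, train_fn, predict_fn):
    """Average held-out accuracy over k folds."""
    indices = list(range(len(data)))
    random.Random(0).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    accuracies = []
    for fold in folds:
        held_out = set(fold)
        train_x = [data[i] for i in indices if i not in held_out]
        train_y = [labels[i] for i in indices if i not in held_out]
        model = train_fn(train_x, train_y)
        correct = sum(predict_fn(model, data[i]) == labels[i] for i in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / len(accuracies)

# Toy 1-D data; a nearest-class-mean rule stands in for the neural network.
data = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 0.15, 0.95, 0.25, 1.05]
labels = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]

def train(xs, ys):
    mean = lambda c: sum(x for x, y in zip(xs, ys) if y == c) / max(1, ys.count(c))
    return mean(0), mean(1)

def predict(model, x):
    m0, m1 = model
    return 0 if abs(x - m0) <= abs(x - m1) else 1

acc = k_fold_accuracy(data, labels, 5, train, predict)
```

The same averaging over folds yields the single accuracy figure an experiment like this reports.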
Abstract: It is sometimes difficult to differentiate between
innocent murmurs and pathological murmurs during auscultation. In
these difficult cases, an intelligent stethoscope with decision support
abilities would be of great value. In this study, using a dog model,
phonocardiographic recordings were obtained from 27 boxer dogs
with various degrees of aortic stenosis (AS) severity. As a reference
for severity assessment, continuous wave Doppler was used. The data
were analyzed with recurrence quantification analysis (RQA) with
the aim of finding features able to distinguish innocent murmurs from
murmurs caused by AS. Four out of eight investigated RQA features
showed significant differences between innocent murmurs and
pathological murmurs. Using a plain linear discriminant analysis
classifier, the best pair of features (recurrence rate and entropy)
resulted in a sensitivity of 90% and a specificity of 88%. In
conclusion, RQA provides valid features that can be used to
differentiate between innocent murmurs and murmurs caused by
AS.
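Two of the RQA features named above, recurrence rate and entropy (of the diagonal line length distribution), can be sketched as follows; the toy periodic signal and the threshold eps are assumptions, not the phonocardiographic recordings.

```python
import math

def recurrence_matrix(signal, eps):
    """R[i][j] = 1 when samples i and j are closer than eps."""
    n = len(signal)
    return [[1 if abs(signal[i] - signal[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    """Fraction of recurrent points in the recurrence plot."""
    n = len(R)
    return sum(map(sum, R)) / (n * n)

def diagonal_entropy(R, lmin=2):
    """Shannon entropy of the histogram of diagonal line lengths."""
    n = len(R)
    lengths = []
    for k in range(1, n):                  # diagonals above the main one
        run = 0
        for i in range(n - k):
            if R[i][i + k]:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
        if run >= lmin:
            lengths.append(run)
    if not lengths:
        return 0.0
    counts = {}
    for length in lengths:
        counts[length] = counts.get(length, 0) + 1
    total = len(lengths)
    return -sum(c / total * math.log(c / total) for c in counts.values())

signal = [math.sin(0.5 * i) for i in range(40)]   # toy periodic "murmur"
R = recurrence_matrix(signal, eps=0.3)
rr = recurrence_rate(R)
ent = diagonal_entropy(R)
```

Features such as these, computed per recording, are what a linear discriminant classifier like the one above would separate.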
Abstract: The effect of the attendance percentage, the overall
GPA, and the number of credit hours a student is enrolled in during a
specific semester on the grade attained in a specific course has been
studied. The study was performed on three courses offered by the
Industrial Engineering Department at the Hashemite University in
Jordan. It revealed that the grade attained by a student is
strongly affected by the attendance percentage and the overall GPA,
with an R2 value of 52.5%. Another model that was
investigated is the relation between the semester GPA and the
attendance percentage, the number of credit hours enrolled in during a
specific semester, and the overall GPA. This model gave a strong
relationship between the semester GPA and the attendance percentage
and overall GPA, with an R2 value of 76.2%.
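The kind of least-squares fit behind R2 figures like these can be sketched with a single predictor; the attendance/grade pairs below are invented for illustration, not the Hashemite University data.

```python
def linear_fit_r2(x, y):
    """Least-squares line y = a + b*x and the resulting R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

attendance = [60, 70, 80, 85, 90, 95, 100]   # invented attendance percentages
grade      = [55, 62, 70, 74, 78, 84, 90]    # invented course grades
a, b, r2 = linear_fit_r2(attendance, grade)
```

With several predictors (attendance, GPA, credit hours) the same R2 definition applies, just with a multiple-regression fit.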
Abstract: In the modern era, the biggest challenge facing the
software industry is the advent of new technologies. Software
engineers are therefore gearing up to meet and manage
change in large software systems, and they find it difficult to deal
with software cognitive complexity. In the last few years many
metrics have been proposed to measure the cognitive complexity of
software. This paper aims at a comprehensive survey of software
cognitive complexity metrics. Some classic and efficient software
cognitive complexity metrics, such as Class Complexity (CC),
Weighted Class Complexity (WCC), Extended Weighted Class
Complexity (EWCC), Class Complexity due to Inheritance (CCI) and
Average Complexity of a program due to Inheritance (ACI), are
discussed and analyzed. The comparison and the relationship of these
metrics of software complexity are also presented.
Abstract: The objective of this research is to study principal
component analysis for classification of 67 soil samples collected from
different agricultural areas in the western part of Thailand. Six soil
properties were measured on the soil samples and are used as original
variables. Principal component analysis is applied to reduce the
number of original variables. A model based on the first two
principal components accounts for 72.24% of the total variance. Score
plots of the first two principal components were used to map the
agricultural areas, divided into horticulture, field crops and wetland.
The results showed some relationships between soil properties and
agricultural areas. PCA was shown to be a useful tool for classifying
agricultural areas based on soil properties.
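The explained-variance computation behind a figure like 72.24% can be sketched as follows; the 6-variable "soil" rows are randomly generated stand-ins, and power iteration with deflation replaces a library eigensolver.

```python
import random

def covariance(data):
    """Sample covariance matrix of row-wise observations."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / (n - 1)
             for j in range(d)] for i in range(d)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def power_iteration(M, iters=500):
    """Dominant eigenpair of a symmetric matrix."""
    v = [1.0] * len(M)
    for _ in range(iters):
        w = matvec(M, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(vi * wi for vi, wi in zip(v, matvec(M, v)))
    return eigval, v

def top_two_eigvals(M):
    """Largest two eigenvalues via Hotelling deflation."""
    lam1, v = power_iteration(M)
    deflated = [[M[i][j] - lam1 * v[i] * v[j] for j in range(len(v))]
                for i in range(len(M))]
    lam2, _ = power_iteration(deflated)
    return lam1, lam2

rng = random.Random(1)
base = [rng.gauss(0, 1) for _ in range(20)]
# 20 samples, 6 variables: three correlated, three independent "soil properties"
data = [[b + rng.gauss(0, 0.3) for _ in range(3)] + [rng.gauss(0, 1) for _ in range(3)]
        for b in base]
cov = covariance(data)
lam1, lam2 = top_two_eigvals(cov)
explained = (lam1 + lam2) / sum(cov[i][i] for i in range(6))
```

The ratio of the top two eigenvalues to the trace of the covariance matrix is exactly the "variance accounted for by the first two components" the abstract reports.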
Abstract: Deprivation indices are widely used in public health
studies. These indices are also referred to as indices of inequality or
disadvantage. Although many indices have been built before,
it is believed to be less appropriate to apply existing
indices to other countries or areas with different
socio-economic conditions and different geographical characteristics.
The objective of this study is to construct an index based on the
geographical and socio-economic factors in Peninsular Malaysia,
defined as the weighted household-based deprivation index.
This study employed variables based on household items,
household facilities, school attendance and education level obtained
from the Malaysia 2000 census report. Factor analysis is used to
extract latent variables from the indicators, reducing the
observed variables to a smaller number of components or factors.
Based on the factor analysis, two extracted factors were selected,
known as the Basic Household Amenities and Middle-Class Household
Item factors. It is observed that districts with lower index values
are located in the less developed states such as Kelantan, Terengganu
and Kedah. Meanwhile, the areas with high index values are located
in developed states such as Pulau Pinang, W.P. Kuala Lumpur and
Selangor.
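A minimal sketch of a weighted household-based index is given below; the district indicators and the factor weights are invented for illustration (the paper derives its weights from factor analysis), and the two indicator groups only mirror the Basic Household Amenities and Middle-Class Household Item factors.

```python
def standardize(values):
    """z-scores of a list of indicator values."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

districts = ["A", "B", "C", "D"]
piped_water = [95, 60, 80, 40]   # invented % of households, basic amenity
electricity = [99, 70, 90, 55]   # invented % of households, basic amenity
car_owned   = [60, 20, 45, 10]   # invented % of households, middle-class item

weights = {"basic": 0.6, "middle": 0.4}   # assumed factor weights

z_water, z_elec, z_car = (standardize(v) for v in (piped_water, electricity, car_owned))
basic = [(w + e) / 2 for w, e in zip(z_water, z_elec)]
index = [weights["basic"] * b + weights["middle"] * c for b, c in zip(basic, z_car)]
ranking = sorted(zip(districts, index), key=lambda t: t[1])   # most deprived first
```

As in the study, districts with the lowest index values surface as the most deprived.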
Abstract: Compensating physiological motion in the context
of minimally invasive cardiac surgery has become an attractive
problem, since such surgery outperforms traditional cardiac
procedures and offers remarkable benefits. Owing to space restrictions,
computer vision techniques have proven to be the most practical and
suitable solution. However, the lack of robustness and efficiency of
existing methods makes physiological motion compensation an open
and challenging problem. This work focuses on increasing robustness
and efficiency by exploring the classes of L1- and L2-regularized
optimization, emphasizing the use of explicit regularization. Both
approaches are based on natural features of the heart and use
intensity information. The results pointed to the L1-regularized
optimization class as the best, since it offered the shortest
computational cost and the smallest average error, and it proved to
work even under complex deformations.
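The contrast between the two regularization classes can be sketched on a scalar problem; the closed-form proximal solutions below illustrate why an L1 penalty produces exact zeros while an L2 penalty only shrinks, which is the basic behaviour the motion-compensation costs inherit.

```python
def l1_prox(a, lam):
    """Exact minimizer of 0.5*(w - a)**2 + lam*abs(w): soft-thresholding."""
    if a > lam:
        return a - lam
    if a < -lam:
        return a + lam
    return 0.0

def l2_prox(a, lam):
    """Exact minimizer of 0.5*(w - a)**2 + 0.5*lam*w**2: shrinkage."""
    return a / (1.0 + lam)

a, lam = 0.3, 0.5
w1 = l1_prox(a, lam)   # L1 sets small coefficients exactly to zero
w2 = l2_prox(a, lam)   # L2 only shrinks them toward zero
```

The same proximal steps are the building blocks of iterative solvers for the full regularized optimization problems.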
Abstract: One-way functions are functions that are easy to
compute but hard to invert. Their existence is an open conjecture; it
would imply the existence of intractable problems (i.e., NP problems
which are not in the P complexity class).
If true, the existence of one-way functions would have an impact
on the theoretical framework of physics, in particular on quantum
mechanics. This aspect of one-way functions has never been shown
before.
In the present work, we put forward the following.
We can calculate the microscopic state (say, the particle spin in the
z direction) of a macroscopic system (a measuring apparatus
registering the particle z-spin) from the system's macroscopic state
(the apparatus output); let us call this association the function F. The
question is: can we compute the function F in the inverse direction?
In other words, can we compute the macroscopic state of the system
from its microscopic state (the preimage F^-1)?
In the paper, we assume that the function F is a one-way function.
The assumption implies that at the macroscopic level the Schrödinger
equation becomes infeasible to compute. This infeasibility plays the
role of a limit on the validity of the linear Schrödinger equation.
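The easy-forward/hard-backward asymmetry the argument relies on can be illustrated with modular exponentiation, a standard one-way-function candidate; the small prime and base below are demonstration-scale assumptions, not cryptographic parameters.

```python
# Modular exponentiation: a standard candidate one-way function.
p, g = 1009, 11          # small prime modulus and base, demonstration only

def forward(x):
    return pow(g, x, p)  # fast even for very large exponents

def invert(y):
    """Brute-force preimage search (discrete logarithm): the only generic
    method known, with running time exponential in the bit length of p."""
    for x in range(p - 1):
        if pow(g, x, p) == y:
            return x
    return None

y = forward(123)
x = invert(y)
```

At realistic sizes (thousands of bits), `forward` remains instantaneous while `invert` becomes astronomically expensive; that gap is what "hard to invert" means.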
Abstract: This paper develops driver reaction-time models for
car-following analysis based on human factors. The reaction time
was classified as brake-reaction time (BRT) and
acceleration/deceleration reaction time (ADRT). The BRT occurs
when the lead vehicle is braking and its brake light is on, while the
ADRT occurs when the driver reacts to adjust his/her speed using the
gas pedal only. The study evaluates the effect of driver
characteristics and traffic kinematic conditions on the driver reaction
time in a car-following environment. The kinematic conditions
introduced urgency and expectancy based on the braking behaviour
of the lead vehicle at different speeds and spacing. The kinematic
conditions were used for evaluating the BRT and are classified as
normal, surprised, and stationary. Data were collected on a driving
simulator integrated into a real car and included the BRT and ADRT
(as dependent variables) and the driver's age, gender, driving experience,
driving intensity (driving hours per week), vehicle speed, and
spacing (as independent variables). The results showed that there was
a significant difference in the BRT in the normal, surprised, and
stationary scenarios, and supported the hypothesis that both urgency
and expectancy have significant effects on the BRT. The driver's age, gender,
speed, and spacing were found to be significant variables for the
BRT in all scenarios. The results also showed that the driver's age and
gender were significant variables for the ADRT. The research
presented in this paper is part of a larger project to develop a
driver-sensitive in-vehicle rear-end collision warning system.
Abstract: In this paper, a clustering algorithm named K-harmonic
means (KHM) was employed in the training of Radial
Basis Function Networks (RBFNs). KHM organizes the data into
clusters and determines the centres of the basis functions. The popular
clustering algorithms K-means (KM) and Fuzzy c-means
(FCM) are highly dependent on the initial identification of elements
that represent the clusters well. KHM avoids this problem,
which leads to improved classification performance
compared to the other clustering algorithms. A comparison of
classification accuracy was performed between KM, FCM and KHM
on the benchmark data sets Iris Plant, Diabetes and Breast
Cancer. RBFN training with the KHM algorithm shows better accuracy
on these classification problems.
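One KHM iteration can be sketched in one dimension; the toy data, the exponent p, and the deliberately poor initialization are assumptions, and the subsequent RBFN training step is not reproduced here.

```python
def khm_step(points, centers, p=3.5):
    """One K-harmonic means update: soft memberships m and boosting
    weights w, then new centres as weighted means."""
    eps = 1e-12
    d = [[max(abs(x - c), eps) for c in centers] for x in points]
    num = [0.0] * len(centers)
    den = [0.0] * len(centers)
    for i, x in enumerate(points):
        s_p = sum(dij ** (-p) for dij in d[i])
        s_p2 = sum(dij ** (-p - 2) for dij in d[i])
        weight = s_p2 / (s_p * s_p)       # w(x_i): boosts poorly covered points
        for j, dij in enumerate(d[i]):
            m = dij ** (-p - 2) / s_p2    # m(c_j | x_i): soft membership
            num[j] += m * weight * x
            den[j] += m * weight
    return [num[j] / den[j] for j in range(len(centers))]

points = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]   # two obvious 1-D clusters
centers = [0.4, 0.5]                       # deliberately poor initialization
for _ in range(30):
    centers = khm_step(points, centers)
```

Although both centres start between the clusters, the harmonic weighting pulls them apart toward one cluster each, which is the insensitivity to initialization the abstract refers to.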
Abstract: This paper presents a semi-supervised learning algorithm called Iterative-Cross Training (ICT) to solve Web page classification problems. We apply inductive logic programming (ILP) as the strong learner in ICT. The objective of this research is to evaluate the potential of the strong learner to boost the performance of the weak learner in ICT. We compare the results with supervised Naive Bayes, a well-known algorithm for text classification problems. The performance of our learning algorithm is also compared with other semi-supervised learning algorithms, namely Co-Training and EM. The experimental results show that the ICT algorithm outperforms those algorithms and that the performance of the weak learner can be enhanced by the ILP system.
Abstract: In the traditional theory of non-uniform torsion, the
axial displacement field is expressed as the product of the unit twist
angle and the warping function. The first, variable along the
beam axis, is obtained by a global congruence condition; the second,
defined over the cross-section, is determined by solving
a Neumann problem associated with the Laplace equation, as in
the uniform torsion problem.
Since in the classical theory the warping function does not pointwise
satisfy the first indefinite equilibrium equation, the principal aim of
this work is to develop a new theory for the non-uniform torsion of
beams with axisymmetric cross-section, fully restrained at both
ends and loaded by a constant torque, that satisfies this
equation pointwise by means of a trigonometric expansion
of the axial displacement and unit twist angle functions.
Furthermore, as the classical theory is generally applied with good
results to the global and local analysis of ship structures, two beams,
the first with an open profile and the second with a closed section,
have been analyzed in order to compare the two theories.
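The decomposition at the core of the two theories can be sketched as follows; the symbols (θ for the twist angle, ω for the warping function, L for the beam length, z for the axial coordinate) are assumed notation, since the abstract does not fix symbols, and the expansion shown is only one form compatible with fully restrained ends.

```latex
% Classical theory: axial displacement as the product of the unit twist
% angle and the warping function
w(x,y,z) = \theta'(z)\,\omega(x,y)

% Proposed theory (sketch): trigonometric expansions of both the axial
% displacement and the unit twist angle, vanishing at the restrained
% ends z = 0 and z = L
w(x,y,z)   = \sum_{n=1}^{\infty} W_n(x,y)\,\sin\frac{n\pi z}{L}, \qquad
\theta'(z) = \sum_{n=1}^{\infty} \Theta_n\,\sin\frac{n\pi z}{L}
```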
Abstract: In recent years a great deal of effort has been made in the
field of face detection. The human face contains important features
that can be used by vision-based automated systems in order to
identify and recognize individuals. Face location, the primary step of
the vision-based automated systems, finds the face area in the input
image. An accurate location of the face is still a challenging task.
Viola-Jones framework has been widely used by researchers in order
to detect the location of faces and objects in a given image. Face
detection classifiers are shared by public communities, such as
OpenCV. An evaluation of these classifiers will help researchers to
choose the best classifier for their particular needs. This work focuses
on the evaluation of face detection classifiers with respect to facial
landmarks.
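One possible landmark-based scoring rule for a detection box can be sketched as follows; the box, landmark coordinates, and acceptance threshold are invented, and the paper's exact evaluation criterion may differ.

```python
def landmark_hit_rate(box, landmarks):
    """Fraction of landmarks falling inside box = (x, y, w, h)."""
    x, y, w, h = box
    inside = sum(1 for px, py in landmarks
                 if x <= px <= x + w and y <= py <= y + h)
    return inside / len(landmarks)

# Invented detection box and five classic landmarks (eyes, nose, mouth corners).
box = (100, 80, 120, 140)
landmarks = [(130, 120), (190, 120), (160, 160), (140, 195), (180, 195)]

rate = landmark_hit_rate(box, landmarks)
accepted = rate >= 0.8    # assumed acceptance threshold
```

Scoring each detection this way across a landmark-annotated dataset gives a per-classifier accuracy that can be compared across the shared OpenCV cascades.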
Abstract: In this paper, a mathematical model is proposed to
estimate the dropping probabilities of cellular wireless networks by
queuing handoff calls instead of reserving guard channels. Usually, prioritized
handling of handoff calls is done with the help of guard channel
reservation. To evaluate the proposed model, gamma inter-arrival and
general service time distributions have been considered. The prevention
of some attempted calls from reaching the switching center, due
to electromagnetic propagation failure or whimsical user behaviour
(missed calls, prepaid balance, etc.), makes the inter-arrival time of
the input traffic follow a gamma distribution. The performance is
evaluated and compared with that of the guard channel scheme.
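The queued-handoff idea can be sketched with a small discrete-event simulation; the gamma inter-arrival parameters, the exponential service times, and the dwell-time limit are assumptions for illustration, and the paper evaluates its model analytically rather than by simulation.

```python
import heapq
import random

def simulate_dropping(n_calls, channels, max_wait, seed=0):
    """Fraction of handoff calls dropped when calls queue for a free channel
    but tolerate at most max_wait time units (their dwell in the handoff area)."""
    rng = random.Random(seed)
    free_at = [0.0] * channels            # time at which each channel frees up
    heapq.heapify(free_at)
    t, dropped = 0.0, 0
    for _ in range(n_calls):
        t += rng.gammavariate(2.0, 0.5)   # gamma inter-arrival times (assumed parameters)
        service = rng.expovariate(0.5)    # a "general" service time; exponential here
        start = max(t, free_at[0])        # earliest instant a channel is available
        if start - t > max_wait:          # would wait longer than its dwell time
            dropped += 1
        else:
            heapq.heapreplace(free_at, start + service)
    return dropped / n_calls

p_drop = simulate_dropping(20000, channels=8, max_wait=1.0)
```

Sweeping `channels` or `max_wait` in a simulation like this exposes the same dropping-probability trade-off that the analytical model captures exactly.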
Abstract: High voltage generators are subject to higher
voltage ratings and are designed to operate in harsh conditions.
Stator windings are the main component of generators, in which
electrical, magnetic and thermal stresses remain the major causes
of insulation degradation and accelerated aging. A large number of
generators fail due to stator winding problems, mainly insulation
deterioration. Insulation degradation assessment plays a vital role in
asset life management. Stator failure is mostly catastrophic,
causing significant damage to the plant; besides the loss of generation,
stator failure involves heavy repair or replacement costs.
Electro-thermal analysis is the main tool for improving the design of
stator slot insulation. Dielectric parameters such as insulation
thickness, spacing, material types, and the geometry of the winding
and slot are major design considerations. A very powerful method
available to analyze electro-thermal performance is the Finite Element
Method (FEM), which is used in this paper. The analysis of various
stator coil and slot configurations is used to design a better dielectric
system that reduces electrical and thermal stresses in order to increase
the power of the generator within the same core volume. This paper
describes the process used to perform classical design and improvement
analysis of stator slot insulation.
Abstract: We demonstrate through a sample application, E-banking,
that the Web Service Modelling Language Ontology component
can be used as a very powerful object-oriented database design
language with logic capabilities. Its conceptual syntax allows the
definition of class hierarchies, while its logic syntax allows the definition
of constraints in the database. Relations, which are available for
modelling relations of three or more concepts, can be connected to
logical expressions, allowing the implicit specification of database
content. Using a reasoning tool, logic queries can also be made
against the database in simulation mode.
Abstract: In general, class complexity is measured based on
factors such as Lines of Code (LOC), Function Points
(FP), Number of Methods (NOM), Number of Attributes (NOA), and so on.
Several new techniques, methods and metrics with different factors
have been developed by researchers for calculating the complexity of
a class in Object Oriented (OO) software. Earlier, Arockiam et al.
proposed a new complexity measure, Extended Weighted Class
Complexity (EWCC), an extension of the Weighted Class Complexity
proposed by Mishra et al. EWCC is the sum of the cognitive weights of
the attributes and methods of the class and of the derived classes.
In EWCC, the cognitive weight of each attribute is taken to be 1.
The main problem with the EWCC metric is that every attribute holds
the same weight, whereas in general the cognitive load of
understanding different types of attributes is not the same. We
therefore propose a new metric, Attribute Weighted Class Complexity
(AWCC), in which cognitive weights are assigned to attributes
according to the effort needed to understand their data types. Case
studies and experiments show the proposed metric to be a better
measure of the complexity of a class with attributes.
Abstract: This study applies the sequential panel selection
method (SPSM) procedure proposed by Chortareas and Kapetanios
(2009) to investigate the time-series properties of energy
consumption in 50 US states from 1963 to 2009. SPSM involves the
classification of the entire panel into a group of stationary series and
a group of non-stationary series to identify how many and which
series in the panel are stationary processes. Empirical results obtained
through SPSM with the panel KSS unit root test developed by Ucar
and Omay (2009) combined with a Fourier function indicate that
energy consumption in all 50 US states is stationary. The results
of this study have important policy implications for the 50 US states.
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm has achieved a recognition rate of 93.5%.
Abstract: In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy, and patients with epileptic syndrome during a seizure). First, the Discrete Wavelet Transform (DWT) with Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal into the resolution levels of its components (δ, θ, α, β and γ), and Parseval's theorem is employed to extract the percentage distribution of energy of the EEG signal at the different resolution levels. Second, the neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using a total of 300 EEG signals. The results showed that the proposed classifier has the ability to recognize and classify EEG signals efficiently.
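The feature-extraction step can be sketched with a Haar DWT (a simple stand-in for whatever wavelet the paper uses); because the Haar transform is orthonormal, Parseval's theorem guarantees the per-level coefficient energies partition the signal energy, which yields the percentage distribution used as features. The toy signal below is invented, not EEG data.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar DWT: approximation and detail."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def energy_distribution(signal, levels):
    """Percentage of signal energy per sub-band (valid by Parseval's
    theorem, since the Haar transform is orthonormal)."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in approx))   # final approximation band
    total = sum(energies)
    return [100.0 * e / total for e in energies]

# Toy signal: a slow plus a faster sinusoid, 256 samples.
sig = [math.sin(2 * math.pi * 4 * n / 256) + 0.5 * math.sin(2 * math.pi * 60 * n / 256)
       for n in range(256)]
pct = energy_distribution(sig, levels=5)
```

A feature vector like `pct`, one entry per sub-band, is the kind of input a neural network classifier would receive per recording.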