Abstract: This paper describes a new supervised fusion (hybrid)
electrocardiogram (ECG) classification solution consisting of a new
geometrical feature extraction from the QRS complex and a new version
of the learning vector quantization (LVQ) classification algorithm
aimed at overcoming the stability-plasticity dilemma. Toward this
objective, after detection and delineation of the major events of the ECG
signal via an appropriate algorithm, each QRS region and its
corresponding discrete wavelet transform (DWT) are treated as
virtual images, and each of them is divided into eight polar sectors.
The curve length of each extracted segment is then calculated
and used as an element of the feature space. To increase the
robustness of the proposed classification algorithm against noise,
artifacts, and arrhythmic outliers, a fusion structure consisting of
five classifiers, namely a Support Vector Machine (SVM), a
Modified Learning Vector Quantization (MLVQ) network, and three Multi-Layer
Perceptron-Back Propagation (MLP-BP) neural networks with
different topologies, was designed and implemented. The proposed
algorithm was applied to all 48 MIT-BIH Arrhythmia Database
records (within-record analysis), and the discrimination power of the
classifier in isolating the different beat types of each record was
assessed; as a result, an average accuracy of Acc = 98.51%
was obtained. The proposed method was also applied to six
arrhythmia classes (Normal, LBBB, RBBB, PVC, APB, PB) belonging
to 20 different records of the aforementioned database (between-record
analysis), and an average accuracy of Acc = 95.6% was achieved.
To evaluate the performance of the new proposed hybrid learning
machine, the obtained results were compared with similar peer-reviewed
studies in this area.
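The curve-length feature can be sketched as follows. This is a minimal illustration assuming the feature is the summed Euclidean distance between consecutive samples, with the eight-sector split read as a partition of points by polar angle about the centroid; the paper's exact sectorization of the virtual images is not reproduced here.

```python
import numpy as np

def curve_length(x, y):
    """Sum of Euclidean distances between consecutive points of a sampled curve."""
    dx = np.diff(np.asarray(x, dtype=float))
    dy = np.diff(np.asarray(y, dtype=float))
    return float(np.sum(np.hypot(dx, dy)))

def sector_features(x, y, n_sectors=8):
    """Split points into polar sectors around the centroid and return the
    curve length of each sector's points (a hypothetical reading of the
    eight-sector scheme described in the abstract)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    cx, cy = x.mean(), y.mean()
    ang = np.mod(np.arctan2(y - cy, x - cx), 2 * np.pi)
    idx = (ang / (2 * np.pi) * n_sectors).astype(int).clip(0, n_sectors - 1)
    feats = []
    for s in range(n_sectors):
        m = idx == s
        feats.append(curve_length(x[m], y[m]) if m.sum() > 1 else 0.0)
    return feats
```

The eight per-sector curve lengths (for the QRS image and its DWT image) would then be concatenated into the feature vector fed to the classifiers.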
Abstract: As networking has become popular, Web-based learning
has become a trend in tool design. Moreover, five-axis
machining has recently been widely used in industry; however, it has
potential axial-table collision problems. This paper therefore
proposes an efficient web-based collision-detection learning tool for
five-axis machining. Because collision detection consumes more
resources than most devices can support, this research uses a
systematic, web-based approach to detect collisions. The
methodologies include kinematics analyses of the five-axis motions,
the separating-axis method for collision detection, and computer
simulation for verification. The machine structure is modeled in STL
format in CAD software. The input to the detection system is the
G-code part program, which describes the tool motions that produce the
part surface. This research produced a simulation program in the C
programming language and demonstrated a five-axis machining
example with collision detection on a web site. The system simulates the
five-axis CNC motion along the tool trajectory and detects any collisions
according to the input G-codes, and it also supports a high-performance
web service benefiting from C. The results show that our method
improves computational efficiency by a factor of 4.5 compared with the
conventional detection method.
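The separating-axis method named above can be illustrated in two dimensions: two convex shapes are disjoint if and only if some edge normal yields non-overlapping projections. A minimal Python sketch follows (the paper's implementation works on 3-D STL meshes in C; this only shows the core test):

```python
import numpy as np

def project(poly, axis):
    """Interval of a polygon's vertex projections onto an axis."""
    d = poly @ axis
    return d.min(), d.max()

def convex_polygons_collide(p, q):
    """Separating-axis test for two 2-D convex polygons (vertices in order).
    If any edge normal yields disjoint projection intervals, the polygons
    do not intersect."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    for poly in (p, q):
        n = len(poly)
        for i in range(n):
            edge = poly[(i + 1) % n] - poly[i]
            axis = np.array([-edge[1], edge[0]])  # normal of this edge
            pmin, pmax = project(p, axis)
            qmin, qmax = project(q, axis)
            if pmax < qmin or qmax < pmin:
                return False  # separating axis found
    return True
```

In 3-D the candidate axes are the face normals of both bodies plus the cross products of their edge directions, but the overlap test per axis is identical.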
Abstract: Fixed-point simulation results are used for the
performance measure of inverting matrices by Cholesky
decomposition. The fixed-point Cholesky decomposition algorithm
is implemented using a fixed-point reconfigurable processing
element. The reconfigurable processing element provides all
mathematical operations required by Cholesky decomposition. The
fixed-point word-length analysis is based on simulations using
different condition numbers and different matrix sizes. Simulation
results show that a 16-bit word length gives sufficient performance
for small matrices with low condition numbers. Larger matrices and
higher condition numbers require more dynamic range for a fixed-point
implementation.
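Word-length experiments of this kind can be mimicked in software by quantizing every intermediate result of the factorization. Below is a sketch assuming simple round-to-nearest quantization with a chosen number of fractional bits; the actual arithmetic of the reconfigurable processing element is not modeled.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with the given number of fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def cholesky_fixed_point(A, frac_bits=12):
    """Cholesky factorization A = L L^T with every intermediate result
    quantized, as a crude stand-in for a fixed-point implementation."""
    A = np.asarray(A, float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        s = A[j, j] - quantize(np.sum(L[j, :j] ** 2), frac_bits)
        L[j, j] = quantize(np.sqrt(s), frac_bits)
        for i in range(j + 1, n):
            t = A[i, j] - quantize(L[i, :j] @ L[j, :j], frac_bits)
            L[i, j] = quantize(t / L[j, j], frac_bits)
    return L
```

Sweeping `frac_bits` and the condition number of `A` and measuring the reconstruction error `‖L Lᵀ − A‖` reproduces the kind of word-length study the abstract describes.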
Abstract: The stability of Newtonian and non-Newtonian extending films under local or global heating or cooling conditions is considered. The thickness-averaged mass, momentum, and energy equations with convective and radiative heat transfer are derived for both Newtonian and non-Newtonian fluids (the Maxwell, PTT, and Giesekus models are considered). The stability of the system is explored using either eigenvalue analysis or transient simulations. The results show that the influence of heating and cooling on stability depends strongly on the magnitude of the Peclet number. Examples of stabilization or destabilization by heating or cooling are shown for Pe
Abstract: In this paper we investigate a possible
optimization of some linear algebra problems which can be
solved by parallel processing using special arrays called
systolic arrays. We use some special types of
transformations for the design of these arrays and show
their characteristics. The main focus is on
discussing the advantages of these arrays in the parallel
computation of matrix products, with a special approach to the
design of a systolic array for matrix multiplication.
Multiplication of large matrices requires a lot of
computational time, and its complexity is O(n^3). Many
algorithms, both sequential and parallel, have been developed
with the purpose of minimizing the calculation time. Systolic
arrays are well suited for this purpose. In this paper we show
that using an appropriate transformation leads to
more efficient arrays for calculations of this type.
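The systolic computation of C = A·B can be simulated in software by stepping a wavefront of operands through an n×n grid of processing elements. The sketch below models the standard skewed dataflow, in which PE (i, j) consumes a[i, k] and b[k, j] at time step t = i + j + k; it illustrates the principle, not the specific transformed arrays derived in the paper.

```python
import numpy as np

def systolic_matmul(A, B):
    """Time-stepped simulation of an n x n systolic array computing C = A.B.
    At step t, PE (i, j) receives the operand pair with k = t - i - j,
    modelling the skewed wavefront flowing through the array."""
    A = np.asarray(A)
    B = np.asarray(B)
    n = A.shape[0]
    C = np.zeros((n, n), dtype=A.dtype)
    for t in range(3 * n - 2):          # total latency of the wavefront
        for i in range(n):
            for j in range(n):
                k = t - i - j
                if 0 <= k < n:          # operand pair present at this PE
                    C[i, j] += A[i, k] * B[k, j]
    return C
```

The simulation makes the latency visible: all n³ multiply-accumulates complete after 3n − 2 steps instead of the n³ steps of a sequential loop, since up to n² PEs work concurrently.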
Abstract: In this paper the reference current for the Voltage Source
Converter (VSC) of the Shunt Active Power Filter (SAPF) is
generated using the Synchronous Reference Frame method,
incorporating a PI controller with an anti-windup scheme. The
proposed method improves harmonic filtering by compensating for
the windup phenomenon caused by the integral term of the PI
controller.
Using the reference-frame transformation, the current is transformed
from the a-b-c stationary frame to the rotating 0-d-q frame. Using
the PI controller, the current in the 0-d-q frame is controlled to
obtain the desired reference signal. A controller with integral action
combined with an actuator that saturates can produce
undesirable effects. If the control error is so large that the integrator
saturates the actuator, the feedback path becomes ineffective, because
the actuator will remain saturated even if the process output changes.
The integrator, being an unstable system, may then integrate up to a very
large value, a phenomenon known as integrator windup.
Implementing an integrator anti-windup circuit turns off the
integral action when the actuator saturates, hence improving the
performance of the SAPF and dynamically compensating harmonics
in the power network. In this paper the system performance is
examined with a Shunt Active Power Filter simulation model.
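The conditional-integration anti-windup described above can be sketched in a few lines. The gains, limits, and step size here are illustrative placeholders, not the paper's tuned values.

```python
def pi_antiwindup(error, state, kp=1.0, ki=0.5, dt=1e-3, u_min=-1.0, u_max=1.0):
    """One step of a PI controller with conditional-integration anti-windup:
    the integrator is frozen whenever the output is saturated and the error
    would drive it further into saturation. Returns (saturated output,
    updated integrator state)."""
    integ = state
    u = kp * error + ki * integ
    saturated = u > u_max or u < u_min
    if not (saturated and (u > u_max) == (error > 0)):
        integ += error * dt          # integrate only when it helps
    u_sat = min(max(u, u_min), u_max)
    return u_sat, integ
```

Without the freeze condition, a sustained large error would grow `integ` without bound while the actuator sits at its limit, and the controller would respond sluggishly once the error reverses, which is exactly the windup effect the abstract describes.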
Abstract: Adhesively bonded joints are preferred over the
conventional methods of joining such as riveting, welding, bolting
and soldering. Some of the main advantages of adhesive joints
compared to conventional joints are the ability to join dissimilar
materials and damage-sensitive materials, better stress distribution,
weight reduction, fabrication of complicated shapes, excellent
thermal and insulation properties, vibration response and enhanced
damping control, smoother aerodynamic surfaces and an
improvement in corrosion and fatigue resistance. This paper presents
the behavior of adhesively bonded joints subjected to combined
thermal loadings, using numerical methods. The joint
configuration considers aluminum as the central adherend with six
different outer adherends, including aluminum, steel, titanium,
boron-epoxy, unidirectional graphite-epoxy and cross-ply graphite-epoxy,
together with epoxy-based adhesives. Free expansion of the joint in the x
direction was permitted, and the stresses in the adhesive layer and at the
interfaces were calculated for the different adherends.
Abstract: A new multi-inner-stage (MIS) cyclone was designed to
remove the acidic gases and fine particles produced by the electronics
industry. To characterize the gas flow in the MIS cyclone, the pressure and
velocity distributions were calculated by means of a CFD program. Also,
the flow loci of fine particles and the particle removal efficiency were
analyzed by a Lagrangian method. The efficiency was best in this study
when the outlet pressure condition was -100 mmAq.
Abstract: Prediction of bacterial virulent protein sequences can
assist the identification and characterization of novel
virulence-associated factors and the discovery of drug/vaccine targets
against proteins indispensable to pathogenicity. Gene Ontology (GO)
annotation, which describes the functions of genes and gene products as a
controlled vocabulary of terms, has been shown to be effective for a
variety of tasks such as gene expression study, GO annotation
prediction, protein subcellular localization, etc. In this study, we
propose a sequence-based method, Virulent-GO, that mines informative
GO terms as features for predicting bacterial virulent proteins.
Each protein in the datasets used by the existing method
VirulentPred is annotated by using BLAST to obtain its homologies
with known accession numbers for retrieving GO terms. After
investigating various popular classifiers using the same five-fold
cross-validation scheme, Virulent-GO using the single kind of GO-term
features achieves an accuracy of 82.5%, slightly better than
VirulentPred's 81.8% using five kinds of sequence-based features.
In the independent test evaluation, Virulent-GO also yields better
results (82.0%) than VirulentPred (80.7%). When evaluating a single
kind of feature with SVM, the GO-term feature performs much better
than each of the five kinds of sequence-based features.
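The five-fold evaluation scheme can be illustrated on binary GO-term presence/absence vectors. A nearest-centroid classifier stands in for the SVM here to keep the sketch self-contained; the data, classifier, and fold assignment are all illustrative.

```python
import numpy as np

def five_fold_accuracy(X, y, n_folds=5, seed=0):
    """Five-fold cross-validation of a nearest-centroid classifier on
    binary GO-term feature vectors (a lightweight stand-in for the SVM
    used in the study). Returns the per-fold accuracies."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for f in range(n_folds):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(n_folds) if g != f])
        c0 = X[train][y[train] == 0].mean(axis=0)   # centroid: non-virulent
        c1 = X[train][y[train] == 1].mean(axis=0)   # centroid: virulent
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        accs.append(float((pred == y[test]).mean()))
    return accs
```

Averaging the five per-fold accuracies gives the single cross-validated figure reported in the abstract.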
Abstract: In cellular networks, the limited availability of resources
has to be tapped to its fullest potential. In view of this, a
sophisticated averaging and voting technique is discussed in
this paper, wherein the available radio resources are utilized
fully by taking into consideration several network and radio
parameters that decide when a handover has to be made,
thereby reducing the load on the base station. The increase in
base-station load may be due to several unnecessary handovers,
which can be eliminated by making judicious use of the
radio and network parameters.
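One plausible reading of an averaging-and-voting handover rule is sketched below: a handover is triggered only when the candidate cell beats the serving cell by a hysteresis margin in a majority of recent measurements. The window size, vote threshold, and margin are illustrative parameters, since the abstract does not specify the exact rule.

```python
from collections import deque

def handover_decision(samples, window=5, votes_needed=3, hysteresis=2.0):
    """Averaging-and-voting sketch: for each (serving, candidate) signal
    pair (e.g. RSSI in dBm), vote on whether the candidate exceeds the
    serving cell by the hysteresis margin, and hand over only when at
    least `votes_needed` of the last `window` votes agree. The voting
    suppresses transient fluctuations that would otherwise trigger
    unnecessary handovers."""
    recent = deque(maxlen=window)
    decisions = []
    for serving, candidate in samples:
        recent.append(candidate - serving > hysteresis)
        decisions.append(sum(recent) >= votes_needed)
    return decisions
```

A single favorable spike never produces a handover under this rule, which is how the voting reduces the unnecessary handovers (and hence base-station load) mentioned above.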
Abstract: The zero-inflated strict arcsine model is a newly developed
model that has been found to be appropriate for modeling overdispersed
count data. In this study, we extend the zero-inflated strict arcsine model
to a zero-inflated strict arcsine regression model by taking into
consideration the extra variability caused by extra zeros and
covariates in count data. The maximum likelihood estimation method is
used to estimate the parameters of this zero-inflated strict arcsine
regression model.
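A zero-inflated likelihood has the generic mixture form sketched below: zeros arise either from the inflation component (probability π) or from the count distribution itself. A zero-inflated Poisson is used here purely as an illustrative stand-in, since the strict arcsine pmf is not reproduced in the abstract; the maximum likelihood fit is a crude grid search rather than the estimation procedure of the paper.

```python
import numpy as np

def zip_loglik(y, pi, lam):
    """Log-likelihood of a zero-inflated Poisson model. In the paper the
    strict arcsine pmf would replace the Poisson pmf; the zero-inflation
    mixture structure is the same."""
    y = np.asarray(y)
    zeros = y == 0
    # P(0) = pi + (1 - pi) * P_count(0)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    # positive counts: log((1 - pi) * e^-lam * lam^y / y!)
    log_fact = np.array([np.sum(np.log(np.arange(1, k + 1))) for k in y[~zeros]])
    ll_pos = np.log(1 - pi) - lam + y[~zeros] * np.log(lam) - log_fact
    return zeros.sum() * ll_zero + ll_pos.sum()

def zip_mle(y, grid=40):
    """Crude maximum-likelihood fit by grid search over (pi, lambda)."""
    best = (None, -np.inf)
    for pi in np.linspace(0.01, 0.99, grid):
        for lam in np.linspace(0.05, 10.0, grid):
            ll = zip_loglik(y, pi, lam)
            if ll > best[1]:
                best = ((pi, lam), ll)
    return best[0]
```

In the regression extension, π and the count-distribution parameter are each linked to covariates (e.g. via logit and log links), and the same likelihood is maximized over the regression coefficients.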
Abstract: In this paper, we present a simple circuit for
Manchester decoding that does not use any complicated or
programmable devices. This circuit can decode transmitted
encoded data at 90 kbps; however, higher transmission rates can be
decoded if high-speed devices are used. We also present a new
method for extracting the embedded clock from Manchester data in
order to use it for serial-to-parallel conversion. All of our
experimental measurements have been done using simulation.
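Logically, Manchester decoding reduces to mapping half-bit level pairs back to bits, which the sketch below shows in software. The paper's circuit does this in hardware, and its line convention may differ from the IEEE one assumed here.

```python
def manchester_decode(halfbits, convention="ieee"):
    """Decode a Manchester-encoded sequence of half-bit levels.
    IEEE 802.3 convention: a 0 bit is the pair (1, 0) and a 1 bit is
    (0, 1); the opposite (G. E. Thomas) convention is also supported."""
    one_pair = (0, 1) if convention == "ieee" else (1, 0)
    bits = []
    for i in range(0, len(halfbits) - 1, 2):
        pair = (halfbits[i], halfbits[i + 1])
        if pair[0] == pair[1]:
            # every valid bit cell has a mid-bit transition
            raise ValueError("invalid Manchester pair (no mid-bit transition)")
        bits.append(1 if pair == one_pair else 0)
    return bits
```

The guaranteed mid-bit transition in every cell is also what makes the embedded clock recoverable, which the abstract exploits for serial-to-parallel conversion.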
Abstract: The term private equity usually refers to any type of
equity investment in an asset in which the equity is not freely
tradable on a public stock market. Some researchers believe that
private equity contributed to the extent of the crisis and increased
the pace of its spread over the world. We do not agree with this.
On the other hand, we argue that during the economic recession
private equity might become an important source of funds for firms
with special needs (e.g. for firms seeking buyout financing, venture
capital, expansion capital or distress debt financing). However,
over-regulation of private equity in both the European Union and
the US can slow down this specific funding channel to the
economy and deepen the credit crunch during global crises.
Abstract: Recent trends in building construction in Libya are
more toward tall (high-rise) building projects. As a consequence, a
better estimation of the lateral loading in the design process is
becoming the focus of a safe and cost-effective building industry. By and
large, Libya is not considered a potential earthquake-prone zone,
making wind the dominant design lateral load. Current design
practice in the country estimates wind speeds on a largely arbitrary
basis by applying a certain factor of safety to the chosen wind
speed. Therefore, the need for a more accurate estimation of wind
speeds in Libya was the motivation behind this study. Records of
wind speed data were collected from 22 meteorological stations in
Libya and were statistically analysed. The analysis of more than four
decades of wind speed records suggests that the country can be
divided into four zones of distinct wind speeds. A computer "survey"
program was used to draw a design wind speed contour map
for the state of Libya.
The paper presents the statistical analysis of Libya's recorded
wind speed data and proposes design wind speed values for a 50-year
return period that covers the entire country.
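Design speeds for a given return period are commonly obtained from an extreme-value fit to annual-maximum wind speeds. The sketch below uses a method-of-moments Gumbel fit as one standard recipe; the paper's exact statistical procedure may differ.

```python
import numpy as np

def gumbel_return_speed(annual_maxima, return_period=50):
    """Method-of-moments Gumbel fit to annual-maximum wind speeds and the
    design speed for the given return period (years). The Gumbel quantile
    is x_T = mu - beta * ln(-ln(1 - 1/T))."""
    x = np.asarray(annual_maxima, float)
    euler_gamma = 0.5772156649
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale from the variance
    mu = x.mean() - euler_gamma * beta            # location from the mean
    p = 1.0 - 1.0 / return_period                 # annual non-exceedance prob.
    return mu - beta * np.log(-np.log(p))
```

Applying such a fit per station and contouring the resulting 50-year speeds over the four zones would yield a design wind speed map of the kind the abstract describes.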
Abstract: Fermented cassava flours (lafun) sold in Ogun and Oyo
States of Nigeria were collected from 10 markets over a period of two
months and analysed to determine their safety status. The presence of
trace metals was attributed to the high vehicular traffic around the drying
sites and markets. The cyanide and moisture contents of the samples were
also determined to assess the adequacy of fermentation and drying.
The results showed that sample OWO had the highest cyanide
content, 16.02±0.12 mg/kg, while the lowest was found in
sample OJO, with 10.51±0.10 mg/kg. The results also indicated that
sample TVE had the highest moisture content, 18.50±0.20%, while
sample OWO had the lowest, 12.46±0.47%. Copper and
lead levels were highest in TVE, with values of 28.10 mg/kg
and 1.1 mg/kg respectively, while sample BTS had the lowest values,
20.6 mg/kg and 0.05 mg/kg respectively. The high cyanide values
indicated inadequate fermentation.
Abstract: In this paper, we propose a face recognition algorithm
using AAM and Gabor features. Gabor feature vectors, which are well
known to be robust to small variations of shape, scale,
rotation, distortion, illumination, and pose in images, are popularly
employed in many object detection and
recognition algorithms. EBGM, which is prominent among face
recognition algorithms employing Gabor feature vectors, requires
localization of the facial feature points at which the Gabor feature
vectors are extracted. However, the localization method employed in
EBGM is based on Gabor jet similarity and is sensitive to initial
values, and wrong localization of the facial feature points degrades
the face recognition rate. AAM is known to be successfully applicable
to the localization of facial feature points. In this paper, we devise
a facial feature point localization method that first roughly
estimates the facial feature points using AAM and then refines them
with a Gabor jet similarity-based localization method initialized with
the rough points obtained from AAM, and we propose a face recognition
algorithm that uses this localization method together with Gabor
feature vectors. Experiments show that such a cascaded localization
method based on both AAM and Gabor jet similarity is more robust than
localization based on Gabor jet similarity alone. It is also shown
that the proposed face recognition algorithm performs better than
conventional algorithms, such as EBGM, that use Gabor jet
similarity-based localization and Gabor feature vectors.
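Gabor jets and the jet similarity used for feature-point matching can be sketched as follows. The filter parameters are illustrative, and the simplified kernel omits the phase offset and multi-scale structure used in EBGM.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    carrier at orientation theta and wavelength lam (simplified: no phase
    offset or aspect ratio)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_jet(patch, thetas=8, lam=4.0, sigma=2.0):
    """Jet of filter-response magnitudes for a square image patch over
    several orientations (a minimal version of the jets used in EBGM)."""
    k = patch.shape[0]
    jet = []
    for t in range(thetas):
        g = gabor_kernel(k, sigma, t * np.pi / thetas, lam)
        jet.append(abs(float(np.sum(patch * g))))
    return np.array(jet)

def jet_similarity(j1, j2):
    """Normalized dot product between two jets: the similarity maximized
    when matching a feature point against a stored model jet."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))
```

Refinement then amounts to searching a small neighborhood around each AAM-estimated point for the location whose jet maximizes this similarity, which is why a good initial point matters.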
Abstract: This paper presents a computational methodology
based on matrix operations for a computer-based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to
rank various non-homogeneous Poisson process software reliability
models proposed during the past 30 years to estimate software
reliability measures such as the number of remaining faults, the software
failure rate, and software reliability. The selection of the optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with a
high level of confidence, as they use a limited number of model
selection criteria. A real data set from a middle-sized software project
taken from published papers has been used to demonstrate the matrix method.
The result of this study is a ranking of SRMs based on the
permanent value of the criteria matrix formed for each model from
the comparison criteria; the software reliability model with the
highest permanent value is ranked number 1, and so on.
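The permanent of the criteria matrix can be computed with Ryser's inclusion-exclusion formula, sketched below. How the criteria matrix itself is formed and normalized follows the paper and is not reproduced here.

```python
from itertools import combinations
import numpy as np

def permanent(M):
    """Permanent of a square matrix via Ryser's formula:
    per(M) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i sum_{j in S} M[i, j].
    Exponential in n, which is fine for small criteria matrices."""
    M = np.asarray(M, float)
    n = M.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(M[:, list(cols)].sum(axis=1))
    return (-1) ** n * total
```

Unlike the determinant, the permanent sums all products over permutations with positive sign, so it aggregates the criteria scores without sign cancellations; models are then sorted by this value to obtain the ranking.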
Abstract: The output beam quality of multi-transverse-mode laser
operation is relatively poor. In order to obtain better beam quality, one
may use an aperture inside the laser resonator; in this case, various
transverse modes can be selected. We have selected various
transverse modes both by simulation and by experiment. By
inserting a circular aperture inside the diode end-pumped Nd:YAG
pulsed laser resonator, we have obtained the TEM00, TEM01,
and TEM20 modes and have studied which parameters can change the mode
shape. We have then determined the beam quality factor of the TEM00
Gaussian mode.
Abstract: This paper presents a Geographic Information System (GIS) approach for qualifying and monitoring broadband lines in an efficient way. The methodology used for interpolation is the Delaunay Triangular Irregular Network (TIN). The method is applied to a case study at an ISP in Greece, monitoring 120,000 broadband lines.
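Delaunay TIN interpolation can be sketched with SciPy, which triangulates the measurement points and interpolates barycentrically within each triangle; this illustrates the named technique, not the authors' GIS implementation.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def tin_interpolate(points, values, query):
    """Linear interpolation over a Delaunay TIN: the sample points (e.g.
    line locations) are triangulated, and each query point is interpolated
    barycentrically inside its containing triangle. Returns NaN outside
    the convex hull of the samples."""
    interp = LinearNDInterpolator(np.asarray(points, float),
                                  np.asarray(values, float))
    return interp(np.asarray(query, float))
```

Because the interpolation is piecewise linear over the triangulation, a quality metric measured at scattered lines can be rendered as a continuous surface for monitoring maps.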
Abstract: In this paper, a new approach based on the extent of
friendship between nodes is proposed, which makes the nodes
cooperate in an ad hoc environment. The extended DSR protocol is
tested under different scenarios by varying the number of malicious
nodes and the node moving speed; it is also tested by varying the number
of nodes in the simulation. The results indicate that the
throughput achieved by the extended DSR is greater than that of the
standard DSR, and that the percentage of malicious drops over total drops
is lower for the extended DSR than for the standard DSR.