Abstract: AAM has been successfully applied to face alignment,
but its performance is very sensitive to the initial values. When the
initial values lie far from the global optimum, AAM-based face
alignment is likely to converge to a local minimum. In this paper, we propose a progressive
AAM-based face alignment algorithm that first finds the feature
parameter vector fitting the inner facial feature points and then
localizes the feature points of the whole face using this
information. The proposed progressive AAM-based face alignment
algorithm utilizes the fact that the feature points of the inner part of the
face are less variant and less affected by the background surrounding
the face than those of the outer part (like the chin contour). The
proposed algorithm consists of two stages: a modeling and relation
derivation stage and a fitting stage. The modeling and relation
derivation stage first constructs two AAM models, the inner face AAM
model and the whole face AAM model, and then derives a relation matrix
between the inner face AAM parameter vector and the whole face
AAM parameter vector. In the fitting stage, the proposed
algorithm aligns the face progressively through two phases. In the first
phase, it finds the feature parameter vector that fits the inner face
AAM model to a new input face image; in the second phase, it localizes
the whole facial feature points of the new input image based on the
whole face AAM model, using an initial parameter vector estimated from
the inner feature parameter vector obtained in the first phase and the
relation matrix obtained in the first stage. Through experiments, it is verified that the
proposed progressive AAM-based face alignment algorithm is more
robust with respect to pose, illumination, and face background than the
conventional basic AAM-based face alignment algorithm.
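The relation-matrix idea of the first stage can be sketched as a least-squares linear map from inner-face parameter vectors to whole-face parameter vectors, estimated from training pairs and then used to initialize the whole-face fit. The toy parameter vectors below are hypothetical; real AAM parameter vectors are far higher-dimensional.

```python
# Hedged sketch: derive a relation matrix R with P_in @ R ~ P_w by
# solving the normal equations, then map a new inner-face parameter
# vector to an initial whole-face parameter vector.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def relation_matrix(P_in, P_w):
    """Least-squares R (k x m) with P_in @ R ~ P_w, via normal equations."""
    Pt = transpose(P_in)
    A = matmul(Pt, P_in)
    B = matmul(Pt, P_w)
    cols = [solve(A, [B[i][j] for i in range(len(B))]) for j in range(len(B[0]))]
    return transpose(cols)

# Toy training pairs: whole-face params are an exact linear map of inner params.
P_in = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
P_w  = [[2.0, 1.0], [0.5, 3.0], [2.5, 4.0], [4.5, 5.0]]
R = relation_matrix(P_in, P_w)
init = matmul([[1.0, 2.0]], R)[0]   # initial whole-face params for a new inner fit
```

In the paper's scheme, `init` would seed the whole-face AAM search in the second fitting phase instead of a blind initialization.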
Abstract: An approach for experimental measurement of the
dynamic characteristics of linear electromagnet actuators is
presented. It uses an accelerometer to register the armature
acceleration; the velocity and displacement of the moving parts can
then be obtained by integrating the acceleration. The armature
movement of a permanent magnet linear actuator is acquired using this
technique. The results are analyzed, and the performance of the
proposed approach is compared with the most commonly used
experimental setup, in which the displacement of the armature vs. time
is measured instead of its acceleration.
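The double integration step can be sketched as cumulative trapezoidal integration of evenly sampled acceleration. The samples below are synthetic (a constant 2 m/s² for 1 s), not measured actuator data; in practice the measured signal would also need bias and drift correction before integrating.

```python
# Sketch: velocity and displacement from sampled acceleration by
# cumulative trapezoidal integration.

def integrate(samples, dt, y0=0.0):
    """Cumulative trapezoidal integral of evenly sampled data."""
    out = [y0]
    for a_prev, a_next in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a_prev + a_next) * dt)
    return out

dt = 0.001                   # 1 kHz accelerometer sampling (assumed rate)
accel = [2.0] * 1001         # constant 2 m/s^2 for 1 s
vel = integrate(accel, dt)   # v(t) = 2t   -> v(1 s) = 2 m/s
disp = integrate(vel, dt)    # s(t) = t^2  -> s(1 s) = 1 m
```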
Abstract: Scale Invariant Feature Transform (SIFT) has been
widely applied, but extracting SIFT features is complicated and
time-consuming. In this paper, to meet the demands of real-time
applications, SIFT is parallelized and optimized on a cluster system;
the resulting implementation is named pSIFT. Redundant storage and
communication are used for boundary data to improve performance, and
before the feature descriptors are computed, data reallocation is
adopted to maintain load balance in pSIFT. Experimental results show that pSIFT
achieves good speedup and scalability.
Abstract: This paper presents an adaptive motion estimator
that can be dynamically reconfigured with the best algorithm
according to the nature of the video during the lifetime of a
running application. The 4 Step Search (4SS) and the
Gradient Search (GS) algorithms are integrated in the estimator for
use with rapid and slow video sequences,
respectively. The Full Search Block Matching (FSBM) algorithm
has also been integrated, for use with
video sequences that are not real-time oriented.
To efficiently reduce the computational cost while
achieving better visual quality at low power cost, the proposed
motion estimator is based on a Variable Block Size (VBS) scheme
that uses only the 16x16, 16x8, 8x16 and 8x8 modes.
Experimental results show that the adaptive motion estimator
yields better results in terms of Peak Signal to Noise Ratio
(PSNR), computational cost, FPGA occupied area, and dissipated
power relative to the most popular variable block size schemes
presented in the literature.
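All three integrated algorithms (FSBM, 4SS, GS) minimise the same Sum of Absolute Differences (SAD) block-matching criterion; they differ only in how they traverse the search window. A minimal sketch of that criterion, shown as a plain full search over a small window on illustrative frames:

```python
# Sketch: SAD block matching with an exhaustive (FSBM-style) search.
# Frames are 2-D lists of luma values; sizes are illustrative only.

def sad(cur, ref, bx, by, dx, dy, bsize):
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(bsize) for j in range(bsize))

def full_search(cur, ref, bx, by, bsize, srange):
    """Return the (dx, dy) motion vector minimising SAD."""
    best = None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            if not (0 <= by + dy and by + dy + bsize <= len(ref) and
                    0 <= bx + dx and bx + dx + bsize <= len(ref[0])):
                continue
            cost = sad(cur, ref, bx, by, dx, dy, bsize)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]

# Reference frame: a gradient; current frame: the same content shifted right by 1.
ref = [[x + 10 * y for x in range(16)] for y in range(16)]
cur = [[(x - 1) + 10 * y for x in range(16)] for y in range(16)]
mv = full_search(cur, ref, bx=4, by=4, bsize=8, srange=2)
```

4SS and GS evaluate the same `sad` cost at far fewer candidate positions, which is what makes them attractive for real-time sequences.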
Abstract: Pattern matching is one of the fundamental applications in molecular biology, and searching DNA-related data is a common activity for molecular biologists. In this paper we explore the applicability of a new pattern matching technique, the Index based Forward Backward Multiple Pattern Matching algorithm (IFBMPM), to DNA sequences. Our approach avoids unnecessary comparisons in the DNA sequence; as a result, the number of comparisons performed by the proposed algorithm is much smaller than that of other popular existing methods. The number of comparisons drops rapidly, the execution time decreases accordingly, and the algorithm shows better performance.
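The forward-backward comparison idea can be sketched as comparing pattern characters from both ends toward the middle at each alignment, roughly halving the comparison loop. This is only the core comparison strategy; the index structure and multiple-pattern machinery of the actual IFBMPM algorithm are not reproduced here.

```python
# Hedged sketch of forward-backward comparison for a single pattern.

def forward_backward_search(text, pattern):
    """Return all start positions of `pattern` in `text`."""
    n, m = len(text), len(pattern)
    hits = []
    for s in range(n - m + 1):
        i, j = 0, m - 1
        # Compare from the front (i) and the back (j) simultaneously;
        # a mismatch at either end abandons this alignment early.
        while i <= j and text[s + i] == pattern[i] and text[s + j] == pattern[j]:
            i += 1
            j -= 1
        if i > j:
            hits.append(s)
    return hits

positions = forward_backward_search("ACGTACGTGACGT", "ACGT")
```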
Abstract: Impurity metals such as manganese and cadmium
from a high-tenor cobalt electrolyte solution were selectively removed
by solvent extraction using Co-D2EHPA, after converting the functional group of D2EHPA with Co2+ ions. Process parameters
such as pH, organic concentration, O/A ratio, and kinetics were
investigated, and the experiments were conducted as batch tests at laboratory bench scale. The results showed that a significant amount
of manganese and cadmium can be extracted using Co-D2EHPA for the optimum processing of the cobalt electrolyte solution at an equilibrium pH of about 3.5. The McCabe-Thiele diagram constructed from the
extraction studies showed that 100% of the impurities can be extracted in four stages for manganese and three stages for cadmium,
at O/A ratios of 0.65 and 1.0, respectively. The stripping study found that 100% of the manganese and cadmium can be stripped from the loaded organic using 0.4 M H2SO4 in a single
contact. The loading capacity of Co-D2EHPA for manganese and cadmium was also investigated at different O/A ratios and different numbers of
contact stages between the aqueous and organic phases. Valuable information was obtained for the design of an impurity
removal process for the production of pure cobalt with fewer problems in the electrowinning circuit.
Abstract: The application of e-health solutions has brought remarkable advances to the health care industry. E-health solutions have already been embraced in the industrialized countries, and in an effort to catch up, developing countries have strived to revolutionize their healthcare industries through information technology in different ways. Based on a technology assessment carried out in Kenya, one of the developing countries, and using multiple case studies in Nyanza Province, this work investigates how five rural hospitals are adapting to the technology shift. The issues examined include the ICT infrastructure and e-health technologies in place, the participants' knowledge of the benefits gained through the use of ICT, and the challenges posing barriers to the use of ICT in these hospitals. The results reveal that the ICT infrastructure in place is inadequate for e-health implementations as a result of various existing challenges. Consequently, suggestions on how to tackle these challenges are presented in this paper.
Abstract: Conventional WBL is effective for meaningful learners, whereas rote learners learn by repetition without thinking or trying to understand, and so cannot benefit fully from conventional WBL. Understanding rote students' intention and what influences it therefore becomes important. A poorly designed user interface will discourage rote students' cultivation and their intention to use WBL. Thus, user interface design is an important factor, especially when WBL is used as a comprehensive replacement for conventional teaching. This research proposes influencing factors that can enhance students' intention to use the system, and an enhanced TAM is used to evaluate the proposed factors. The results point out that the factors influencing rote students' intention are Perceived Usefulness of Homepage Content Structure, Perceived User-Friendly Interface, Perceived Hedonic Component, and Perceived (Homepage) Visual Attractiveness.
Abstract: In this paper, an automatic QRS complex detection
algorithm was applied for analyzing ECG recordings,
and five criteria for diagnosing dangerous arrhythmias were applied in a
prototype automatic arrhythmia diagnosis system. The
automatic detecting algorithm applied in this paper detected the
distribution of QRS complexes in ECG recordings and related
information, such as heart rate and RR interval. In this investigation,
twenty sampled ECG recordings of patients with different pathologic
conditions were collected for off-line analysis. A combined
application of four digital filters for enhancing the ECG signals and
improving the QRS complex detection rate was proposed as
pre-processing. Both hardware and digital filters were
applied to eliminate different types of noises mixed with ECG
recordings. Then, an automatic detecting algorithm of QRS complex
was applied for verifying the distribution of QRS complex. Finally,
the quantitative clinical criteria for diagnosing arrhythmia were
programmed into a practical application for automatic arrhythmia
diagnosis as a post-processor. The diagnoses made by the automatic
dangerous-arrhythmia diagnosis system were compared with the
off-line diagnoses of experienced clinical physicians. The
comparison showed that the automatic dangerous arrhythmia
diagnosis achieved a matching rate of 95% against
the experienced physicians' diagnoses.
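The "related information" step (heart rate and RR intervals from detected QRS complexes) can be sketched directly from the peak times. The peak times below are synthetic, not taken from the twenty clinical recordings.

```python
# Sketch: RR intervals and mean heart rate from QRS peak times (seconds).

def rr_intervals(qrs_times):
    """Successive differences between QRS peak times."""
    return [b - a for a, b in zip(qrs_times, qrs_times[1:])]

def heart_rate_bpm(qrs_times):
    """Mean heart rate in beats per minute from the mean RR interval."""
    rr = rr_intervals(qrs_times)
    return 60.0 / (sum(rr) / len(rr))

peaks = [0.0, 0.8, 1.6, 2.4, 3.2]   # a steady rhythm, one beat every 0.8 s
rr = rr_intervals(peaks)
bpm = heart_rate_bpm(peaks)         # 60 / 0.8 = 75 bpm
```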
Abstract: When the failure function is monotone, monotonic reliability methods can greatly simplify and facilitate reliability computations. However, these methods often work in a transformed iso-probabilistic space, so a monotonic simulator or transformation is needed in order that the transformed failure function remains monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents conditions under which the transformed function is still monotone in the newly obtained space; these conditions concern copulas and dependence concepts. In many engineering applications, Gaussian copulas are used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with Gaussian copulas.
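The transformation in question can be sketched in two dimensions: two independent standard normals are combined with correlation rho and pushed through the normal CDF, giving a dependent pair with a Gaussian copula. Note that for rho ≥ 0 the second output is increasing in the first input (holding the other input fixed), which is the kind of conditional monotonicity the note studies. The marginals here are uniform for simplicity.

```python
# Sketch: bivariate Gaussian-copula sampling from independent normals.

import math
import random

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pair(rho, rng):
    """Map two independent N(0,1) draws to a dependent uniform pair."""
    z1 = rng.gauss(0.0, 1.0)
    e = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * e   # corr(z1, z2) = rho
    return std_normal_cdf(z1), std_normal_cdf(z2)   # uniform marginals

rng = random.Random(0)
pairs = [gaussian_copula_pair(0.9, rng) for _ in range(20000)]
```

Applying the inverse marginal CDFs to the two uniforms would give arbitrary marginals while keeping the Gaussian copula.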
Abstract: The objective of the current study is to investigate the
differences between winning and losing teams in terms of goal scoring and
passing sequences. A total of 31 matches from UEFA EURO 2012
were analyzed; 5 matches were excluded from the analysis because
they ended in a draw. Two groups of variables were used in the
study: i) goal scoring variables and ii) passing sequence
variables. Data were analyzed using the Wilcoxon matched-pairs rank test
with the significance level set at p < 0.05. The study found that goals
scored were significantly higher for the winning teams in the 1st half
(Z=-3.416, p=.001) and the 2nd half (Z=-3.252, p=.001). The scoring
frequency was also found to increase as time progressed, and the
last 15 minutes of the game was the interval in which the most goals were
scored. The indicators that differed significantly between the
winning and losing teams were goals scored (Z=-4.578, p=.000),
headers (Z=-2.500, p=.012), the right foot (Z=-3.788, p=.000),
corners (Z=-2.126, p=.033), open play (Z=-3.744, p=.000), inside the
penalty box (Z=-4.174, p=.000), attackers (Z=-2.976, p=.003), and
the midfielders (Z=-3.400, p=.001). Regarding the passing
sequences, there was a significant difference between the teams in
short passing sequences (Z=-4.141, p=.000), while for long
passing there was no significant difference (Z=-1.795, p=.073).
The data gathered in the present study can be used by coaches to
construct detailed training programs based on their objectives.
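The Wilcoxon matched-pairs statistic used throughout the study can be sketched in pure Python: zero differences are dropped, absolute differences receive average ranks under ties, and the signed rank sums are returned. The sample data below are illustrative, not the EURO 2012 figures, and the Z and p values in the abstract additionally require the normal approximation not shown here.

```python
# Sketch: rank sums (W+, W-) of the Wilcoxon matched-pairs test.

def wilcoxon_w(x, y):
    """Return (W+, W-): rank sums of positive/negative paired differences."""
    diffs = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1            # average rank for a tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus

win  = [3, 2, 4, 1, 2, 3]    # e.g. goals by winning teams (illustrative)
lose = [1, 0, 1, 1, 0, 1]    # goals by the paired losing teams
w_plus, w_minus = wilcoxon_w(win, lose)
```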
Abstract: Prior research has shown that unimodal biometric
systems suffer several limitations, such as noisy data, intra-class variations,
restricted degrees of freedom, non-universality, spoof attacks, and
unacceptable error rates. For a biometric system to be more
secure and to provide higher accuracy, more than one
form of biometrics is required; hence the need arises for multimodal
biometrics using combinations of different biometric modalities. This
paper introduces a multimodal biometric system (MMBS) based on
fusion of whole dorsal hand geometry and fingerprints that acquires
right and left (Rt/Lt) near-infra-red (NIR) dorsal hand geometry (HG)
shape and (Rt/Lt) index and ring fingerprints (FP). A database of 100
volunteers was acquired using the designed prototype. The acquired
images were found to be of sufficient quality for feature and pattern
extraction for all modalities. HG features based on the hand shape's
anatomical landmarks were extracted. Robust and fast algorithms for
FP minutia points feature extraction and matching were used. Feature
vectors that belong to similar biometric traits were fused using
feature fusion methodologies. Scores obtained from different
biometric trait matchers were fused using the Min-Max
transformation-based score fusion technique. Final normalized scores
were merged using the sum of scores method to obtain a single
decision about the personal identity based on multiple independent
sources. The high individuality of the fused traits, the user acceptability
of the designed system, and its experimentally high
biometric performance measures show that this MMBS can be considered for
medium-to-high security biometric identification purposes.
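The score-level fusion step described above (Min-Max normalisation followed by the sum-of-scores rule) can be sketched as follows. The score ranges per matcher and the decision threshold are assumed values for illustration, not the paper's settings.

```python
# Sketch: Min-Max score normalisation + sum-of-scores fusion.

def min_max(score, lo, hi):
    """Map a raw matcher score into [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse(raw_scores, ranges):
    """Sum of Min-Max normalised scores from independent matchers."""
    return sum(min_max(s, lo, hi) for s, (lo, hi) in zip(raw_scores, ranges))

# Four matchers: Rt/Lt hand geometry and Rt/Lt fingerprints (assumed ranges).
ranges = [(0, 100), (0, 100), (0, 50), (0, 50)]
claimant = fuse([80, 90, 40, 35], ranges)   # 0.8 + 0.9 + 0.8 + 0.7
accept = claimant >= 2.5                    # hypothetical decision threshold
```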
Abstract: This paper presents an automated inspection algorithm
for thick plates. Thick plates typically have various types of surface
defects, such as scabs, scratches, and roller marks. These defects have
individual characteristics, including brightness and shape, so it
is not straightforward to detect all of them. To address this
and to detect defects more effectively, we propose a dual light
switching lighting method and a defect detection algorithm based on
Gabor filters.
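A 2-D Gabor filter of the kind the detector is based on is a Gaussian envelope multiplied by a sinusoidal carrier. The kernel construction below is standard; the parameter values are illustrative, not the paper's tuned settings.

```python
# Sketch: real part of a 2-D Gabor filter kernel.

import math

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=1.0):
    """size x size kernel: Gaussian envelope times cosine carrier,
    oriented at angle theta with wavelength lambd."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(env * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=9, sigma=2.0, theta=0.0, lambd=4.0)
```

Convolving the image with a bank of such kernels at several orientations and wavelengths gives responses in which scratch-like and scab-like defects stand out differently.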
Abstract: The aim of the present study was to analyze and
distinguish the playing patterns of winning and losing field hockey
teams in the Delhi 2012 tournament. The analysis focuses on D
penetration (right, center, left) and links each D penetration
to the end shot made from it. The data were recorded and analyzed
using Sportscode Elite computer software, and 12 matches were analyzed
from the tournament. Two groups of performance indicators were
analyzed: D penetration from the right, center, and left, and the type of
shot, chosen from hit, push, flick, drag, drag flick, deflect sweep, deflect
push, scoop, sweep, and reverse hit. In distinguishing the pattern
of play between winning and losing teams, only two performance indicators
showed highly significant differences, from the right (Z=-2.87, p=.004).
Abstract: Mobile agents are a powerful approach to developing distributed systems, since they migrate to hosts on which they have the resources to execute individual tasks. In a dynamic environment such as a peer-to-peer network, agents have to be generated frequently and dispatched into the network, so they consume a certain amount of bandwidth on each link. If too many agents migrate through one or several links at the same time, they introduce excessive transfer overhead; those links become congested and indirectly block network traffic. There is therefore a need for routing algorithms that take traffic load into account. In this paper, we seek to combine, in a probabilistic manner, a quality measure of the network traffic situation with the agent's next-hop migration decision, which is based on decision tree learning algorithms.
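The traffic-aware probabilistic next-hop choice can be sketched as weighted random selection, with each candidate link weighted by a quality measure that decreases with its load. This sketch covers only the probabilistic part; the decision-tree learning stage of the paper's scheme is not reproduced.

```python
# Sketch: probabilistic next-hop selection weighted by link quality.

import random

def next_hop(links, rng):
    """links: {neighbour: load in [0, 1)}. Lightly loaded links are
    proportionally more likely to be chosen."""
    weights = {n: 1.0 - load for n, load in links.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for n, w in weights.items():
        r -= w
        if r <= 0:
            return n
    return n                      # numerical safety fallback

rng = random.Random(1)
links = {"A": 0.9, "B": 0.1, "C": 0.5}   # hypothetical load measurements
picks = [next_hop(links, rng) for _ in range(10000)]
```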
Abstract: This paper presents an efficient VLSI architecture
design to achieve real time video processing using Full-Search Block
Matching (FSBM) algorithm. The design employs parallel bank
architecture with minimum latency, maximum throughput, and full
hardware utilization. We use nine parallel processors in our
architecture, each controlled by a state machine. The state machine
control implementation makes the design very simple and cost
effective. The design is implemented in VHDL, and the
programming techniques we incorporated make the design
completely programmable, in the sense that the search ranges and
block sizes can be varied to suit any given requirements. The design
can operate at frequencies up to 36 MHz and it can function in QCIF
and CIF video resolution at 1.46 MHz and 5.86 MHz, respectively.
Abstract: In this paper, the application of the Sliding Mode Control (SMC) technique to an Active Magnetic Bearing (AMB) system with varying rotor speed is considered. The gyroscopic effect and mass imbalance inherent in the system are proportional to rotor speed, and this nonlinearity causes severe system instability as the rotor speed increases. Transformation of the AMB dynamic model into regular form shows that the gyroscopic effect and imbalance lie in the mismatched part of the system. An H2-based sliding surface is designed that bounds the mismatched parts, and the surface parameter is obtained by solving a Linear Matrix Inequality (LMI). The performance of the controller applied to the AMB model is demonstrated through simulations under various system conditions.
Abstract: Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting. Structural information about protein-protein interactions is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach to understanding protein-protein interactions. However, reliable prediction of protein-protein complexes remains an open problem. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, and HEX. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions: the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Using this measure, we assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions in which the complexes have experimentally determined binding affinity data. We performed free energy calculations based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve binding mode prediction, using the well-studied barnase-barstar system to validate the parameters for the free energy calculations. The nlPBE-based free energy calculations were then conducted for the cases badly predicted by ZDOCK and ZRANK.
We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appear to be one of the promising approaches for improving the success rate of binding pose predictions.
Abstract: The ability of pomelo peel, a natural biosorbent, to remove Cd(II) ions from aqueous solution by biosorption was investigated. The experiments were carried out by the batch method at 25 °C, and the influences of solution pH, initial cadmium ion concentration, and contact time were evaluated. Cadmium ion removal increased significantly as the pH of the solution increased from pH 1 to pH 5, reaching a maximum at pH 5. The equilibrium process was described well by the Langmuir isotherm model, with a maximum biosorption capacity of 21.83 mg/g. The biosorption was relatively quick (reaching equilibrium in approximately 20 min), and its kinetics followed a pseudo-second-order model. The results showed that pomelo peel is effective as a biosorbent for removing cadmium ions from aqueous solution. It is a low-cost material with potential for application in wastewater technology for the remediation of heavy metal contamination.
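The Langmuir isotherm used to describe the equilibrium data is q = qmax·b·Ce / (1 + b·Ce). The qmax below is the paper's reported 21.83 mg/g, but the affinity constant b is an assumed value for illustration only.

```python
# Sketch: Langmuir isotherm q = qmax * b * Ce / (1 + b * Ce).

def langmuir(ce, qmax, b):
    """Equilibrium uptake q (mg/g) at equilibrium concentration ce (mg/L)."""
    return qmax * b * ce / (1.0 + b * ce)

qmax = 21.83          # maximum biosorption capacity from the study (mg/g)
b = 0.05              # assumed affinity constant (L/mg), illustrative only
uptakes = [langmuir(ce, qmax, b) for ce in (10, 50, 200, 1000)]
```

The model saturates: uptake rises with concentration but never exceeds qmax, matching monolayer-adsorption behaviour.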
Abstract: Iris-based biometric authentication is gaining importance
in recent times. Iris biometric processing, however, is complex
and computationally expensive. In the overall processing
of an iris-based biometric authentication system,
feature processing is an important task: we extract
iris features, which are ultimately used in matching. Since there
is a large number of iris features and the computational time increases
with the number of features, it is a challenge to
develop an iris processing system with as few
features as possible without compromising correctness.
In this paper, we address this issue and present an approach to feature
extraction and feature matching process. We apply Daubechies D4
wavelet with 4 levels to extract features from iris images. These
features are encoded with 2 bits by quantizing into 4 quantization
levels. With our proposed approach it is possible to represent an
iris template with only 304 bits, whereas existing approaches require
as many as 1024 bits. In addition, we assign different weights to
different iris regions when comparing two iris templates, which significantly
increases the accuracy. Further, we match iris templates based on
a weighted similarity measure. Experimental results on several iris
databases substantiate the efficacy of our approach.
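The core encoding steps, one level of the Daubechies D4 wavelet transform followed by 2-bit quantisation into 4 levels, can be sketched as follows. The test signal and quantisation thresholds are illustrative; they are not the paper's iris data or its 4-level decomposition and 304-bit template layout.

```python
# Sketch: one Daubechies D4 level (periodic extension) + 2-bit quantisation.

import math

S3 = math.sqrt(3.0)
H = [(1 + S3) / (4 * math.sqrt(2)), (3 + S3) / (4 * math.sqrt(2)),
     (3 - S3) / (4 * math.sqrt(2)), (1 - S3) / (4 * math.sqrt(2))]
G = [H[3], -H[2], H[1], -H[0]]        # wavelet filter from the scaling filter

def d4_level(s):
    """One D4 analysis level: (approximation, detail) half-length vectors."""
    n = len(s)
    approx = [sum(H[k] * s[(2 * i + k) % n] for k in range(4)) for i in range(n // 2)]
    detail = [sum(G[k] * s[(2 * i + k) % n] for k in range(4)) for i in range(n // 2)]
    return approx, detail

def quantize2bit(coeffs, t):
    """Map each coefficient to one of 4 levels (2 bits) via thresholds +-t."""
    return [0 if c < -t else 1 if c < 0 else 2 if c < t else 3 for c in coeffs]

signal = [math.sin(2 * math.pi * i / 16) for i in range(16)]   # stand-in row
approx, detail = d4_level(signal)
code = quantize2bit(detail, t=0.05)    # 2 bits per retained coefficient
```

Repeating `d4_level` on the approximation gives the 4-level decomposition; quantising the retained coefficients at 2 bits each is what keeps the template compact.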