Abstract: This paper presents a multimodal approach to biometric authentication based on multiple classifiers. The proposed solution uses a post-classification biometric fusion method in which the outputs of the biometric data classifiers are combined in order to improve overall system performance by decreasing the classification error rates. The paper also shows how the biometric recognition task can be improved through careful feature selection, since not all components of the feature vectors contribute to the accuracy improvement.
Abstract: Incremental forming is a complex forming process in which continuous, locally cumulative deformation takes place, and springback that affects the forming quality can occur. A springback evaluation method based on forming error compensation is proposed, in which springback is defined as the difference between the theoretical and the actual amount of compensation along the measured direction. Experiments were designed and carried out according to this evaluation method. The results show that the average springback magnitude (δE) of the formed parts was very small, and that the forming precision could be significantly improved by adopting the compensation method. Based on the biaxial tensile stress state in the main deformation area, a hypothesis is proposed that only little springback arises from bending behavior in the formed parts.
Abstract: A Finite Volume method based on Characteristic Fluxes for compressible fluids is developed. An explicit cell-centered resolution is adopted, where second- and third-order accuracy is provided by using two different MUSCL schemes with Minmod, Sweby or Superbee limiters for the hyperbolic part. Several different time integrators are used and described in this paper. Resolution is performed on a generic unstructured Cartesian grid, where solid boundaries are handled by a Cut-Cell method. Interfaces are explicitly advected in a non-diffusive way, ensuring local mass conservation. An improved cell-cutting procedure has been developed to handle boundaries of arbitrary geometrical complexity. Instead of using a polygon clipping algorithm, we use the voxel traversal algorithm coupled with a local flood-fill scanline to intersect 2D or 3D boundary surface meshes with the fixed Cartesian grid. The small-cell stability problem near the boundaries is solved using a fully conservative merging method. Inflow and outflow conditions are also implemented in the model. The solver is validated on 2D academic test cases, such as the flow past a cylinder. The latter test cases are performed both in the frame of the body and in a fixed frame where the body moves across the mesh. An adaptive Cartesian grid is provided by Paramesh, without complex geometries for the moment.
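To make the limited-reconstruction idea in the abstract above concrete, here is a minimal 1D sketch of MUSCL face reconstruction with the Minmod limiter. The function names and the scalar test field are illustrative assumptions, not taken from the solver the abstract describes, which works on multidimensional Cartesian cut-cell grids.

```python
def minmod(a, b):
    """Minmod limiter: pick the smallest-magnitude slope, or 0 at an extremum."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(u):
    """Second-order limited left/right states at interior cell faces i+1/2.

    u is a list of cell-average values on a uniform 1D grid.
    Returns a list of (uL, uR) pairs, one per interior face.
    """
    faces = []
    for i in range(1, len(u) - 2):
        s_i  = minmod(u[i] - u[i - 1], u[i + 1] - u[i])      # limited slope, cell i
        s_ip = minmod(u[i + 1] - u[i], u[i + 2] - u[i + 1])  # limited slope, cell i+1
        uL = u[i] + 0.5 * s_i       # extrapolated from the left cell
        uR = u[i + 1] - 0.5 * s_ip  # extrapolated from the right cell
        faces.append((uL, uR))
    return faces

# Smooth data reconstructs to second order; at a jump the limiter
# drops the slopes to zero, avoiding spurious oscillations.
print(muscl_faces([0.0, 1.0, 2.0, 3.0, 4.0])[0])  # (1.5, 1.5)
print(muscl_faces([0.0, 0.0, 1.0, 1.0, 1.0])[0])  # (0.0, 1.0)
```

The (uL, uR) pairs would then feed the characteristic-flux function; the Sweby and Superbee limiters mentioned in the abstract differ only in the slope-selection rule.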
Abstract: This contribution deals with the influence of high-speed parameters on the quality of the machined surface. In general, the principle of high-speed cutting lies in achieving shorter machining times with a concurrent increase in the accuracy and quality of the machined areas, in largely irregular shapes that are mathematically hard to define. High-speed machining is a highly effective machining method with the following goals: increasing machining productivity, increasing the quality of the machined surface, improving the economy of machining, and improving the ecological aspects of machining. This article is based on an experiment performed by the Department of Machining and Assembly of the Faculty of Mechanical Engineering of VŠB – Technical University of Ostrava.
Abstract: Detecting objects in a video sequence is a challenging task for identifying and tracking moving objects, and background removal is considered a basic step in moving-object detection. Dual static cameras placed at the front and rear of a moving platform gather the information used to detect objects. The background changes with the speed and direction of the moving platform, so distinguishing moving objects becomes complicated. In this paper, we propose a framework that allows the detection of moving objects of varying speed and direction dynamically. The object detection technique is built on two levels: the first level applies background removal and edge detection to generate moving areas; the second level applies a Moving Areas Filter (MAF) and then calculates a Correlation Score (CS) for each adjusted moving area. Moving areas with close CS values are merged and marked as a single moving object. Experimental results were obtained on real scenes acquired by dual static cameras with no overlap between their fields of view. The results show good accuracy in detecting objects compared with optical flow and the Mixture of Gaussians model (MMG), and an accuracy ratio is produced to measure the accuracy of moving-object detection.
Abstract: Most neural network (NN) models of human category learning use a gradient-based learning method, which assumes that locally optimal changes are made to model parameters on each learning trial. This method tends to underpredict variability in individual-level cognitive processes. In addition, many recent models of human category learning have been criticized for not being able to replicate the rapid changes in categorization accuracy and attention processes observed in empirical studies. In this paper we introduce stochastic learning algorithms for NN models of human category learning and show that use of these algorithms can result in (a) rapid changes in accuracy and attention allocation, and (b) different learning trajectories and more realistic variability at the individual level.
Abstract: The current speech interfaces in many military applications may be adequate for native speakers. However, the recognition rate drops considerably for non-native speakers (people with foreign accents). This is mainly because non-native speakers exhibit large temporal and intra-phoneme variations when they pronounce the same words. The problem is further complicated by the presence of strong environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for on-line accent and environmental adaptation. Implemented by incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, compared to the system without adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, compared to the system without adaptation.
Abstract: Circular tubes have been widely used as structural members in engineering applications, and their collapse behavior has therefore been studied for many decades, with a focus on energy absorption characteristics. To predict the collapse behavior of such members, one can rely on finite element codes or experiments. These tools are helpful and highly accurate, but they are costly and require extensive running time. An approximate model of the tube collapse mechanism is therefore an alternative for the early design stage. This paper aims to develop a closed-form solution for a thin-walled circular tube subjected to bending. It extends the model of Elchalakani et al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the rate of energy dissipation of the rolling hinge in the circumferential direction. The 3-D geometrical collapse mechanism was analyzed by adding oblique hinge lines along the longitudinal tube within the length of the plastically deforming zone. The model is based on the principle of energy rate conservation; accordingly, the rates of internal energy dissipation were calculated for each hinge line, defined in terms of the velocity field. Inextensional deformation and perfectly plastic material behavior were assumed in the derivation of the deformation energy rate. The analytical results were compared with experimental results obtained from a number of tubes with various D/t ratios, and good agreement between analysis and experiment was achieved.
Abstract: In this paper we develop the Improved Runge-Kutta-Nystrom (IRKN) method for solving second-order ordinary differential equations. The methods are two-step in nature and require a lower number of function evaluations per step than the existing Runge-Kutta-Nystrom (RKN) methods; they are therefore computationally more efficient at achieving a higher order of local accuracy. Algebraic order conditions of the method are obtained, and third- and fourth-order methods are derived with two and three stages, respectively. Numerical results are given to illustrate the efficiency of the proposed method compared to existing RKN methods.
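For readers unfamiliar with Nystrom-type integrators, the sketch below shows the classical one-step, fourth-order RKN scheme for y'' = f(x, y), which integrates the second-order equation directly without rewriting it as a first-order system. This is the standard baseline the abstract's IRKN method improves upon; it is not the paper's two-step method itself.

```python
import math

def rkn4_step(f, x, y, v, h):
    """One classical fourth-order Runge-Kutta-Nystrom step for y'' = f(x, y).

    y is the solution value, v its derivative y', h the step size.
    """
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * v / 2 + h * h * k1 / 8)
    k3 = f(x + h / 2, y + h * v / 2 + h * h * k2 / 8)
    k4 = f(x + h,     y + h * v     + h * h * k3 / 2)
    y_new = y + h * v + h * h * (k1 + k2 + k3) / 6
    v_new = v + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y_new, v_new

# Usage: harmonic oscillator y'' = -y, y(0) = 1, y'(0) = 0, so y(x) = cos(x).
y, v, x, h = 1.0, 0.0, 0.0, 0.01
for _ in range(100):
    y, v = rkn4_step(lambda x, y: -y, x, y, v, h)
    x += h
print(abs(y - math.cos(1.0)))  # global error is O(h^4), far below 1e-8 here
```

Note that each step costs four evaluations of f; reducing this per-step cost while keeping the order is precisely the efficiency gain the two-step IRKN construction targets.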
Abstract: The present work describes a computational study of the aerodynamic characteristics of the GLC305 airfoil, both clean and with a 16.7-min ice shape (rime 212) and a 22.5-min ice shape (glaze 944). The performance of the SA, k-ε, standard k-ω, and k-ω SST turbulence models is assessed against experimental flow fields at Mach numbers of 0.12, 0.21 and 0.28, over a range of Reynolds numbers of 3×10^6, 6×10^6, and 10.5×10^6, on the clean and iced GLC305 aircraft airfoil. Numerical predictions include lift, drag and pitching moment coefficients at different Mach numbers and different angles of attack. The sensitivity of the solutions to the choice of turbulence model, the variation of Mach number, the initial conditions, the grid resolution and the grid spacing near the wall makes the study demanding. A computational technique based on the Navier-Stokes equations is used, and the results are very close to the experimental ones. It is seen that the SA and SST models are more efficient than the k-ε and standard k-ω models for the problem under study.
Abstract: In this paper, we present a method for edge segmentation of satellite images based on the 2-D Phase Congruency (PC) model. The proposed approach is composed of two steps: first, a contextual non-linear smoothing algorithm (CNLS) is used to smooth the input images; then, a 2D stretched Gabor filter (S-G filter) based on a proposed angular variation is developed in order to avoid the multiple responses of previous work. An assessment of the performance of the proposed method is provided in terms of the accuracy of satellite image edge segmentation, and the method is compared with other known approaches.
Abstract: Understanding driving behavior is a complicated research topic. To accurately describe the speed, flow and density of multiclass-user traffic flow, an adequate model is needed. In this study, we propose the concept of the standard passenger car equivalent (SPCE), instead of the passenger car equivalent (PCE), to estimate the influence of heavy vehicles and slow cars. A traffic cellular automata model is employed to calibrate and validate the results. According to the simulated results, the SPCE transformations show good accuracy.
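The abstract above mentions a traffic cellular automata model. As an illustration of what such a model looks like, here is a minimal single-lane Nagel-Schreckenberg update, a standard traffic CA; the paper's multiclass calibration model is more elaborate, and all parameters below (road length, vmax, slowdown probability) are made-up examples.

```python
import random

def nasch_step(pos, vel, road_len, vmax, p_slow, rng):
    """One parallel Nagel-Schreckenberg update on a circular single-lane road.

    pos/vel are per-vehicle cell positions and speeds (cells per step).
    """
    order = sorted(range(len(pos)), key=lambda i: pos[i])  # cars by position
    new_pos, new_vel = list(pos), list(vel)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]              # next car downstream
        gap = (pos[ahead] - pos[i] - 1) % road_len         # empty cells ahead
        v = min(vel[i] + 1, vmax)                          # 1. accelerate
        v = min(v, gap)                                    # 2. brake to avoid collision
        if v > 0 and rng.random() < p_slow:
            v -= 1                                         # 3. random slowdown
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len               # 4. move
    return new_pos, new_vel

# Usage: four cars on a 20-cell ring; vehicle count is conserved and
# no two cars ever occupy the same cell.
rng = random.Random(1)
pos, vel = [0, 3, 7, 12], [0, 0, 0, 0]
for _ in range(50):
    pos, vel = nasch_step(pos, vel, 20, 5, 0.3, rng)
print(sorted(pos))
```

Heterogeneous users (heavy vehicles, slow cars) are typically modeled in such CAs by giving vehicle classes different vmax or lengths, which is the kind of setting the SPCE concept addresses.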
Abstract: Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate control of their position has long been known to be a complex control problem. This paper presents a methodology for obtaining controllers that achieve high position accuracy and preserve the closed-loop characteristics over a broad operating range. Experimentation with a number of conventional (or "classical") three-term controllers shows that, as repeated operations accumulate, the characteristics of the pneumatic actuator change, requiring frequent re-tuning of the controller parameters (PID gains). Furthermore, three-term controllers are found to perform poorly in recovering the closed-loop system after the application of load or other external disturbances. The key reason for these problems lies in the non-linear exchange of energy inside the cylinder, relating in particular to the complex friction forces that develop at the piston-wall interface. In order to overcome this problem while still remaining within the boundaries of classical control methods, we designed an auto-selective classical controller so that the system performance would benefit from all three control gains (Kp, Ki, Kd) according to the system requirements and the characteristics of each type of controller. This challenging experimentation aimed at consistent performance in the face of modelling imprecision and disturbances. In the work presented, a selective PID controller is demonstrated on an experimental rig comprising an air cylinder driven by a variable-opening pneumatic valve and equipped with position and pressure sensors. The paper reports on tests carried out to investigate the capability of this specific controller to achieve consistent control performance under repeated operations and other changes in operating conditions.
Abstract: In recent years, adaptive pushover methods have been developed for the seismic analysis of structures. Herein, the accuracy of the displacement-based adaptive pushover (DAP) method introduced by Antoniou and Pinho [2004] is evaluated for irregular buildings, and the results are compared to the force-based procedure. Both concrete and steel frame structures, asymmetric in plan and elevation, are analyzed, and torsional effects are taken into account. These analyses are performed using both near-fault and far-fault records. In order to verify the results, Incremental Dynamic Analysis (IDA) is performed.
Abstract: In this paper we present a new method for over-height vehicle detection in low-headroom streets and highways using digital video processing. Its accuracy and lower price compared to existing detectors such as laser radars, together with its capability of providing extra information such as speed and height measurements, make this method more reliable and efficient. In this algorithm, features are selected and tracked using the KLT algorithm. A blob extraction algorithm is also applied, using background estimation and subtraction. The world coordinates of the features inside the blobs are then estimated using a novel calibration method. Once the heights of the features are calculated, we apply a threshold to select over-height features and eliminate the others. The over-height features are segmented using association criteria, grouped using an undirected graph, and then tracked through sequential frames. The obtained groups correspond to over-height vehicles in the scene.
Abstract: Robots' visual perception is a field that is gaining increasing attention from researchers. This is partly due to emerging trends in the commercial availability of 3D scanning systems and devices that produce a high level of information accuracy for a variety of applications. In the history of mining, the mortality rate of mine workers has been alarming, and robots exhibit a great deal of potential to tackle safety issues in mines. However, an effective vision system is crucial for safe autonomous navigation in underground terrains. This work investigates robots' perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model. SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames together with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames, using the XBOX Kinect 3D sensors. The Kinect sensors produce streams of red, green and blue (RGB) and depth images at 640 x 480 resolution and 30 frames per second. Integrating the depth information into the drivability analysis gives a strong cue, yielding 3D results that augment the drivable and non-drivable regions detected in 2D. The results of the 2D and 3D experiments on different terrains, mines and tunnels, together with the qualitative and quantitative evaluation, reveal that a good drivable region can be detected in dynamic underground terrains.
Abstract: Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential. They can help people understand the patterns in a certain chunk of information, so data mining tools have a wide area of application. For example, in theoretical chemistry, data mining tools can be used to predict molecular properties or to improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of this contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression and random forest models were built using the R software; a Bayesian logistic regression model was also created in the Latent GOLD software. These classification methods belong to the supervised learning methods. It was necessary to reduce the dimension of the data matrix before constructing the models, and factor analysis (FA) was therefore used. The models were applied to predict the biological activity of molecules that are potential new drug candidates.
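As a concrete illustration of the supervised classification the abstract above describes, here is a minimal logistic-regression classifier trained by stochastic gradient descent. It is a generic sketch in Python (the paper itself works in R and Latent GOLD), and the two "molecular descriptor" features and activity labels are invented toy data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Fit logistic regression by per-sample gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                                   # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify as active (1) when the predicted probability is >= 0.5."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy "inactive (0) / active (1) molecule" data with two descriptors:
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])  # expected [0, 0, 1, 1]
```

In the paper's setting, the descriptors would be the factor scores produced by the FA dimension-reduction step rather than raw features.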
Abstract: Terminal localization for indoor Wireless Local Area
Networks (WLANs) is critical for the deployment of location-aware
computing inside of buildings. A major challenge is obtaining high
localization accuracy in the presence of fluctuations of the received signal
strength (RSS) measurements caused by multipath fading. This paper
focuses on reducing the effect of the distance-varying noise by spatial
filtering of the measured RSS. Two different survey point geometries
are tested with the noise reduction technique: survey points arranged
in sets of clusters and survey points uniformly distributed over the
network area. The results show that the location accuracy improves
by 16% when the filter is used and by 18% when the filter is applied
to a clustered survey set as opposed to a straight-line survey set.
The estimated locations are within 2 m of the true location, which
indicates that clustering the survey points provides better localization
accuracy due to superior noise removal.
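The abstract above describes averaging RSS measurements over clustered survey points before fingerprint matching. The sketch below illustrates that idea in its simplest form; the coordinates, number of access points, and dBm values are invented examples, and the paper's actual spatial filter may differ from plain averaging.

```python
def average_rss(samples):
    """Spatially filter a cluster of RSS vectors by componentwise averaging."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def localize(fingerprints, measured):
    """Nearest-neighbour match of a measured RSS vector in signal space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(fingerprints, key=lambda loc: dist2(fingerprints[loc], measured))

# Radio map: survey location -> filtered RSS fingerprint (dBm, 3 access points),
# each built from a cluster of three noisy survey samples.
radio_map = {
    (0.0, 0.0): average_rss([[-40, -70, -80], [-42, -68, -79], [-41, -69, -81]]),
    (5.0, 0.0): average_rss([[-55, -50, -75], [-57, -52, -74], [-56, -51, -76]]),
    (0.0, 5.0): average_rss([[-60, -72, -48], [-62, -70, -50], [-61, -71, -49]]),
}

# A noisy measurement taken near (5.0, 0.0):
print(localize(radio_map, [-58, -49, -77]))  # expected (5.0, 0.0)
```

Averaging each cluster suppresses the multipath-induced RSS fluctuations before matching, which is the mechanism behind the accuracy gains the abstract reports.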
Abstract: This paper proposes a novel frequency offset (FO) estimator for orthogonal frequency division multiplexing (OFDM). Simplicity is the most significant feature of this algorithm, which can be iterated to achieve acceptable accuracy. In addition, the fractional and integer parts of the FO are estimated jointly using the same algorithm. To do so, instead of using conventional algorithms that usually rely on a correlation function, we use the DFT of the received signal. The complexity is therefore reduced, and the synchronization procedure can be carried out with the same hardware used to demodulate the OFDM symbol. Finally, computer simulations show that the accuracy of this method is better than that of other conventional methods.
Abstract: Nowadays, the hard disk is one of the most popular storage components. In the hard disk industry, a hard disk drive must pass through various complex processes and test systems, and failures occur at each step. To reduce the waste caused by these failures, we must find their root cause. Conventional data analysis methods are not effective enough to analyze such a large volume of data. In this paper, we propose a Hough method for straight-line detection that helps to detect the straight-line defect patterns occurring in hard disk drives. The proposed method helps to increase the speed and accuracy of failure analysis.
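To illustrate the straight-line detection the abstract above proposes, here is a minimal Hough transform over a set of 2D defect coordinates. The defect data is an invented example, not actual hard-disk failure data, and a production implementation would use a finer accumulator and peak suppression.

```python
import math

def hough_lines(points, rho_max, threshold):
    """Vote in (rho, theta) space; return accumulator cells with enough votes.

    Each point (x, y) votes once per integer theta (degrees, 0-179) for the
    line x*cos(theta) + y*sin(theta) = rho, with rho rounded to integer bins.
    """
    acc = {}
    for x, y in points:
        for theta_deg in range(180):
            t = math.radians(theta_deg)
            rho = round(x * math.cos(t) + y * math.sin(t))
            if -rho_max <= rho <= rho_max:
                key = (rho, theta_deg)
                acc[key] = acc.get(key, 0) + 1
    return [k for k, votes in acc.items() if votes >= threshold]

# Ten defects on the horizontal line y = 3, plus one isolated outlier:
defects = [(x, 3) for x in range(10)] + [(7, 9)]
peaks = hough_lines(defects, rho_max=20, threshold=10)
# The line y = 3 shows up as the cell (rho=3, theta=90); with coarse
# integer bins, neighbouring theta cells may also pass the threshold.
print((3, 90) in peaks)
```

Because every collinear defect votes for the same (rho, theta) cell, a line-shaped failure pattern stands out as an accumulator peak even when individual failures are scattered among unrelated ones.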