Abstract: In this paper, a novel method for a biometric system based on the ECG signal is proposed, using spectral coefficients computed through linear predictive coding (LPC). ECG biometric systems have traditionally incorporated characteristics of fiducial points of the ECG signal as the feature set. Such systems have been shown to contain vulnerabilities, so a non-fiducial system allows for tighter security. In the proposed system, incorporating non-fiducial features from the LPC spectrum produced segment and subject recognition rates of 99.52% and 100%, respectively. The proposed system outperformed a biometric system based on the wavelet packet decomposition (WPD) algorithm in both recognition rate and computation time. This allows LPC to be used in a practical ECG biometric system that requires fast, stringent and accurate recognition.
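The LPC feature extraction underlying such a system can be sketched with the autocorrelation method and the Levinson-Durbin recursion. A minimal pure-Python sketch; the model order and the AR(2) test signal are illustrative assumptions, not the paper's actual settings:

```python
# Illustrative sketch: LPC coefficients from a signal segment via the
# autocorrelation method and the Levinson-Durbin recursion.
import random

def autocorr(x, max_lag):
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(max_lag + 1)]

def lpc(x, order):
    """Return predictor coefficients c[1..order] with x[n] ~ sum_k c[k]*x[n-k]
    and the final prediction error power."""
    r = autocorr(x, order)
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                        # reflection coefficient
        a_new = a[:]
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)
    return [-c for c in a[1:]], err

# Hypothetical test signal: AR(2) process x[n] = 1.5 x[n-1] - 0.7 x[n-2] + e[n];
# the LPC estimate should recover the coefficients (1.5, -0.7).
random.seed(0)
x = [0.0, 0.0]
for _ in range(2000):
    x.append(1.5 * x[-1] - 0.7 * x[-2] + random.gauss(0, 1))
coeffs, err = lpc(x, 2)
```

In an ECG pipeline, the coefficients (or the spectrum they define) for each heartbeat segment would serve as the non-fiducial feature vector.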
Abstract: The ionizing irradiation of livestock wastewater for the removal of nitrogen and phosphorus was studied in the presence of a
natural zeolite. The feasibility of a combined process of zeolite ion
exchange and electron beam irradiation of livestock wastewater was
also investigated. The removal efficiencies of NH4+-N, T-N and T-P
were significantly enhanced by electron beam irradiation after zeolite
ion exchange as a pre-treatment. The presence of silica zeolite
accelerated the decomposition rate of livestock wastewater in the
electron beam irradiation process. These results indicate that the
combined process of zeolite ion exchange and electron beam
irradiation has the potential for the treatment of livestock wastewater.
Abstract: Over the last two decades, owing to the hostile environment of the internet, concerns about the confidentiality of information have grown at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial domain data hiding techniques, the information is embedded directly in the image plane itself. In transform domain data hiding techniques, the image is first converted from the spatial domain to some other domain, and the secret information is then embedded so that it remains more secure against attack. Information hiding algorithms in the time or spatial domain have high capacity but relatively low robustness. In contrast, algorithms in transform domains, such as the DCT and DWT, have a certain robustness against some multimedia processing. In this work the authors propose a novel steganographic method for hiding information in the transform domain of a grayscale image. The proposed approach converts the gray level image to the transform domain using a discrete integer wavelet transform implemented through the lifting scheme. It performs a 2-D lifting wavelet decomposition of the cover image with the lifted Haar wavelet and computes the approximation coefficient matrix CA and the detail coefficient matrices CH, CV, and CD. The pixel mapping method (PMM) is then applied to these coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. The solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
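The 2-D Haar lifting decomposition described above can be sketched in pure Python. The integer predict/update lifting form is standard; the 4x4 cover block is an invented example:

```python
# Minimal sketch of one level of 2-D integer Haar lifting, producing the
# CA, CH, CV, CD subbands referred to in the abstract.

def haar_lift_1d(row):
    """One integer Haar lifting step: predict (detail) then update (approx)."""
    s = row[0::2]                                   # even samples
    d = row[1::2]                                   # odd samples
    d = [di - si for si, di in zip(s, d)]           # predict step
    s = [si + (di >> 1) for si, di in zip(s, d)]    # update step
    return s, d

def haar_lift_2d(img):
    """Lift along rows, then along columns; return (CA, CH, CV, CD)."""
    lo, hi = [], []
    for row in img:
        s, d = haar_lift_1d(row)
        lo.append(s)
        hi.append(d)
    def cols(mat):
        return [list(c) for c in zip(*mat)]
    ca_cv = [haar_lift_1d(col) for col in cols(lo)]
    ch_cd = [haar_lift_1d(col) for col in cols(hi)]
    CA = cols([s for s, _ in ca_cv]); CV = cols([d for _, d in ca_cv])
    CH = cols([s for s, _ in ch_cd]); CD = cols([d for _, d in ch_cd])
    return CA, CH, CV, CD

cover = [[52, 55, 61, 66],        # hypothetical 4x4 gray-level block
         [63, 59, 55, 90],
         [62, 59, 68, 113],
         [63, 58, 71, 122]]
CA, CH, CV, CD = haar_lift_2d(cover)
```

In the proposed scheme, the PMM embedding would then modify these coefficient matrices before the inverse lifting reconstructs the stego image.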
Abstract: In this work, vector autoregression (VAR) is used to study the dynamics of agricultural exports and imports and the real effective exchange rate (REER). To analyze the interactions, the impulse-response function, variance decomposition, and Granger causality tests are used, together with the Johansen methodology to test for cointegrating relations. The REER Granger-causes agricultural exports and imports. The influence of REER innovations on agricultural exports and imports is not very large, and the duration of the effects is short. The REER has an immediate positive effect, and after the tenth year its effect on agricultural exports flattens out. Evidence of one cointegrating vector exists. In the short run, the REER has smaller effects on exports and imports than in the long run.
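Impulse-response analysis of the kind used above can be illustrated on a hypothetical two-variable VAR(1); the coefficient matrix below is invented for illustration, not estimated from the study's data:

```python
# Impulse responses of a two-variable VAR(1), y_t = A y_{t-1} + e_t:
# the response h periods after a shock is A^h applied to the shock vector.

A = [[0.5, 0.2],     # effect of lagged (export, REER) on export (illustrative)
     [0.0, 0.8]]     # effect of lagged (export, REER) on REER (illustrative)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def impulse_response(A, shock, horizon):
    """Responses of all variables to an initial shock, horizons 0..horizon."""
    resp = [shock]
    for _ in range(horizon):
        resp.append(matvec(A, resp[-1]))
    return resp

# Unit innovation in the REER equation; track its effect on exports over time.
irf = impulse_response(A, [0.0, 1.0], 10)
export_path = [r[0] for r in irf]
```

With both eigenvalues inside the unit circle, the response peaks after a few periods and then dies out, matching the "short duration of effects" pattern described in the abstract.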
Abstract: We describe a novel method for removing noise of unknown variance from microarrays in the wavelet domain. The method is based on smoothing the coefficients of the highest subbands. Specifically, we decompose the noisy microarray into wavelet subbands, apply smoothing within each highest subband, and reconstruct the microarray from the modified wavelet coefficients. This process is applied a single time, and exclusively to the first level of decomposition; i.e., in most cases a multiresolution analysis is not necessary. Denoising results compare favorably with most methods currently in use.
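The decompose-smooth-reconstruct idea can be sketched in 1-D with a single-level Haar decomposition and a 3-point moving average as the smoother. The flat test profile and the choice of smoother are illustrative assumptions, not the paper's exact operator:

```python
# One-level Haar decomposition of a noisy profile, smoothing of the detail
# (highest) subband, and reconstruction from the modified coefficients.
import random

def haar_decompose(x):
    a = [(x[2*i] + x[2*i+1]) / 2.0 for i in range(len(x) // 2)]  # approximation
    d = [(x[2*i] - x[2*i+1]) / 2.0 for i in range(len(x) // 2)]  # detail
    return a, d

def haar_reconstruct(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def moving_average(v, w=3):
    half = w // 2
    return [sum(v[max(0, i - half):i + half + 1])
            / len(v[max(0, i - half):i + half + 1]) for i in range(len(v))]

random.seed(1)
clean = [1.0] * 64                                   # flat test profile
noisy = [c + random.gauss(0, 0.3) for c in clean]
a, d = haar_decompose(noisy)
den = haar_reconstruct(a, moving_average(d))         # smooth only the detail band
```

Only the first decomposition level is touched, mirroring the single-pass, single-level scheme of the abstract.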
Abstract: Sleep stage scoring is the process of classifying the stage of sleep the subject is in. Sleep is classified into two states based on a constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further divided into four stages (1-4). These stages and the state of wakefulness are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording using wavelet packet decomposition was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level, based on a 20th-order Daubechies filter, was used to extract features from the EEG signal. A feature vector of 54 features was formed; it was reduced to 25 features using the gain ratio method and fed into a regression tree classifier. The regression trees were trained using 67% of the available records, selected based on cross-validation; the remaining records were used for testing the classifier. The overall correct classification rate of the proposed method was found to be around 75%, which is acceptable compared with techniques in the literature.
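The gain ratio criterion used above for feature reduction is information gain normalized by split information; a minimal sketch on invented feature/label data:

```python
# Gain ratio = information gain / split information, for discrete features.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    base = entropy(labels)
    n = len(labels)
    gain, split_info = base, 0.0
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        p = len(subset) / n
        gain -= p * entropy(subset)           # subtract conditional entropy
        split_info -= p * log2(p)             # entropy of the split itself
    return gain / split_info if split_info > 0 else 0.0

# Toy data: feature A separates the two stages perfectly; B is uninformative.
labels = ["REM", "REM", "NREM", "NREM"]
feat_a = ["low", "low", "high", "high"]
feat_b = ["x", "y", "x", "y"]
```

Ranking features by this score and keeping the top 25 would reproduce the reduction step described in the abstract.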
Abstract: This paper considers the control of the longitudinal
flight dynamics of an F-16 aircraft. The primary design objective
is model-following of the pitch rate q, which is the preferred
system for aircraft approach and landing. Regulation of the aircraft
velocity V (or the Mach-hold autopilot) is also considered, but
as a secondary objective. The problem is challenging because the
system is nonlinear, and also non-affine in the input. A sliding
mode controller is designed for the pitch rate that exploits the
modal decomposition of the linearized dynamics into its short-period
and phugoid approximations. The inherent robustness of the SMC
design provides a convenient way to design controllers without gain
scheduling, with a steady-state response that is comparable to that
of a conventional polynomial-based gain-scheduled approach with
integral control, but with improved transient performance. Integral
action is introduced in the sliding mode design using the recently
developed technique of “conditional integrators”, and it is shown that
robust regulation is achieved with asymptotically constant exogenous
signals, without degrading the transient response. Through extensive
simulation on the nonlinear multiple-input multiple-output (MIMO)
longitudinal model of the F-16 aircraft, it is shown that the conditional
integrator design outperforms the one based on the conventional linear
control, without requiring any scheduling.
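The conditional-integrator idea can be illustrated on a toy scalar plant (not the F-16 model): the integrator state acts like a pure integrator only inside the boundary layer of the saturated sliding variable, giving integral action without windup. Plant, gains and reference below are all hypothetical:

```python
# Sliding mode control with a conditional integrator on the toy plant
# xdot = x + u: s = k0*sigma + e, sigmadot = -k0*sigma + mu*sat(s/mu).

def sat(z):
    return max(-1.0, min(1.0, z))

k0, mu, beta = 1.0, 0.1, 5.0       # illustrative design gains
r = 1.0                            # constant reference for x
x, sigma = 0.0, 0.0
dt = 0.001
for _ in range(10000):             # simulate 10 s with forward Euler
    e = x - r
    s = k0 * sigma + e             # sliding variable with integral term
    u = -beta * sat(s / mu)        # saturated sliding mode control
    sigma += dt * (-k0 * sigma + mu * sat(s / mu))  # conditional integrator
    x += dt * (x + u)              # uncertain plant: xdot = x + u
```

Inside the boundary layer the loop reduces to a stable linear system whose equilibrium has zero tracking error; outside it, sigma stays bounded, so the transient is not degraded by integrator windup.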
Abstract: When trying to enumerate all BIBDs for given parameters, the natural solution space is huge and grows rapidly with the number of points of the design. Therefore, constructive enumerations are often carried out by assuming additional constraints on the design's structure, automorphisms being the most commonly used ones. It remains a hard task to construct designs with a trivial automorphism group, i.e. those with no additional symmetry, although it is believed that most BIBDs belong to that case. In this paper, a large number of new designs with parameters 2-(13, 5, 5), 2-(16, 6, 5) and 2-(21, 6, 4) are constructed by assuming an action of an automorphism of order 3. Moreover, it was possible to construct millions of such designs with no non-trivial automorphisms.
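The parameter sets above can be checked against the standard admissibility conditions for 2-designs (necessary, not sufficient); a minimal sketch:

```python
# A 2-(v, k, lam) design must satisfy r = lam*(v-1)/(k-1) and b = v*r/k
# with r (replication number) and b (number of blocks) integers.

def bibd_params(v, k, lam):
    """Return (b, r) if the integrality conditions hold, else None."""
    r_num = lam * (v - 1)
    if r_num % (k - 1):
        return None
    r = r_num // (k - 1)
    if (v * r) % k:
        return None
    return v * r // k, r

params = {(v, k, lam): bibd_params(v, k, lam)
          for (v, k, lam) in [(13, 5, 5), (16, 6, 5), (21, 6, 4)]}
```

All three parameter sets from the paper pass, with (b, r) = (39, 15), (40, 15) and (56, 16) respectively.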
Abstract: Finding the shortest path between two positions is a
fundamental problem in transportation, routing, and communications
applications. In robot motion planning, the robot should pass around
the obstacles touching none of them, i.e. the goal is to find a
collision-free path from a starting to a target position. This task has
many specific formulations depending on the shape of obstacles,
allowable directions of movements, knowledge of the scene, etc.
Research on path planning has yielded many fundamentally different
approaches to its solution, mainly based on various decomposition
and roadmap methods. In this paper, we show a possible use of
visibility graphs in point-to-point motion planning in the Euclidean
plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second
application area, investigated here, is focused on problems of finding
minimal networks connecting a set of given points in the plane using
either only straight connections between pairs of points (minimum
spanning tree) or allowing the addition of auxiliary points to the set
to obtain shorter spanning networks (minimum Steiner tree).
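The minimum spanning tree construction mentioned above can be sketched with Kruskal's algorithm and a union-find structure; the four-point input is an invented example:

```python
# Euclidean minimum spanning tree via Kruskal's algorithm with union-find.
from math import dist

def mst(points):
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    # All pairwise edges, sorted by Euclidean length.
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    total, chosen = 0.0, []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # edge joins two components
            parent[ri] = rj
            chosen.append((i, j))
            total += w
    return total, chosen

# Corners of the unit square: the MST uses three unit-length sides.
length, tree_edges = mst([(0, 0), (1, 0), (0, 1), (1, 1)])
```

A minimum Steiner tree on the same points, which may add auxiliary points, can only be shorter than this spanning tree.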
Abstract: The dorsal hand vein pattern is an emerging biometric which has been attracting the attention of researchers of late. Research is being carried out on existing techniques in the hope of improving them or finding more efficient ones. In this work, Principal Component Analysis (PCA), a successful method originally applied to the face biometric, is modified using the Cholesky decomposition and the Lanczos algorithm to extract dorsal hand vein features. The modified technique decreases the number of computations and hence the processing time. The eigenveins were successfully computed and projected onto the vein space. The system was tested on a database of 200 images, using a threshold value of 0.9 to obtain the False Acceptance Rate (FAR) and False Rejection Rate (FRR). The modified algorithm is desirable when developing biometric security systems since it significantly decreases the matching time.
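The paper's Cholesky/Lanczos acceleration is not reproduced here; as a simpler stand-in, power iteration illustrates the underlying task that such methods speed up, extracting the leading eigenvector (an "eigenvein") of a covariance matrix. The 2x2 matrix is invented:

```python
# Power iteration for the dominant eigenpair of a symmetric matrix.
from math import sqrt

def power_iteration(C, iters=100):
    v = [1.0] * len(C)
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(C))]
        norm = sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]           # renormalize each iteration
    # Rayleigh quotient gives the corresponding eigenvalue estimate.
    lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(len(v)))
              for i in range(len(v)))
    return lam, v

C = [[2.0, 1.0],
     [1.0, 2.0]]                            # eigenvalues 3 and 1
lam, v = power_iteration(C)
```

In a real pipeline C would be the (much larger) covariance matrix of vectorized vein images, which is where Lanczos-type iteration pays off.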
Abstract: In this paper we present discretization and decomposition methods for a multi-component transport model of a chemical vapor deposition (CVD) process. CVD processes are used to manufacture deposition layers or bulk materials. In our transport model we simulate the deposition of thin layers. The microscopic model is based on heavy particles, which are derived by approximately solving a linearized multi-component Boltzmann equation. For the drift process of the particles we propose diffusion-reaction equations, as well as for the effects of heat conduction. We concentrate on solving the diffusion-reaction equation with analytical and numerical methods. For the chemical processes, modelled with reaction equations, we propose decomposition methods and decouple the multi-component models into simpler systems of differential equations. In the numerical experiments we present the computational results of our proposed models.
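The decoupling idea can be sketched with Lie operator splitting on a scalar diffusion-reaction equation; the grid, coefficients and explicit scheme below are illustrative choices, not the paper's discretization:

```python
# Lie splitting for u_t = D*u_xx - k*u on (0,1), zero Dirichlet boundaries:
# each step advances diffusion (explicit finite differences), then solves
# the reaction part exactly.
from math import sin, pi, exp

D, k = 0.1, 1.0                     # illustrative diffusion/reaction rates
N = 21                              # grid nodes on [0, 1]
h = 1.0 / (N - 1)
dt, steps = 0.005, 100              # dt below the stability limit h^2/(2D)
u = [sin(pi * i * h) for i in range(N)]
for _ in range(steps):
    lap = [0.0] + [(u[i-1] - 2*u[i] + u[i+1]) / h**2
                   for i in range(1, N - 1)] + [0.0]
    u = [u[i] + dt * D * lap[i] for i in range(N)]   # diffusion substep
    u = [ui * exp(-k * dt) for ui in u]              # exact reaction substep
# Analytic solution of the unsplit problem for comparison:
t = dt * steps
exact = [exp(-(D * pi**2 + k) * t) * sin(pi * i * h) for i in range(N)]
```

For this linear example the reaction operator commutes with the diffusion operator, so the splitting introduces no extra error; for the coupled multi-component systems of the paper, splitting trades a small splitting error for much simpler subproblems.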
Abstract: In order to make a conventional implicit algorithm applicable to large-scale parallel computers, an interface prediction and correction method for the discontinuous finite element method is presented to solve time-dependent neutron transport equations in 2-D cylindrical geometry. Domain decomposition is adopted in the computational domain. The numerical experiments show that our parallel algorithm with explicit prediction and implicit correction has good precision, parallelism and simplicity. In particular, it can reach perfect speedup even on hundreds of processors for large-scale problems.
Abstract: Electrochemical oxidation of Reactive Black-5 (RB-5) was conducted for its degradation using a DSA-type Ti/RuO2-SnO2-Sb2O5 electrode. For the electro-oxidation study, the electrode was fabricated in-house in the laboratory using titanium as the substrate. The substrate was coated with the metal oxides RuO2, Sb2O5 and SnO2 by the thermal decomposition method. A laboratory-scale batch reactor was used for degradation and decolorization studies at pH 2, 7 and 11. The current density (50 mA/cm2) and the distance between the electrodes (8 mm) were kept constant for all experiments. Under identical conditions, the removal of color, COD and TOC at initial pH 2 was 99.40%, 55% and 37%, respectively, for an initial RB-5 concentration of 100 mg/L. The surface morphology and composition of the fabricated electrode coatings were characterized using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX), respectively. The coating microstructure was analyzed by X-ray diffraction (XRD). The results of this study further revealed that almost 90% of the oxidation occurred within 5-10 minutes.
Abstract: Solar sunspot rotation and latitudinal bands are studied using intelligent computation methods. A combination of an image fusion method with quadtree decomposition is used to obtain quantitative values for the latitudes of the trajectories on the sun's surface along which sunspots rotate. Daily solar images taken with the Solar and Heliospheric Observatory (SOHO) satellite are fused for each month separately. The fused image is then decomposed with the quadtree decomposition method in order to obtain precise information about the latitudes of sunspot trajectories. Such analysis is useful for gathering information about the regions on the sun's surface, and the coordinates in space, that are more exposed to solar geomagnetic storms, tremendous flares and the hot plasma gases that permeate interplanetary space, and can thus help protect technical systems. Here, sunspot images from September, October and November 2001 are used to study the magnetic behavior of the sun.
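Quadtree decomposition as used above can be sketched on a toy image: a square block is split into four quadrants recursively until each block is homogeneous (here, all values equal). The 4x4 "image" is a stand-in for a fused solar image:

```python
# Recursive quadtree decomposition of a square integer image.

def quadtree(img, x=0, y=0, size=None):
    """Return a list of homogeneous blocks as (x, y, size, value)."""
    if size is None:
        size = len(img)
    vals = {img[y + i][x + j] for i in range(size) for j in range(size)}
    if len(vals) == 1 or size == 1:
        return [(x, y, size, img[y][x])]
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += quadtree(img, x + dx, y + dy, half)
    return blocks

image = [[0, 0, 1, 1],            # toy fused image: three uniform quadrants,
         [0, 0, 1, 1],            # one mixed quadrant that splits further
         [0, 0, 2, 3],
         [0, 0, 2, 2]]
blocks = quadtree(image)
```

On real solar images the homogeneity test would use an intensity tolerance rather than exact equality, so large quiet regions collapse into big blocks while sunspot latitudes are resolved finely.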
Abstract: Optimal cultural site selection is one of the ways that
can lead to the promotion of citizenship culture in addition to
ensuring the health and leisure of city residents. This study examines
the social and cultural needs of the community and optimal cultural
site allocation and after identifying the problems and shortcomings,
provides a suitable model for finding the best location for these
centers where there is the greatest impact on the promotion of
citizenship culture. Non-scientific methods, on the other hand, cause
irreversible impacts on the urban environment and citizens, whereas
modern, efficient methods can reduce these impacts. One such method is
the use of geographical information systems (GIS). In this study, the
Analytic Hierarchy Process (AHP) method was used to locate the optimal
cultural site. AHP rests on three principles: decomposition, comparative
analysis, and the combination of preferences. The objectives of this research include
providing optimal contexts for passing time and performing cultural
activities by Shiraz residents and also proposing construction of some
cultural sites in different areas of the city. The results of this study
show the correct positioning of cultural sites based on social needs of
citizens. Thus, considering the population parameters and access radii, the GIS and AHP model for locating cultural centers can meet the social needs of citizens.
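The comparative-analysis step of AHP can be sketched with the row geometric mean method on a pairwise comparison matrix; the 3x3 matrix below compares three invented criteria and is purely illustrative:

```python
# Priority vector from an AHP pairwise comparison matrix via the row
# geometric mean method.

def geometric_mean(row):
    p = 1.0
    for x in row:
        p *= x
    return p ** (1.0 / len(row))

def ahp_priorities(M):
    gms = [geometric_mean(row) for row in M]
    total = sum(gms)
    return [g / total for g in gms]         # normalized criterion weights

# Hypothetical criteria, e.g. population density, access radius, land cost:
# criterion 1 judged 3x as important as 2 and 5x as important as 3, etc.
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_priorities(M)
```

The resulting weights would then score candidate GIS layers when combining preferences across sites.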
Abstract: We study the performance of a compressed beamforming-weight feedback technique in a generalized triangular decomposition (GTD) based MIMO system. GTD is a beamforming technique that enjoys QoS flexibility. The technique, however, performs at its optimum only when full knowledge of the channel state information (CSI) is available at the transmitter. This is impossible in a real system, where there are channel estimation errors and limited feedback. We suggest a way to implement quantized beamforming
feedback. We suggest a way to implement the quantized beamforming
weights feedback, which can significantly reduce the feedback data,
on GTD-based MIMO system and investigate the performance of
the system. Interestingly, we found that compressed beamforming
weights feedback does not degrade the BER performance of the
system at low input power, while the channel estimation error
and quantization do. For comparison, GTD is more sensitive to
compression and quantization, while SVD is more sensitive to the
channel estimation error. We also explore the performance of a GTD-based MU-MIMO system, and find that the BER performance starts to degrade significantly at around -20 dB channel estimation error.
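A minimal sketch of quantized weight feedback, assuming simple uniform scalar quantization of each weight component in [-1, 1] (the paper's compression scheme may differ); the weight vector is an invented example:

```python
# Uniform B-bit quantization of beamforming weight components for limited
# feedback: only the indices are fed back; per-component error <= STEP/2.

B = 4                                   # feedback bits per component (assumed)
LEVELS = 2 ** B
STEP = 2.0 / LEVELS

def quantize(x):
    """Map x in [-1, 1] to a B-bit index."""
    idx = int((x + 1.0) / STEP)
    return min(idx, LEVELS - 1)

def dequantize(idx):
    return -1.0 + (idx + 0.5) * STEP    # midpoint of the quantization cell

weights = [0.73, -0.41, 0.15, -0.88]    # hypothetical beamforming column
fed_back = [quantize(w) for w in weights]
recovered = [dequantize(i) for i in fed_back]
errors = [abs(w - r) for w, r in zip(weights, recovered)]
```

Feeding back 4-bit indices instead of full-precision values is the kind of compression whose BER impact the abstract evaluates.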
Abstract: The purpose of this study is to introduce a new
interface program to calculate a dose distribution with Monte Carlo method in complex heterogeneous systems such as organs or tissues
in proton therapy. This interface program was developed under
MATLAB software and includes a friendly graphical user interface
with several tools such as image properties adjustment or results display. Quadtree decomposition technique was used as an image
segmentation algorithm to create optimum geometries from Computed Tomography (CT) images for dose calculations of proton
beam. The result of this technique is a number of non-overlapping squares of different sizes in every image. In this way,
the resolution of image segmentation is high enough in and near
heterogeneous areas to preserve the precision of dose calculations
and is low enough in homogeneous areas to reduce the number of
cells directly. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The validation of this method has been done in two ways: first, by comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, by comparison with data based on the polybinary tissue calibration method, performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while the region of interest is selected manually, and gives a plot of the proton beam dose distribution superimposed onto the CT images.
Abstract: In this paper, we develop an accurate and efficient Haar wavelet method for the well-known FitzHugh-Nagumo equation. The proposed scheme can be applied to a wide class of nonlinear reaction-diffusion equations. The power of this manageable method is confirmed. Moreover, the Haar wavelet approach is found to be accurate, simple, fast, flexible and convenient, with small computational cost, making it computationally attractive.
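Haar wavelet methods of this kind expand the solution in Haar functions; a minimal sketch builds the normalized 4x4 Haar matrix and checks that its rows are orthonormal, so the expansion is trivially invertible. This is the generic construction, not the paper's full collocation scheme:

```python
# Normalized 4x4 Haar matrix: scaling function, mother wavelet, and two
# translates at the finer scale; rows form an orthonormal basis of R^4.
from math import sqrt

H = [[0.5, 0.5, 0.5, 0.5],                 # scaling function
     [0.5, 0.5, -0.5, -0.5],               # mother wavelet
     [1/sqrt(2), -1/sqrt(2), 0.0, 0.0],    # finer-scale wavelet, 1st translate
     [0.0, 0.0, 1/sqrt(2), -1/sqrt(2)]]    # finer-scale wavelet, 2nd translate

def gram(M):
    """Return M * M^T."""
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in M] for r1 in M]

G = gram(H)                                 # should be the identity matrix
```

Orthonormality is what makes expansion coefficients cheap to compute and the resulting collocation systems well conditioned.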
Abstract: In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher-level decomposition, where M and N stand for the filter length and the row width of the image, respectively. Furthermore, the proposed 2D DWT processes in the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT of an N×N image in a period of N^2(1-2^(-2J))/3, where J is the number of decomposition levels. Compared to existing approaches, the proposed architecture shows 100% hardware utilization and high throughput rates. To mitigate the long critical-path delay due to the cascaded lattices, we can apply a four-stage pipeline technique while retaining 100% hardware utilization. The proposed architecture can be applied in real-time video signal processing.
Abstract: A proof of convergence of a new continuation algorithm for computing the Analytic SVD of a large sparse parameter-dependent matrix is given. The algorithm itself was developed and numerically tested in [5].