Abstract: AAM has been successfully applied to face alignment,
but its performance is very sensitive to initial values. When the
initial values lie far from the global optimum, AAM-based face
alignment is likely to converge to a local minimum. In this paper,
we propose a progressive AAM-based face alignment algorithm that
first finds the feature parameter vector fitting the inner facial
feature points and then localizes the feature points of the whole
face using this information. The proposed algorithm exploits the
fact that the feature points of the inner part of the face are
less variant and less affected by the background surrounding the
face than those of the outer part (such as the chin contour). The
algorithm consists of two stages: a modeling and relation
derivation stage and a fitting stage. The modeling and relation
derivation stage constructs two AAM models, an inner face model
and a whole face model, and then derives a relation matrix between
the inner face AAM parameter vector and the whole face AAM
parameter vector. In the fitting stage, the algorithm aligns the
face progressively in two phases. In the first phase, it finds the
feature parameter vector fitting the inner face AAM model to a new
input face image; in the second phase, it localizes the whole
facial feature points of the input image with the whole face AAM
model, using an initial parameter vector estimated from the inner
feature parameter vector obtained in the first phase and the
relation matrix obtained in the first stage. Experiments verify
that the proposed progressive AAM-based face alignment algorithm
is more robust to pose, illumination, and face background than the
conventional basic AAM-based face alignment algorithm.
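The relation-derivation step described above can be sketched as an ordinary least-squares fit between the two parameter vectors. Everything below (dimensions, data) is a synthetic illustration, not the authors' implementation:

```python
import numpy as np

# Sketch of the relation-derivation step: learn a linear relation matrix R
# mapping inner-face AAM parameter vectors to whole-face parameter vectors.
# All data below are synthetic stand-ins for real AAM fitting results.
rng = np.random.default_rng(0)
P_inner = rng.normal(size=(200, 8))    # 200 training faces, 8 inner params
R_true = rng.normal(size=(8, 12))      # unknown "ground truth" relation
P_whole = P_inner @ R_true + 0.01 * rng.normal(size=(200, 12))

# Least-squares estimate of the relation matrix: P_whole ~ P_inner @ R
R, *_ = np.linalg.lstsq(P_inner, P_whole, rcond=None)

# Fitting stage, phase two: an inner-face fit seeds the whole-face search.
p_inner_new = rng.normal(size=8)
p_whole_init = p_inner_new @ R         # initial whole-face parameters
```

In this sketch the whole-face fit then starts from `p_whole_init` instead of a generic mean shape, which is the mechanism the abstract credits for the added robustness.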
Abstract: In this study, the fuzzy integrated logical forecasting (FILF) method is extended to multivariate systems by using a vector autoregressive model. The fuzzy time series forecasting (FTSF) method was introduced by Song and Chissom [1]-[2] and later improved by Chen. Unlike the existing literature, the proposed model is compared not only with previous FTS models but also with conventional time series methods such as the classical vector autoregressive model. Cluster optimization is based on the C-means clustering method. An empirical study is performed on the prediction of the chartering rates of a group of dry bulk cargo ships. The root mean squared error (RMSE) metric is used to compare the results of the methods, and the proposed method outperforms both traditional FTS methods and classical time series methods.
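The RMSE comparison the study relies on is straightforward to reproduce in outline; the series and forecasts below are hypothetical placeholders, not the chartering-rate data:

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error, the metric used to compare methods."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

# Hypothetical chartering-rate series with two competing forecasts.
actual = [100, 104, 103, 108, 110]
fts_fc = [99, 105, 104, 107, 111]    # fuzzy time series forecast (made up)
var_fc = [98, 102, 106, 107, 113]    # classical VAR forecast (made up)
e_fts = rmse(actual, fts_fc)
e_var = rmse(actual, var_fc)
```

A lower RMSE value indicates the better forecaster on the evaluation window.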
Abstract: The objective of this research was to study the
influence of the marketing mix on customers' purchasing behavior.
A total of 397 responses were collected from customers who were
patrons of the Chatuchak Plaza market. A questionnaire was
utilized as the data collection tool. Statistics used in this
research included frequency, percentage, mean, standard deviation,
and multiple regression analysis. Data were analyzed using the
Statistical Package for the Social Sciences. The findings revealed
that the majority of respondents were male, aged between 25-34
years, held an undergraduate degree, and were married and living
together. The average income of respondents was between
10,001-20,000 baht. In terms of occupation, the majority worked
for private companies. The analysis disclosed that three marketing
mix variables, price (X2), place (X3), and product (X1),
influenced the frequency of customer purchasing. Together these
three variables predicted about 30 percent of the variation in
purchase frequency through the equation Y1 = 6.851 + .921(X2) +
.949(X3) + .591(X1). It was also found that two marketing mix
variables influenced the amount of customer purchasing: physical
characteristics (X6) and process (X7). These two variables
predicted about 17 percent of the variation in purchase amount
through the equation Y2 = 2276.88 + 2980.97(X6) + 2188.09(X7).
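The two reported regression equations can be evaluated directly; only the coefficients come from the text, while the predictor ratings below are hypothetical inputs on the questionnaire's scale:

```python
def purchase_frequency(x1, x2, x3):
    """Y1 = 6.851 + .921(X2) + .949(X3) + .591(X1): product, price, place."""
    return 6.851 + 0.921 * x2 + 0.949 * x3 + 0.591 * x1

def purchase_amount(x6, x7):
    """Y2 = 2276.88 + 2980.97(X6) + 2188.09(X7): physical chars., process."""
    return 2276.88 + 2980.97 * x6 + 2188.09 * x7

# Hypothetical ratings: price=3, place=5, product=4; X6=4, X7=3.
y1 = purchase_frequency(x1=4, x2=3, x3=5)   # predicted purchase frequency
y2 = purchase_amount(x6=4, x7=3)            # predicted purchase amount (baht)
```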
Abstract: Image compression plays a vital role in today's
communication. Limited allocated bandwidth leads to slower
communication, so to improve the rate of transmission within the
limited bandwidth, the image data must be compressed before
transmission. There are basically two types of compression:
1) lossy compression and 2) lossless compression. Although lossy
compression gives a higher compression ratio than lossless
compression, the accuracy of the retrieved image is lower. The
JPEG and JPEG2000 image compression systems use Huffman coding for
image compression. The JPEG2000 coding system uses the wavelet
transform, which decomposes the image into different levels, where
the coefficients in each sub-band are uncorrelated with the
coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW)
coding exploits the multi-resolution properties of the wavelet
transform to give a computationally simple algorithm with better
performance than existing wavelet-based coders. For further
improvement of compression applications, other coding methods have
recently been suggested; an ANN-based approach is one such method.
Artificial neural networks have been applied to many problems in
image processing and have demonstrated their superiority over
classical methods when dealing with noisy or incomplete data in
image compression applications. A performance analysis over
different images is presented for an EZW coding system combined
with the error backpropagation algorithm. The implementation and
analysis show approximately 30% higher accuracy in the retrieved
image compared to the existing EZW coding system.
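A minimal sketch of the wavelet sub-band decomposition underlying EZW-style coders, using one 2-D Haar level as a stand-in (the paper's system uses the JPEG2000 wavelet and EZW proper):

```python
import numpy as np

def haar_2d_level(img):
    """One level of the 2-D Haar wavelet transform, returning the four
    sub-bands (LL, LH, HL, HH) whose coefficients are decorrelated."""
    a = (img[::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[::2, :] - img[1::2, :]) / 2.0   # vertical detail
    LL = (a[:, ::2] + a[:, 1::2]) / 2.0
    LH = (a[:, ::2] - a[:, 1::2]) / 2.0
    HL = (d[:, ::2] + d[:, 1::2]) / 2.0
    HH = (d[:, ::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

# Lossy step: small detail coefficients are discarded before coding.
img = np.outer(np.arange(8.0), np.ones(8))   # toy 8x8 "image"
LL, LH, HL, HH = haar_2d_level(img)
kept = sum(int(np.count_nonzero(np.abs(b) > 0.1)) for b in (LH, HL, HH))
```

EZW then scans these sub-bands coarse-to-fine, exploiting the fact that insignificant coarse coefficients predict insignificant finer ones.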
Abstract: The performance of high-resolution schemes is investigated for unsteady, inviscid, compressible multiphase flows. An Eulerian diffuse interface approach has been chosen for the simulation of multicomponent flow problems. The reduced five-equation and seven-equation models are used with the HLL and HLLC approximations. The authors demonstrate the advantages and disadvantages of both the seven-equation and five-equation models by studying their performance with the HLL and HLLC algorithms on a simple test case. The seven-equation model is based on the two-pressure, two-velocity concept of Baer–Nunziato [10], while the five-equation model is based on the mixture velocity and pressure. Numerical evaluations of the two variants of Riemann solvers have been conducted for the classical one-dimensional air-water shock tube and compared with the analytical solution for error analysis.
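The HLL flux used by both models follows a single closed form; a scalar-conservation-law sketch (with wave-speed estimates `sL`, `sR` supplied by the caller) is:

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL approximate Riemann flux for a scalar conservation law.
    f: flux function; sL, sR: lower/upper wave-speed estimates."""
    if sL >= 0.0:            # all waves move right: pure left flux
        return f(uL)
    if sR <= 0.0:            # all waves move left: pure right flux
        return f(uR)
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

# Linear advection f(u) = a*u with a > 0 reduces to simple upwinding.
a = 1.0
flux = hll_flux(2.0, 5.0, lambda u: a * u, sL=min(a, 0.0), sR=max(a, 0.0))
```

HLLC extends this two-wave picture with an intermediate contact wave, which is what restores sharp material interfaces in the multiphase models discussed above.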
Abstract: Renewable energy systems are becoming a topic of
great interest and investment worldwide. In recent years, wind
power generation has experienced very fast development across the
world. Planning and successful implementation of good wind power
plant projects require wind potential measurements. In these
projects, the effective choice of the micro-location for wind
potential measurements, the installation of the measurement
station with the appropriate measuring equipment, its maintenance,
and the analysis of the acquired data on wind potential
characteristics are of great importance. In this paper, a wavelet
transform is applied to analyze wind speed data to gain insight
into the characteristics of the wind and to select suitable
locations that could be candidates for wind farm construction.
This approach is shown to be a useful tool in the investigation of
wind potential.
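One simple way to use a wavelet transform for this kind of wind-speed analysis is to track detail energy per scale; the sketch below uses an orthonormal Haar transform and synthetic wind data as stand-ins for the paper's actual choices:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, float)
    approx = (x[::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def energy_by_scale(x, levels):
    """Detail energy per scale: a rough fingerprint of wind variability."""
    energies = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        energies.append(float(np.sum(d ** 2)))
    return energies

# Hypothetical 10-minute-average wind speed record (m/s).
rng = np.random.default_rng(1)
wind = 8.0 + np.sin(np.linspace(0, 20 * np.pi, 1024)) \
           + 0.5 * rng.normal(size=1024)
energies = energy_by_scale(wind, levels=4)
```

Because the Haar transform is orthonormal, the sub-band energies partition the signal energy exactly, so the per-scale profile is directly comparable across candidate sites.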
Abstract: Determining depth of anesthesia is a challenging problem
in the context of biomedical signal processing. Various methods
have been suggested to determine a quantitative index of depth of
anesthesia, but most of these methods suffer from high sensitivity
during surgery. A novel method based on the energy scattering of
samples in the wavelet domain is suggested to represent the basic
content of the electroencephalogram (EEG) signal. In this method,
the EEG signal is first decomposed into different sub-bands; then
the samples are squared and an energy sequence is constructed
across each scale and time, which is normalized; finally, the
entropy of the resulting sequences is proposed as a reliable
index. Empirical results showed that applying the proposed method
to EEG signals can classify the awake, moderate, and deep
anesthesia states similarly to BIS.
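The described pipeline (decompose, square, normalize, entropy) can be sketched as follows; the Haar wavelet and the synthetic signal are stand-ins for the paper's actual decomposition and EEG data:

```python
import numpy as np

def anesthesia_index(eeg, levels=4):
    """Sketch of the described index: decompose the signal into sub-bands
    (Haar wavelet here as a stand-in), square the samples, normalize the
    energy sequence, and take its entropy per sub-band."""
    x = np.asarray(eeg, float)
    entropies = []
    for _ in range(levels):
        approx = (x[::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[::2] - x[1::2]) / np.sqrt(2.0)
        energy = detail ** 2                   # squared samples
        p = energy / (energy.sum() + 1e-12)    # normalized energy sequence
        entropies.append(float(-(p * np.log2(p + 1e-12)).sum()))
        x = approx
    return entropies

rng = np.random.default_rng(0)
idx = anesthesia_index(rng.normal(size=1024))  # synthetic "EEG"
```

Intuitively, deeper anesthesia concentrates EEG energy into fewer, slower patterns, which lowers the entropy of the normalized energy sequences.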
Abstract: Lighvan cheese is basically made from sheep milk in
the area of Sahand mountainside which is located in the North West
of Iran. The main objective of this study was to investigate the effect
of enterococci isolated from traditional Lighvan cheese on the quality
of Iranian UF white cheese during ripening. The experimental design was
split plot based on randomized complete blocks, main plots were four
types of starters and subplots were different ripening durations.
Addition of Enterococcus spp. did not significantly (P
Abstract: In this paper, we discuss the influence of the route
flexibility degree, the open rate of operations and the production type
coefficient on makespan. The flexible job-open shop scheduling
problem FJOSP (an extension of the classical job shop scheduling) is
analyzed. For the analysis of the production process, we used a
hybrid heuristic combining GRASP (greedy randomized adaptive
search procedure) with a simulated annealing algorithm. Experiments with
different levels of factors have been considered and compared. The
GRASP+SA algorithm has been tested and illustrated with results for
the serial route and the parallel one.
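The simulated-annealing component of the GRASP+SA hybrid can be sketched on a toy two-machine assignment problem, a stand-in for the FJOSP makespan objective (all parameters below are illustrative):

```python
import math
import random

def makespan(assign, times):
    """Toy two-machine makespan, a stand-in for the FJOSP objective."""
    loads = [0.0, 0.0]
    for job, machine in enumerate(assign):
        loads[machine] += times[job]
    return max(loads)

def simulated_annealing(times, steps=5000, t0=10.0, alpha=0.999, seed=0):
    """SA core of a GRASP+SA hybrid: random move, Metropolis acceptance,
    geometric cooling."""
    rng = random.Random(seed)
    assign = [rng.randrange(2) for _ in times]
    cur = makespan(assign, times)
    best, t = cur, t0
    for _ in range(steps):
        j = rng.randrange(len(times))
        assign[j] ^= 1                  # move a job to the other machine
        new = makespan(assign, times)
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            cur = new                   # accept (always if not worse)
            best = min(best, cur)
        else:
            assign[j] ^= 1              # reject: undo the move
        t *= alpha
    return best

best = simulated_annealing([3.0, 3.0, 2.0, 2.0, 2.0])
```

In a GRASP+SA hybrid, a greedy randomized construction would supply the starting `assign`, and SA would then refine it as above.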
Abstract: Group contribution methods such as the UNIFAC are
very useful to researchers and engineers involved in synthesis,
feasibility studies, design and optimization of separation processes.
They can be applied successfully to predict phase equilibrium and
excess properties in the development of chemical and separation
processes. The main focus of this work was to investigate the
possibility of absorbing selected volatile organic compounds (VOCs)
into polydimethylsiloxane (PDMS) using three selected UNIFAC
group contribution methods. Absorption followed by subsequent
stripping is the predominant available abatement technology of
VOCs from flue gases prior to their release into the atmosphere. The
original, modified and effective UNIFAC models were used in this
work. The thirteen VOCs considered in this research are pentane,
hexane, heptane, trimethylamine, toluene, xylene, cyclohexane,
butyl acetate, diethyl acetate, chloroform, acetone, ethyl methyl
ketone, and isobutyl methyl ketone. The computation was done for a
solute VOC concentration of 8.55x10-8, which is well within the
infinite dilution region. The results obtained in this study
compare very well with those published in the literature, obtained
through both measurements and predictions. The phase equilibria
obtained in this study show that PDMS is a good absorbent for the
removal of VOCs from contaminated air streams through physical
absorption.
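At infinite dilution, a UNIFAC-style activity coefficient enters the phase-equilibrium calculation through modified Raoult's law; the numerical values below are hypothetical illustrations, not the study's results:

```python
def vle_partial_pressure(gamma_inf, x, p_sat):
    """Modified Raoult's law for a dilute solute: p_i = gamma_i * x_i * Psat_i.
    A small infinite-dilution activity coefficient means a low escaping
    tendency, i.e. a good physical absorbent."""
    return gamma_inf * x * p_sat

# Hypothetical VOC-in-PDMS example: gamma_inf and Psat (kPa) are made up.
p = vle_partial_pressure(gamma_inf=3.0, x=8.55e-8, p_sat=3.79)
```

Comparing `gamma_inf` values predicted by the original, modified, and effective UNIFAC variants is then what ranks the absorbent's affinity for each VOC.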
Abstract: This paper deals with a high-order accurate Runge-Kutta
Discontinuous Galerkin (RKDG) method for the numerical solution of
the wave equation, a simple case of a linear hyperbolic partial
differential equation. A nodal DG method is used for the finite
element space discretization in x with discontinuous
approximations. The method combines two key ideas drawn from the
finite volume and finite element methods: the physics of wave
propagation is accounted for by means of Riemann problems, and
accuracy is obtained by means of high-order polynomial
approximations within the elements. A high-order accurate Low
Storage Explicit Runge-Kutta (LSERK) method is used for the
temporal discretization in t, which allows the method to be
nonlinearly stable regardless of its accuracy. The resulting RKDG
methods are stable and high-order accurate. The L1, L2, and L∞
error norm analysis shows that the scheme is highly accurate and
effective. Hence, the method is well suited to achieving
high-order accurate solutions for the scalar wave equation and
other hyperbolic equations.
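The LSERK scheme admits a compact two-register implementation; the sketch below uses the Carpenter-Kennedy coefficients as tabulated in the nodal-DG literature and checks the step on a scalar test problem (a stand-in for the DG semi-discrete system):

```python
import math
import numpy as np

# Carpenter-Kennedy five-stage, fourth-order low-storage ERK coefficients.
rk4a = [0.0,
        -567301805773.0 / 1357537059087.0,
        -2404267990393.0 / 2016746695238.0,
        -3550918686646.0 / 2091501179385.0,
        -1275806237668.0 / 842570457699.0]
rk4b = [1432997174477.0 / 9575080441755.0,
        5161836677717.0 / 13612068292357.0,
        1720146321549.0 / 2090206949498.0,
        3134564353537.0 / 4481467310338.0,
        2277821191437.0 / 14882151754819.0]

def lserk_step(u, dt, rhs):
    """One LSERK step: only the solution and one residual register are
    stored across the five stages (autonomous rhs for brevity)."""
    res = np.zeros_like(u)
    for a, b in zip(rk4a, rk4b):
        res = a * res + dt * rhs(u)
        u = u + b * res
    return u

# Sanity check on u' = -u, whose exact solution is exp(-t).
u, dt = np.array([1.0]), 0.01
for _ in range(100):
    u = lserk_step(u, dt, rhs=lambda v: -v)
```

The low-storage property matters for DG because the solution vector holds every nodal degree of freedom of every element, so avoiding the four extra stage registers of classical RK4 is a real memory saving.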
Abstract: Human heart valves diseased by congenital heart
defects, rheumatic fever, bacterial infection, or cancer may
develop stenosis or insufficiency. Treatment may be with
medication but often involves valve repair or replacement
(insertion of an artificial
heart valve). Bileaflet mechanical heart valves (BMHVs) are widely
implanted to replace the diseased heart valves, but still suffer from
complications such as hemolysis, platelet activation, tissue
overgrowth and device failure. These complications are closely related
to both flow characteristics through the valves and leaflet dynamics. In
this study, the physiological flow interacting with the moving leaflets
in a bileaflet mechanical heart valve (BMHV) is simulated with a
strongly coupled implicit fluid-structure interaction (FSI)
method, newly developed based on the Arbitrary Lagrangian-Eulerian
(ALE) approach and the dynamic mesh (remeshing) method of
FLUENT. The simulated results are in good agreement with previous
experimental studies. This study shows the applicability of the present
FSI model to the complicated physics interacting between fluid flow
and moving boundary.
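The strong (implicit) coupling strategy can be illustrated with a toy spring-mass "leaflet" driven by a hypothetical linear fluid load, sub-iterating within each time step until the interface position converges (purely illustrative, not the paper's solver):

```python
def fsi_step(x, v, dt, fluid_force, k=1.0, m=1.0, tol=1e-12, max_iter=200):
    """Strongly coupled (implicit) FSI sketch: within one time step,
    alternate between a toy 'fluid' load and a spring-mass 'structure'
    until the interface position stops changing."""
    x_guess = x
    for _ in range(max_iter):
        f = fluid_force(x_guess)                # fluid solve at current guess
        v_new = v + dt * (f - k * x_guess) / m  # structure update
        x_new = x + dt * v_new
        if abs(x_new - x_guess) < tol:          # interface converged
            return x_new, v_new
        x_guess = x_new
    return x_guess, v_new

# One step with a linear drag-like fluid load (hypothetical model).
x1, v1 = fsi_step(1.0, 0.0, dt=0.1, fluid_force=lambda x: -0.5 * x)
```

A weakly coupled scheme would exchange fluid and structure data only once per time step; the sub-iteration loop above is what makes the coupling "strong", which is important for stability when the fluid and leaflet inertias are comparable.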
Abstract: The angular distribution of Compton scattering of two
quanta originating in the annihilation of a positron with an electron
is investigated as a quantum key distribution (QKD) mechanism in
the gamma spectral range. The geometry of coincident Compton
scattering is observed on the two sides as a way to obtain partially
correlated readings on the quantum channel. We derive the noise
probability density function of a conceptually equivalent prepare
and measure quantum channel in order to evaluate the limits of the
concept in terms of the device secrecy capacity and estimate it at
roughly 1.9 bits per 1 000 annihilation events. The high error rate
is well above the tolerable error rates of the common reconciliation
protocols; therefore, the proposed key agreement protocol by public
discussion requires key reconciliation using classical error-correcting
codes. We constructed a prototype device based on readily
available monolithic detectors in the least complex setup.
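The reconciliation cost driving the low key yield is governed by the binary entropy of the error rate; a rough one-way-bound sketch with hypothetical sift and error figures (not the paper's measured values):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def secret_key_yield(n_events, sift_fraction, qber):
    """Rough one-way secret-key yield: sifted bits times (1 - h2(e)).
    All numbers passed in below are hypothetical illustrations."""
    sifted = n_events * sift_fraction
    return max(0.0, sifted * (1.0 - h2(qber)))

bits = secret_key_yield(n_events=1000, sift_fraction=0.01, qber=0.05)
```

As the error rate approaches 50% the factor (1 - h2(e)) collapses to zero, which is why high-error channels like the one analyzed here leave only a few bits per thousand events after reconciliation.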
Abstract: In this paper, we combine a probabilistic neural method with radial basis functions in order to construct the lithofacies of the wells DF01, DF02 and DF03 situated in the Triassic province of Algeria (Sahara). Lithofacies determination is a crucial problem in reservoir characterization. Our objective is to facilitate the experts' work in the geological domain and to allow them to quickly obtain the structure and the nature of the terrain around the drilling site. This study intends to design a tool that supports automatic deduction from numerical data. We used a probabilistic formalism to enhance the classification process initiated by a Self-Organizing Map procedure. From well-log data, our system produces the lithofacies of the reservoir wells concerned in a form easy to read by a geology expert, who identifies the potential for oil production at a given source and thereby forms the basis for estimating the financial returns and economic benefits.
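A probabilistic classifier with Gaussian radial-basis kernels, of the general kind described, can be sketched as follows; the well-log features and labels are purely illustrative, not the DF01-DF03 data:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.3):
    """Probabilistic (Parzen-window) classifier: each class score is the
    average Gaussian radial-basis response of that class's patterns."""
    x = np.asarray(x, float)
    scores = {}
    for label in set(train_y):
        pts = np.asarray([p for p, lab in zip(train_X, train_y)
                          if lab == label], float)
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[label] = float(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
    return max(scores, key=scores.get)

# Hypothetical normalized well-log features (e.g. gamma ray, density)
# with lithofacies labels.
X = [[0.1, 0.2], [0.0, 0.1], [0.9, 1.0], [1.0, 0.9]]
y = ["shale", "shale", "sand", "sand"]
facies = pnn_classify([0.05, 0.15], X, y)
```

In the described system, a Self-Organizing Map would first cluster the log responses, and a probabilistic layer like this one would then assign facies labels along the well depth.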
Abstract: In this paper, the effect of receive and/or transmit
antenna spacing on the performance (BER vs. SNR) of multiple-antenna
systems is determined by using an RCS (Radar Cross
Section) channel model. In this physical model, the scatterers
existing in the propagation environment are modeled by their RCS so
that the correlation of the receive signal complex amplitudes, i.e.,
both magnitude and phase, can be estimated. The proposed RCS
channel model is then compared with classical models.
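Correlated receive amplitudes of the kind the RCS model produces can be emulated generically via a Cholesky factor of the target covariance (this is a stand-in for demonstration, not the RCS model itself):

```python
import numpy as np

def correlated_rx_amplitudes(corr, n, seed=0):
    """Draw complex receive amplitudes for two antennas with a target
    (real) correlation via the Cholesky factor of the covariance matrix."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))
    w = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2.0)
    return L @ w    # unit-power amplitudes with E[h0 * conj(h1)] = corr

h = correlated_rx_amplitudes(0.7, 200_000)
rho = float(np.mean(h[0] * np.conj(h[1])).real)   # empirical correlation
```

The point of the physical RCS model in the abstract is precisely to predict this complex correlation (magnitude and phase) from the scatterer geometry rather than assuming it.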
Abstract: In IETF RFC 2002, Mobile-IP was developed to enable
laptops to maintain Internet connectivity while moving between
subnets. However, packet loss arises when switching subnets
because network connectivity is lost while the mobile host
registers with the foreign agent, and this incurs large end-to-end
packet delays. The criterion to initiate a simple and fast
full-duplex connection between the home agent and the foreign
agent, in order to reduce the roaming duration, is a very
important issue and is addressed in this paper. State-transition
Petri nets modeling the scenario-based CIA (communication
inter-agents) procedure, an extension of the basic Mobile-IP
registration process, were designed and manipulated to describe
the system in terms of discrete events. A configuration file for
the registration parameters was created during a practical setup
session on a Cisco 1760 router platform running IOS 12.3(15)T with
TFTP server software. Finally, stand-alone performance simulations
in Simulink (Matlab), within each subnet and also between subnets,
are presented, reporting better end-to-end packet delays. The
results verified the effectiveness of our Mathcad analytical
manipulation and experimental implementation, showing lower
end-to-end packet delay values for Mobile-IP with the
CIA-procedure-based early registration. Furthermore, packet flow
between subnets improved, reducing losses.
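A minimal Petri-net firing rule suffices to illustrate the discrete-event modeling used here; the places and transitions below are illustrative, not the paper's actual CIA net:

```python
def fire(marking, transition):
    """Fire a Petri-net transition if enabled: consume one token from each
    input place, add one token to each output place; None if not enabled."""
    inputs, outputs = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Toy fragment of a registration scenario (hypothetical place names):
# mobile host moved -> request at foreign agent -> request at home agent.
t_request = (("mh_moved",), ("req_at_fa",))
t_relay = (("req_at_fa",), ("req_at_ha",))
m = fire({"mh_moved": 1}, t_request)
m = fire(m, t_relay)
```

Timing annotations on such transitions are what let the state-transition net express registration latency as a sequence of discrete events.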
Abstract: Traditionally, the Internet has provided best-effort service to every user regardless of requirements. However, as the Internet becomes universally available, users demand more bandwidth, applications require more and more resources, and interest has developed in having the Internet provide some degree of Quality of Service (QoS). Although QoS is an important issue, the question of how it will be brought into the Internet has not been solved yet. Researchers, driven by rapid advances in technology, are proposing new and more desirable capabilities for the next generation of IP infrastructures. But not all applications demand the same amount of resources, nor are all users service providers. Accordingly, this paper is the first in a series that presents an architecture as a first step toward the optimization of QoS in the Internet environment, as a solution to an SMSE's problem whose objective is to provide public Internet service with certain Quality of Service expectations. The service provides new business opportunities, but also presents new challenges. We have designed and implemented a scalable service framework that supports adaptive bandwidth based on user demands, and billing based on usage and on QoS. The developed application has been evaluated, and the results show that traffic limiting works optimally, as does the distribution of excess bandwidth. However, some considerations remain, and research is currently under way in two basic areas: (i) developing and testing new transfer protocols, and (ii) developing new strategies for traffic improvements based on service differentiation.
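One common way to implement per-user traffic limiting of the kind evaluated above is a token bucket; this is a sketch under that assumption, not the framework's documented mechanism:

```python
class TokenBucket:
    """Token-bucket rate limiter: tokens accrue at `rate_bps` up to
    `burst_bits`; a packet is admitted only if enough tokens remain."""

    def __init__(self, rate_bps, burst_bits):
        self.rate, self.capacity = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, 0.0

    def allow(self, packet_bits, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

tb = TokenBucket(rate_bps=1000, burst_bits=1500)
first = tb.allow(1200, now=0.0)    # burst allowance admits the packet
second = tb.allow(1200, now=0.1)   # too soon: insufficient tokens
third = tb.allow(1200, now=1.5)    # tokens have refilled
```

Per-user buckets with different rates are also a natural hook for the usage- and QoS-based billing the framework describes.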
Abstract: Research shows that the application of probability-statistical methods, especially at the early stage of aviation Gas Turbine Engine (GTE) technical condition diagnosing, when the flight information is fuzzy, limited, and uncertain, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Networks methods, is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations) obtained on the basis of statistical fuzzy data are trained with high accuracy. To make a more adequate model of GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research on the changes in the skewness and kurtosis coefficient values shows that the distributions of GTE work parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE work parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Research on the changes in correlation coefficient values also shows their fuzzy character; therefore, for model choice, the application of Fuzzy Correlation Analysis results is proposed. When the information is sufficient, it is proposed to use a recurrent algorithm for aviation GTE technical condition identification (using Hard Computing technology) on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
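The recursive least-squares update at the core of the identification step has a standard form; the linear engine model below is a hypothetical stand-in for the GTE work-parameter models:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares (LSM) update for y ~ x^T theta.
    theta: current estimate (n, 1); P: inverse information matrix;
    lam: forgetting factor (1.0 = ordinary recursive LS)."""
    x = np.asarray(x, float).reshape(-1, 1)
    k = P @ x / (lam + float(x.T @ P @ x))        # gain vector
    theta = theta + k * (y - float(x.T @ theta))  # correct by prediction error
    P = (P - k @ (x.T @ P)) / lam
    return theta, P

# Identify a hypothetical linear engine model y = 2*x1 - 3*x2 from noisy data.
rng = np.random.default_rng(0)
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(500):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 3.0 * x[1] + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
```

Because the update is recursive, the condition-monitoring system can refine the model with each new flight measurement instead of refitting from scratch.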
Abstract: Discretization of spatial derivatives is an important
issue in meshfree methods especially when the derivative terms
contain non-linear coefficients. In this paper, various methods used
for discretization of second-order spatial derivatives are investigated
in the context of Smoothed Particle Hydrodynamics. Three popular
forms (i.e. "double summation", "second-order kernel derivation",
and "difference scheme") are studied using one-dimensional unsteady
heat conduction equation. To assess these schemes, transient response
to a step-function initial condition is considered. Due to the
parabolic nature of the heat equation, one can expect smooth and
monotone solutions. It is shown in this paper, however, that
regardless of the type of kernel function used and the size of the
smoothing radius, the double summation discretization form leads
to non-physical oscillations that persist in the solution. Results
also show that when a second-order kernel derivative is used, a
high-order kernel function must be employed such that the distance
of the kernel's inflection point from the origin is less than the
nearest particle distance. Otherwise, solutions may exhibit
oscillations near discontinuities, unlike the "difference scheme",
which unconditionally produces monotone results.
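The "difference scheme" for second derivatives can be sketched in 1-D with a Gaussian kernel on uniformly spaced particles; for a quadratic field the estimate should recover the exact value 2 (this is a generic Brookshaw-type sketch, not the paper's exact formulation):

```python
import numpy as np

def sph_laplacian_1d(xs, A, h):
    """'Difference scheme' estimate of d2A/dx2 in 1-D SPH with a Gaussian
    kernel on uniformly spaced particles (m_j / rho_j = dx)."""
    dx = xs[1] - xs[0]
    est = np.zeros_like(A)
    for i in range(len(xs)):
        r = xs[i] - xs
        mask = r != 0.0
        rj = np.abs(r[mask])
        W = np.exp(-(rj / h) ** 2) / (h * np.sqrt(np.pi))
        dWdr = -2.0 * rj / h ** 2 * W            # radial kernel derivative
        # 2 * sum_j (m_j/rho_j) (A_i - A_j) (dW/dr) / |r_ij|
        est[i] = 2.0 * np.sum(dx * (A[i] - A[mask]) * dWdr / rj)
    return est

xs = np.linspace(-1.0, 1.0, 401)
vals = sph_laplacian_1d(xs, xs ** 2, h=4 * (xs[1] - xs[0]))
mid = vals[200]    # exact second derivative of x**2 is 2
```

Because the scheme differences particle values pairwise before applying the (first) kernel derivative, it inherits the monotone behavior discussed above, in contrast to the double-summation form.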
Abstract: In this paper, an automatic QRS-complex detection
algorithm was applied for analyzing ECG recordings, and five
criteria for diagnosing dangerous arrhythmias were applied in a
protocol-type automatic arrhythmia diagnosis system. The detection
algorithm identified the distribution of QRS complexes in the ECG
recordings and related information, such as heart rate and RR
interval. In this investigation, twenty sampled ECG recordings of
patients with different pathologic conditions were collected for
off-line analysis. A combined application of four digital filters
was proposed as pre-processing to improve the ECG signals and
raise the QRS-complex detection rate. Both hardware filters and
digital filters were applied to eliminate the different types of
noise mixed with the ECG recordings. Then, the automatic
QRS-complex detection algorithm was applied to verify the
distribution of QRS complexes. Finally, quantitative clinical
criteria for diagnosing arrhythmias were programmed into a
practical application for automatic arrhythmia diagnosis as a
post-processor. The results of the automatic dangerous-arrhythmia
diagnoses were compared with off-line diagnoses by experienced
clinical physicians; the comparison showed that the automatic
dangerous-arrhythmia diagnosis achieved a 95% match with the
experienced physicians' diagnoses.
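A much-simplified threshold-based QRS detector (not the paper's four-filter pipeline) illustrates how detected beat positions yield RR intervals and heart rate:

```python
import numpy as np

def detect_qrs(ecg, fs, threshold, refractory_s=0.2):
    """Threshold-crossing QRS detector with a refractory period; returns
    sample indices of detected beats and the mean heart rate (bpm)."""
    refractory = int(refractory_s * fs)
    beats, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        # Local maximum above threshold, outside the refractory window.
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= refractory:
                beats.append(i)
                last = i
    rr = np.diff(beats) / fs                 # RR intervals in seconds
    hr = 60.0 / float(np.mean(rr)) if len(rr) else 0.0
    return beats, hr

# Synthetic ECG: unit spikes at 1 Hz (60 bpm) on a flat baseline.
fs = 250
ecg = np.zeros(10 * fs)
ecg[::fs] = 1.0
beats, hr = detect_qrs(ecg, fs, threshold=0.5)
```

Arrhythmia criteria such as tachycardia or pause detection are then simple predicates over the RR-interval sequence produced by the detector.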