Abstract: Image compression plays a vital role in today's communication. The limited allocated bandwidth slows communication, so to improve the transmission rate within that bandwidth, image data must be compressed before transmission. There are two basic types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression achieves higher compression than lossless compression, the accuracy of the retrieved image is lower in the lossy case. The JPEG and JPEG 2000 image compression systems use Huffman coding. The JPEG 2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet-based coding schemes. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis over different images is proposed, comparing the EZW coding system with an error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared to the existing EZW coding system.
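As a generic illustration of the ANN-based approach (not the authors' EZW/backpropagation system), a single-hidden-layer autoencoder can compress image blocks through a narrow bottleneck trained with error backpropagation. Block size, network width, learning rate, and the random stand-in data below are all hypothetical:

```python
import numpy as np

# Minimal autoencoder sketch: 16-pixel blocks compressed to 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 16, 4                      # 4:1 compression of 4x4 blocks
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # encoder weights
W2 = rng.normal(0, 0.1, (n_in, n_hid))   # decoder weights
X = rng.random((256, n_in))              # stand-in for normalized image blocks

def forward(X):
    H = np.tanh(X @ W1.T)                # compressed (hidden) representation
    Y = H @ W2.T                         # reconstructed block
    return H, Y

lr = 0.05
for _ in range(500):                     # error backpropagation loop
    H, Y = forward(X)
    E = Y - X                            # reconstruction error
    gW2 = E.T @ H / len(X)               # gradient w.r.t. decoder weights
    gH = (E @ W2) * (1 - H**2)           # backprop through tanh
    gW1 = gH.T @ X / len(X)              # gradient w.r.t. encoder weights
    W2 -= lr * gW2
    W1 -= lr * gW1

_, Y = forward(X)
mse = float(np.mean((Y - X) ** 2))       # reconstruction error after training
```

The hidden activations H are what would be quantized and transmitted; the decoder half reconstructs the blocks at the receiver.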
Abstract: The performance of high-resolution schemes is investigated for unsteady, inviscid, compressible multiphase flows. An Eulerian diffuse-interface approach has been chosen for the simulation of multicomponent flow problems. The reduced five-equation and seven-equation models are used with the HLL and HLLC approximations. The authors demonstrate the advantages and disadvantages of both the seven-equation and five-equation models by studying their performance with the HLL and HLLC algorithms on a simple test case. The seven-equation model is based on the two-pressure, two-velocity concept of Baer–Nunziato [10], while the five-equation model is based on the mixture velocity and pressure. Numerical evaluations of the two variants of Riemann solvers have been conducted for the classical one-dimensional air–water shock tube and compared with the analytical solution for error analysis.
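For reference, the standard HLL approximate flux used in such solvers (a textbook form, with $S_L$ and $S_R$ the fastest left- and right-going wave-speed estimates) is

$$
F^{\mathrm{HLL}} =
\begin{cases}
F_L, & 0 \le S_L,\\[4pt]
\dfrac{S_R F_L - S_L F_R + S_L S_R\,(U_R - U_L)}{S_R - S_L}, & S_L < 0 < S_R,\\[4pt]
F_R, & S_R \le 0,
\end{cases}
$$

while HLLC additionally resolves the contact wave by splitting the intermediate region into two star states, which is what restores sharp material interfaces in multiphase problems.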
Abstract: Determining depth of anesthesia is a challenging problem in the context of biomedical signal processing. Various methods have been suggested to determine a quantitative index of depth of anesthesia, but most of these methods suffer from high sensitivity during surgery. A novel method based on the energy scattering of samples in the wavelet domain is suggested to represent the basic content of the electroencephalogram (EEG) signal. In this method, the EEG signal is first decomposed into different sub-bands; the samples are then squared and the energy of the sample sequence is computed across each scale and time, which is normalized, and finally the entropy of the resulting sequences is proposed as a reliable index. Empirical results show that applying the proposed method to EEG signals can classify the awake, moderate and deep anesthesia states similarly to BIS.
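The pipeline described above (sub-band decomposition, squared samples, normalized energies, entropy) can be sketched as follows. The Haar wavelet, the number of levels, and base-2 entropy are illustrative assumptions, not necessarily the authors' choices:

```python
import numpy as np

def haar_subbands(x, levels=3):
    """One possible multilevel Haar decomposition (illustrative); returns
    the detail sub-band of each level plus the final approximation."""
    bands = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                               # pad to even length
            a = np.append(a, a[-1])
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        bands.append(detail)
        a = approx
    bands.append(a)
    return bands

def energy_entropy_index(x, levels=3):
    """Square the samples in each sub-band, normalize the energies across
    scales, and return the Shannon entropy of that distribution as index."""
    bands = haar_subbands(x, levels)
    energies = np.array([np.sum(b**2) for b in bands])   # squared samples
    p = energies / energies.sum()                        # normalized energy
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

A flat signal concentrates all energy in one band (entropy 0), while a broadband signal spreads energy across scales (entropy near its maximum); the index tracks this spread over an EEG epoch.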
Abstract: This paper presents an adaptive motion estimator that can be dynamically reconfigured with the best algorithm depending on variations in the video content during the lifetime of a running application. The 4-Step Search (4SS) and Gradient Search (GS) algorithms are integrated in the estimator for use with rapid and slow video sequences, respectively. The Full Search Block Matching (FSBM) algorithm has also been integrated for use with video sequences that are not real-time oriented. In order to efficiently reduce the computational cost while achieving better visual quality with low power cost, the proposed motion estimator is based on a Variable Block Size (VBS) scheme that uses only the 16x16, 16x8, 8x16 and 8x8 modes. Experimental results show that the adaptive motion estimator gives better results in terms of Peak Signal-to-Noise Ratio (PSNR), computational cost, FPGA occupied area, and dissipated power relative to the most popular variable block size schemes presented in the literature.
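The FSBM baseline mentioned above admits a compact sketch: exhaustive SAD (sum of absolute differences) minimization over a search window, with PSNR as the quality metric. The block size and search range below are illustrative, not the paper's configuration:

```python
import numpy as np

def full_search(ref, cur, bx, by, bsize=8, srange=4):
    """Exhaustive block matching: find the motion vector minimizing the
    SAD between a block of `cur` and candidates in `ref`."""
    block = cur[by:by+bsize, bx:bx+bsize].astype(int)
    best, best_sad = (0, 0), None
    H, W = ref.shape
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > H or x + bsize > W:
                continue                     # candidate outside the frame
            cand = ref[y:y+bsize, x:x+bsize].astype(int)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio used to rate reconstruction quality."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak**2 / mse)
```

Fast schemes such as 4SS and GS visit only a subset of these candidate positions, trading a small PSNR loss for a large reduction in SAD evaluations.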
Abstract: Three novel and significant contributions are made in this paper. First, the non-recursive formulation of Haar connection coefficients, pioneered by the present authors, is presented; it can be computed very efficiently and avoids stack and memory overflows. Second, a generalized approach for the state analysis of singular bilinear time-invariant (TI) and time-varying (TV) systems is presented, vis-à-vis the diversified and complex works reported by different authors. Third, a generalized approach for parameter estimation of bilinear TI and TV systems is also proposed. The unified framework of the proposed method is significant in that the digital hardware, once designed, can perform the complex tasks of state analysis and parameter estimation for different types of bilinear systems single-handedly. The simplicity, effectiveness and generalized nature of the proposed method are established by applying it to different types of bilinear systems for the two tasks.
Abstract: Freeze concentration crystallises the water molecules out as ice and leaves behind a highly concentrated solution. In conventional suspension freeze concentration, where ice crystals form as a suspension in the mother liquor, separation of the ice is difficult. The size of the ice crystals is also very limited, which requires the use of scraped-surface heat exchangers; these are very expensive and account for approximately 30% of the capital cost. This research uses a newer method, progressive freeze concentration, in which ice crystals form as a layer on a designed heat-exchanger surface. In this particular study, a helically structured copper crystallisation chamber was designed and fabricated. The effect of two operating conditions, circulation flowrate and coolant temperature, on the performance of the newly designed crystallisation chamber was investigated. The performance of the design was evaluated by the effective partition constant, K, calculated from the volumes and concentrations of the solid and liquid phases. The system was also monitored by a data acquisition tool to follow the temperature profile throughout the process. The experimental work showed that a higher flowrate resulted in a lower K, which translates into higher efficiency; efficiency was highest at 1000 ml/min. The process also gave the highest efficiency at a coolant temperature of -6 °C.
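The effective partition constant used above is commonly defined as the ratio of solute concentration in the solid (ice) phase to that in the liquid phase, K = C_S / C_L, so a smaller K means less solute trapped in the growing ice layer. A minimal calculation with hypothetical concentrations (not the paper's data):

```python
def effective_partition_constant(c_solid, c_liquid):
    """K = C_S / C_L; lower K means less solute trapped in the ice,
    i.e. a more efficient freeze-concentration run."""
    if c_liquid <= 0:
        raise ValueError("liquid concentration must be positive")
    return c_solid / c_liquid

# Hypothetical solute concentrations in g/L (illustrative only):
K_fast = effective_partition_constant(2.0, 40.0)   # high-flowrate run
K_slow = effective_partition_constant(8.0, 40.0)   # low-flowrate run
```

Under this definition K = 0 would be perfect solute rejection and K = 1 no concentration at all, which is why the lower K observed at 1000 ml/min corresponds to the highest efficiency.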
Abstract: Software development has experienced remarkable progress in the past decade. However, due to the rising complexity and magnitude of projects, development productivity has not improved consistently. By analyzing the latest ISBSG data repository of 4106 projects, we discovered that software development productivity actually underwent irregular variations between 1995 and 2005. Considering the factors significant to productivity, we found its variations are primarily caused by variations in average team size and the unbalanced use of the less productive 3GL languages.
Abstract: In this paper, we use Radial Basis Function Networks (RBFN) to solve the problem of environmental interference cancellation in speech signals. We show that the Second-Order Thin-Plate Spline (SOTPS) kernel cancels the interferences effectively. For comparison, we repeat our experiments with the two most commonly used RBFN kernels: the Gaussian and First-Order TPS (FOTPS) basis functions. The speech signals used here were taken from the OGI Multi-Language Telephone Speech Corpus and were corrupted with six types of environmental noise from the NOISEX-92 database. Experimental results show that the SOTPS kernel considerably outperforms the Gaussian and FOTPS functions on the speech interference cancellation problem.
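The three kernels compared above are functions of the radius r = ||x − c|| to an RBF center. Assuming the common polyharmonic forms (the paper's exact normalization may differ): Gaussian exp(−r²/2σ²), first-order TPS r² log r, and second-order TPS r⁴ log r:

```python
import numpy as np

def _safe_log(r):
    """log(r) with log(0) treated as 0, so r**n * log(r) -> 0 at the origin."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    mask = r > 0
    out[mask] = np.log(r[mask])
    return out

def gaussian(r, sigma=1.0):
    """Gaussian RBF kernel: local, exponentially decaying response."""
    return np.exp(-np.asarray(r, dtype=float)**2 / (2 * sigma**2))

def fotps(r):
    """First-order thin-plate spline: r^2 log r (grows away from center)."""
    return np.asarray(r, dtype=float)**2 * _safe_log(r)

def sotps(r):
    """Second-order thin-plate spline: r^4 log r (assumed form)."""
    return np.asarray(r, dtype=float)**4 * _safe_log(r)
```

Unlike the Gaussian, the thin-plate splines are global (unbounded with r) and need no width parameter, which is one practical reason to compare them on the same cancellation task.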
Abstract: This paper presents a longitudinal quasi-linear model of the ADMIRE model. The ADMIRE model is a nonlinear model of an aircraft flying at high angle of attack, so it cannot be approximated as a linear system. In this paper, to obtain the longitudinal quasi-linear model of the ADMIRE, a state transformation based on differentiable functions of the non-scheduling states and control inputs is performed, with the goal of removing any nonlinear terms not dependent on the scheduling parameter. Since this approach needs no linear approximation and obtains exact transformations of the nonlinear states, it is considered appropriate for establishing the mathematical model of the ADMIRE. To verify this conclusion, simulation experiments were performed; the results show that the quasi-linear model is sufficiently accurate.
Abstract: People detection from images has a variety of applications, such as video surveillance and driver assistance systems, but it is still a challenging task, and more difficult in crowded environments such as shopping malls, in which occlusion of the lower parts of the human body often occurs. The lack of full-body information requires more effective features than common ones such as HOG. In this paper, new features are introduced that exploit the global self-symmetry (GSS) characteristic of head-shoulder patterns. The features encode the similarity or difference of color histograms and oriented-gradient histograms between two vertically symmetric blocks. These domain-specific features are rapid to compute from integral images in the Viola-Jones cascade-of-rejecters framework. The proposed features are evaluated on our own head-shoulder dataset that, in part, consists of the well-known INRIA pedestrian dataset. Experimental results show that the GSS features are effective in reducing false alarms, and that the gradient GSS features are preferred more often than the color GSS ones in feature selection.
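As a hypothetical illustration of the block-symmetry measure described above (not the authors' exact feature encoding), the colour-histogram part can be sketched as a histogram intersection between a block and its mirror across the image's vertical axis; bin count and block layout are illustrative:

```python
import numpy as np

def block_histogram(block, bins=16):
    """Normalized intensity histogram of an image block."""
    h, _ = np.histogram(block, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def gss_similarity(img, y, x, w, h, bins=16):
    """Histogram intersection between a block and its mirror block on the
    opposite side of the vertical symmetry axis (1.0 = identical)."""
    W = img.shape[1]
    left = img[y:y+h, x:x+w]
    right = img[y:y+h, W-x-w:W-x][:, ::-1]   # mirrored counterpart block
    hl = block_histogram(left, bins)
    hr = block_histogram(right, bins)
    return float(np.minimum(hl, hr).sum())
```

High similarity for symmetric block pairs is exactly the cue that head-shoulder patterns provide even when the lower body is occluded.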
Abstract: The switching lag-time and the voltage drop across the power devices cause serious waveform distortions and fundamental voltage drop in pulse-width-modulated inverter output. These phenomena are conspicuous when both the output frequency and the voltage are low. To estimate the output voltage from the PWM reference signal, it is essential to take these imperfections into account and to correct them. In this paper, an on-line compensation method is presented. It needs only three simple blocks added to the ideal reference voltages. This method requires neither an additional hardware circuit nor off-line experimental measurements. The paper includes experimental results that demonstrate the validity of the proposed method. It is finally applied to an indirect vector-controlled induction machine and implemented using a dSPACE card.
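A common feed-forward form of such a compensation block (an illustrative first-order model, not necessarily the paper's exact blocks) adds back the average voltage lost per switching period, with its sign following the phase current:

```python
import numpy as np

def deadtime_compensation(v_ref, i_phase, t_dead, t_switch, v_dc, v_drop=0.0):
    """Add the average voltage lost to the switching lag-time (and, optionally,
    the device voltage drop) back onto the ideal reference voltage.
    Illustrative model: dv = sign(i) * (Td/Ts * Vdc + Vdrop)."""
    dv = np.sign(i_phase) * (t_dead / t_switch * v_dc + v_drop)
    return v_ref + dv

# Hypothetical values: 2 us lag-time, 100 us switching period, 300 V DC link.
v_comp = deadtime_compensation(100.0, 1.0, 2e-6, 100e-6, 300.0)
```

Because the correction depends only on the current sign and known inverter constants, it can run on-line without extra hardware, matching the approach described above.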
Abstract: A Bloom filter is a probabilistic, memory-efficient data structure designed to answer rapidly whether an element is present in a set. It can report with certainty that an element is not in the set, whereas presence is reported only with a certain probability. The trade-off in using a Bloom filter is a configurable risk of false positives. The odds of a false positive can be made very low if the number of hash functions is sufficiently large. For spam detection, a weight is attached to each set of elements: the spam weight of a word is a measure used to rate the e-mail, and each word is assigned to a Bloom filter based on its weight. The proposed work introduces an enhanced Bloom filter concept called the Bin Bloom Filter (BBF). The performance of the BBF over the conventional Bloom filter is evaluated under various optimization techniques. Real and synthetic data sets are used for experimental analysis, and results are presented for bin sizes 4, 5, 6 and 7. Analysis of the results shows that the BBF using heuristic techniques performs better than the traditional Bloom filter in spam detection.
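A minimal sketch of the plain Bloom filter behaviour described above (the BBF bin-weighting layer is not reproduced); the array size, hash count, and SHA-256-based hashing are illustrative choices:

```python
import hashlib

class BloomFilter:
    """Bit array of size m probed by k hash functions.
    __contains__ returning False is definitive; True is only probable."""
    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k positions by salting one hash function with an index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))
```

In the spam-detection setting described above, each bin would hold one such filter, with a word routed to the filter whose bin matches its spam weight.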
Abstract: Missing data poses many analysis challenges. In the case of a complex survey design, in addition to dealing with missing data, researchers need to account for the sampling design to achieve useful inferences. Methods for incorporating sampling weights in neural network imputation were investigated to account for complex survey designs. An estimate of the variance that accounts for both the imputation uncertainty and the sampling design using neural networks is provided. A simulation study was conducted to compare estimation results based on complete-case analysis, multiple imputation using a Markov chain Monte Carlo method, and neural network imputation. Furthermore, a public-use dataset is used as an example to illustrate neural network imputation under a complex survey design.
Abstract: This study was initiated with a three-pronged objective: first, to identify the relationship between Technological Competency factors (Technical Capability, Firm Innovativeness and E-Business Practices) and professional service firms' business performance; second, to investigate the predictors of professional service firms' business performance; and finally, to evaluate the predictors of business performance according to the type of professional service firm. A survey questionnaire was deployed to collect empirical data. The questionnaire was distributed to the owners of professional small and medium-sized service enterprises in the accounting, legal, engineering and architecture sectors. Analysis showed that all three Technology Competency factors have a moderate effect on business performance. In addition, the regression models indicate that technical capability is the most influential determinant of business performance, followed by e-business practices and firm innovativeness. Accordingly, the main predictor of business performance for all types of firms is technical capability.
Abstract: This paper deals with a high-order accurate Runge-Kutta Discontinuous Galerkin (RKDG) method for the numerical solution of the wave equation, one of the simplest cases of a linear hyperbolic partial differential equation. A nodal DG method is used for the finite element space discretization in 'x' by discontinuous approximations. This method combines two key ideas, drawn from the finite volume and finite element methods: the physics of wave propagation is accounted for by means of Riemann problems, and accuracy is obtained by means of high-order polynomial approximations within the elements. A high-order accurate Low-Storage Explicit Runge-Kutta (LSERK) method is used for the temporal discretization in 't', which allows the method to be nonlinearly stable regardless of its accuracy. The resulting RKDG methods are stable and high-order accurate. Analysis of the L1, L2 and L∞ error norms shows that the scheme is highly accurate and effective. Hence, the method is well suited to achieving high-order accurate solutions of the scalar wave equation and other hyperbolic equations.
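Concretely, the scalar model problem referred to above is the 1-D wave equation, which is typically rewritten as a first-order hyperbolic system before the DG space discretization is applied (a standard reduction, with $c$ the wave speed):

$$
u_{tt} = c^2 u_{xx}
\quad\Longleftrightarrow\quad
\frac{\partial}{\partial t}\begin{pmatrix} v \\ w \end{pmatrix}
+ \frac{\partial}{\partial x}\begin{pmatrix} -c\,w \\ -c\,v \end{pmatrix} = 0,
\qquad v = u_t,\; w = c\,u_x,
$$

with the reported error norms defined as

$$
\|e\|_{L^1} = \int_\Omega |e|\,dx,\qquad
\|e\|_{L^2} = \Big(\int_\Omega e^2\,dx\Big)^{1/2},\qquad
\|e\|_{L^\infty} = \max_{x\in\Omega} |e(x)|.
$$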
Abstract: The angular distribution of Compton scattering of two
quanta originating in the annihilation of a positron with an electron
is investigated as a quantum key distribution (QKD) mechanism in
the gamma spectral range. The geometry of coincident Compton
scattering is observed on the two sides as a way to obtain partially
correlated readings on the quantum channel. We derive the noise
probability density function of a conceptually equivalent prepare
and measure quantum channel in order to evaluate the limits of the
concept in terms of the device secrecy capacity and estimate it at
roughly 1.9 bits per 1000 annihilation events. The high error rate
is well above the tolerable error rates of the common reconciliation
protocols; therefore, the proposed key agreement protocol by public
discussion requires key reconciliation using classical error-correcting
codes. We constructed a prototype device based on the readily
available monolithic detectors in the least complex setup.
Abstract: In this paper, we combine a probabilistic neural method with radial basis functions in order to construct the lithofacies of the wells DF01, DF02 and DF03, situated in the Triassic province of Algeria (Sahara). Lithofacies determination is a crucial problem in reservoir characterization. Our objective is to facilitate the experts' work in the geological domain and to allow them to obtain quickly the structure and nature of the terrain around the drilling site. This study aims to design a tool that supports automatic deduction from numerical data. We use a probabilistic formalism to enhance the classification process initiated by a Self-Organizing Map procedure. From well-log data, our system gives the lithofacies of the reservoir wells concerned in a form easy to read by a geology expert, who identifies the potential for oil production at a given source and so forms the basis for estimating the financial returns and economic benefits.
Abstract: The face and facial expressions play essential roles in interpersonal communication. Most current work on facial expression recognition attempts to recognize a small set of prototypic expressions, such as happiness, surprise, anger, sadness, disgust and fear. However, most human emotions are communicated by changes in one or two discrete features. In this paper, we develop a facial expression synthesis system based on tracking facial characteristic points (FCPs) in frontal image sequences. Selected FCPs are automatically tracked using cross-correlation-based optical flow. The proposed synthesis system uses a simple deformable facial feature model with a small set of control points that can be tracked in the original facial image sequences.
Abstract: In this work, a special case of the image super-resolution problem, in which the only motion is global translational motion and the blurs are shift-invariant, is investigated. The necessary conditions for exact reconstruction of the original image using finite impulse response reconstruction filters are developed. Given that these conditions are satisfied, a method for exact super-resolution is presented and simulation results are shown.
Abstract: In this paper, the effect of receive and/or transmit antenna spacing on the performance (BER vs. SNR) of multiple-antenna systems is determined using an RCS (Radar Cross Section) channel model. In this physical model, the scatterers present in the propagation environment are modeled by their RCS so that the correlation of the received signal's complex amplitudes, i.e., both magnitude and phase, can be estimated. The proposed RCS channel model is then compared with classical models.