Abstract: In this paper, the two-dimensional staggered-grid interface pressure (SGIP) model is generalized to three dimensions. For this purpose, various surface tension force models for interfacial flows are investigated and compared with each other. The volume-of-fluid (VOF) method is used to track the interface. To demonstrate the capability of the SGIP model for three-dimensional flows in comparison with other models, pressure contours, maximum spurious velocities, spurious-velocity norms, and pressure-jump errors for a motionless liquid drop and a motionless gas bubble are calculated using the different models. The SGIP model produces the smallest maximum and norm spurious velocities in comparison with the CSF, CSS, and PCIL models. Additionally, the new model yields more accurate results when calculating the pressure jump across the interface, generated by the surface tension force, for a motionless liquid drop or gas bubble.
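The analytic benchmark against which such pressure-jump and spurious-current tests are usually measured is the Young–Laplace relation; a minimal sketch (the fluid values are hypothetical, not from the paper):

```python
def laplace_pressure_jump(sigma, radius, dim=3):
    """Young-Laplace pressure jump across a static spherical interface.

    sigma  : surface tension coefficient (N/m)
    radius : drop/bubble radius (m)
    dim    : 3 for a spherical drop (curvature 2/R), 2 for a circular/cylindrical one (1/R)
    """
    curvature = (dim - 1) / radius
    return sigma * curvature

# Hypothetical water-air values: sigma = 0.073 N/m, R = 1 mm -> 146 Pa jump in 3-D
dp = laplace_pressure_jump(0.073, 1e-3)
```

A motionless drop should reproduce exactly this jump with zero velocity; any residual flow is the spurious current the abstract compares across models.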
Abstract: We present a novel scheme to evaluate sinusoidal functions with low complexity and high precision using cubic spline interpolation. To this end, two different approaches are proposed to find the interpolating polynomial of sin(x) within the range [-π, π]. The first deals with only a single data point, while the other uses two, to keep the realization cost as low as possible. An approximation error optimization technique for cubic spline interpolation is introduced next and is shown to increase the interpolator's accuracy without increasing the complexity of the associated hardware. The architectures for the proposed approaches are also developed; they exhibit implementation flexibility with low power requirements.
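The core idea above, approximating sin(x) on [-π, π] by cubic spline interpolation, can be illustrated with a plain natural cubic spline on evenly spaced knots (a generic sketch, not the paper's single-point or two-point low-complexity schemes; the knot count is an arbitrary choice):

```python
import math

def natural_cubic_spline(xs, ys):
    """Build the natural cubic spline through (xs, ys); returns an evaluator."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm for the tridiagonal system; natural BCs fix c[0] = c[n] = 0
    l, mu, z = [1.0] * (n + 1), [0.0] * (n + 1), [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2.0 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    b, c, d = [0.0] * n, [0.0] * (n + 1), [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2.0 * c[j]) / 3.0
        d[j] = (c[j + 1] - c[j]) / (3.0 * h[j])

    def evaluate(x):
        j = 0
        while j < n - 1 and x > xs[j + 1]:
            j += 1
        dx = x - xs[j]
        return ys[j] + b[j] * dx + c[j] * dx ** 2 + d[j] * dx ** 3

    return evaluate

# 9 evenly spaced knots on [-pi, pi]; sin''(+-pi) = 0, so natural BCs fit sin well
knots = [-math.pi + k * math.pi / 4 for k in range(9)]
spline = natural_cubic_spline(knots, [math.sin(x) for x in knots])
err = max(abs(spline(-math.pi + t * 2 * math.pi / 1000) -
              math.sin(-math.pi + t * 2 * math.pi / 1000)) for t in range(1001))
```

Even this coarse 9-knot spline keeps the worst-case error around the 10^-3 level, which is the kind of accuracy/complexity trade-off the proposed architectures optimize further.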
Abstract: Echocardiography is one of the most common diagnostic tests used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D echocardiography (2DE) is to address two major problems of the imaged anatomical structures: speckle noise and low quality. Speckle noise reduction is therefore an important pre-processing step for reducing distortion effects in 2DE image segmentation. In this paper, we examine common filters based on low-pass spatial smoothing, such as the Mean, Gaussian, and Median filters; the Laplacian filter is used as a high-pass sharpening filter. A comparative analysis tests the effectiveness of these filters when applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quality measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR), are used to evaluate filter performance quantitatively on the enhanced output image.
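The three quality measures can be sketched as follows for 8-bit images (pure-Python versions of the commonly used definitions; the abstract does not spell out its exact formulas):

```python
import math

def rmse(ref, img):
    """Root mean square error between two equal-sized pixel sequences."""
    n = len(ref)
    return math.sqrt(sum((r - x) ** 2 for r, x in zip(ref, img)) / n)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak is the maximum pixel value."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)

def snr(ref, img):
    """Signal-to-noise ratio in dB: signal power over error power."""
    sig = sum(r * r for r in ref)
    noise = sum((r - x) ** 2 for r, x in zip(ref, img))
    return float("inf") if noise == 0 else 10.0 * math.log10(sig / noise)

# Toy 4-pixel example (hypothetical values)
r_val = rmse([100, 100, 100, 100], [103, 104, 96, 97])
p_val = psnr([100, 100, 100, 100], [103, 104, 96, 97])
s_val = snr([100, 100, 100, 100], [103, 104, 96, 97])
```

For 2-D images the same formulas apply after flattening the pixel arrays.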
Abstract: An immunomodulator bioproduct is prepared in a batch bioprocess with a modified bacterium, Pseudomonas aeruginosa. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract and sodium chloride. The optimal bioprocess parameters were determined: temperature 37 °C, agitation speed 300 rpm, aeration rate 40 L/min, pressure 0.5 bar, Dow Corning Antifoam M (max. 4% of the medium volume), and duration 6 hours. Such bioprocesses are considered difficult to control because their dynamic behavior is highly nonlinear and time-varying. The aim of the paper is to present and compare different models based on experimental data.
The analysis criteria were modeling error and convergence rate. The estimated values and the modeling analysis were obtained using TableCurve 2D.
The preliminary conclusions indicate Andrews's model, with a maximum specific growth rate of the bacterium in the range of 0.8 h⁻¹.
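Andrews's substrate-inhibition kinetics can be sketched as follows; μmax = 0.8 h⁻¹ follows the abstract, while the saturation and inhibition constants Ks and Ki are hypothetical placeholders:

```python
def andrews_mu(S, mu_max=0.8, Ks=0.5, Ki=8.0):
    """Andrews (substrate-inhibition) specific growth rate in h^-1.

    S : substrate concentration (e.g. g/L); Ks, Ki are hypothetical constants here.
    The rate peaks at S = sqrt(Ks * Ki) and falls off at higher substrate levels.
    """
    return mu_max * S / (Ks + S + S * S / Ki)
```

With these placeholder constants the rate peaks at S = sqrt(0.5 * 8) = 2, and the peak value stays below μmax, which is the qualitative behavior that distinguishes Andrews's model from simple Monod kinetics.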
Abstract: In this paper, we propose a modified version of the Constant Modulus Algorithm (CMA) tailored for the blind Decision Feedback Equalizer (DFE) of first-order Markovian time-varying channels. The proposed NonStationary CMA (NSCMA) is designed so that it explicitly takes into account the Markovian structure of the channel nonstationarity. Hence, unlike the classical CMA, the NSCMA is not blind with respect to the channel time variations. This greatly helps the equalizer in the case of realistic channels, and avoids frequent transmission of training sequences.
This paper develops a theoretical analysis of the steady-state performance of the CMA and the NSCMA for DFEs in a time-varying context, deriving approximate expressions for the mean square errors. We prove that in the steady state, the NSCMA exhibits better performance than the classical CMA. These new results are confirmed by simulation.
Through an experimental study, we demonstrate that the NSCMA-DFE reduces the Bit Error Rate (BER), and that the BER improvement grows with the severity of the channel time variations.
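For reference, the classical CMA baseline that the NSCMA modifies updates the equalizer taps along the stochastic gradient of E[(|y|² − R₂)²]; a minimal sketch with a linear single-tap equalizer (not a DFE; the step size and channel gain are hypothetical):

```python
import itertools

def cma_step(w, x_line, mu, R2=1.0):
    """One constant-modulus update: w_k <- w_k - mu*(|y|^2 - R2)*y*conj(x_{n-k})."""
    y = sum(wk * xk for wk, xk in zip(w, x_line))
    e = abs(y) ** 2 - R2
    return [wk - mu * e * y * xk.conjugate() for wk, xk in zip(w, x_line)], y

# Toy run: unit-modulus QPSK through a flat gain-0.5 channel (values hypothetical);
# a single-tap equalizer should converge to |w| = 2 so that |y| matches R2 = 1.
qpsk = itertools.cycle([1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j])
w = [1.0 + 0j]
for _ in range(500):
    s = next(qpsk)
    w, y = cma_step(w, [0.5 * s], mu=0.1)
```

Note the update uses only the received samples and the constant modulus R₂, never the transmitted symbols, which is what makes the algorithm blind; the NSCMA additionally exploits the Markovian channel model.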
Abstract: Visualizing sound and noise often helps us determine an appropriate control for source localization. Near-field acoustic holography (NAH) is a powerful tool for this ill-posed problem. In practice, however, due to the small finite aperture size, discrete-Fourier-transform (FFT) based NAH cannot predict the active region of interest (AROI) near the edges of the plane. A few theoretical approaches have been proposed for solving the finite aperture problem, but most of them are not well suited to practical implementation, especially near the edges of the source. In this paper, a zip-stuffing extrapolation approach with a 2D Kaiser window is suggested. It operates in the complex wavenumber space to localize the predicted sources. We numerically construct a test environment with touch-impact databases to evaluate sound source localization. It is observed that zip-stuffing aperture extrapolation and the 2D window with evanescent components provide greater accuracy, especially for the small aperture and its derivatives.
Abstract: This paper presents an approach to unequal error protection (UEP) of facial features in personal ID image coding. We consider UEP strategies for the efficient progressive transmission of embedded image codes over noisy channels. The new method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm and a UEP technique with a defined region of interest (ROI). Here, the ROI corresponds to the facial features within the personal ID image. ROI techniques are important in applications where different parts of the image differ in importance: in ROI coding, a chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques and by encoding the LL band separately. In our proposed method, the image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission has proven well suited to low bit rate applications, producing better output quality for the ROI of the compressed image. The experimental results verify the effectiveness of the design. Our results compare the UEP of image transmission, with an ROI defined on the facial features, against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.
Abstract: In this paper, we explore the applicability of the Sinc-
Collocation method to a three-dimensional (3D) oceanography model.
The model describes a wind-driven current with depth-dependent
eddy viscosity in the complex-velocity system. In general, the
Sinc-based methods excel over other traditional numerical methods due to their exponentially decaying errors, rapid convergence, and ability to handle problems with singularities at the endpoints.
Together with these advantages, the Sinc-Collocation approach that
we utilize exploits first derivative interpolation, whose integration
is much less sensitive to numerical errors. We bring up several
model problems to prove the accuracy, stability, and computational
efficiency of the method. The approximate solutions determined by
the Sinc-Collocation technique are compared to exact solutions and
those obtained by the Sinc-Galerkin approach in earlier studies. Our
findings indicate that the Sinc-Collocation method outperforms other
Sinc-based methods in past studies.
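The exponential accuracy that motivates Sinc-based methods can be illustrated with the truncated Whittaker cardinal (Sinc) interpolant of a smooth, rapidly decaying function (the step size and truncation level are illustrative choices, not the paper's collocation scheme for the oceanography model):

```python
import math

def sinc(t):
    """Normalized sinc: sin(pi t)/(pi t), with sinc(0) = 1."""
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def cardinal(f, h, N, x):
    """Truncated Whittaker cardinal (Sinc) interpolant: sum_k f(kh) sinc(x/h - k)."""
    return sum(f(k * h) * sinc(x / h - k) for k in range(-N, N + 1))

f = lambda x: math.exp(-x * x)  # smooth, rapidly decaying test function
# Worst-case error on [-3, 3] with step h = 0.5 and 25 samples
err = max(abs(cardinal(f, 0.5, 12, x / 10.0) - f(x / 10.0)) for x in range(-30, 31))
```

With only 25 samples at spacing h = 0.5 the interpolant already matches the Gaussian to roughly 10^-4; halving h roughly squares this aliasing error, which is the exponential convergence the abstract refers to.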
Abstract: The Electrical Load Simulator (ELS) is an important ground-based hardware-in-the-loop simulator used for aerodynamic torque-loading experiments on actuators under test. This work focuses on improving the transient response of the torque controller under parameter uncertainty of the ELS. The parameters of the load simulator are estimated online and the model is updated, eliminating the model error and improving the steady-state torque tracking response of the torque controller. To improve the transient control performance, the gain of the robust term of the sliding mode controller (SMC) is updated online by a fuzzy logic system, based on the amount of uncertainty in the load simulator parameters. The states of the load simulator that cannot be measured directly are estimated using a Luenberger observer updated with the newly estimated parameters. The stability of the control scheme is verified using Lyapunov theory, and the validity of the proposed control scheme is verified through simulations.
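The Luenberger observer used for the unmeasured states can be sketched in discrete time as follows; the plant matrices and observer gain are hypothetical stand-ins, not the ELS model:

```python
def mat_vec(A, x):
    """Dense matrix-vector product for small systems."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Hypothetical 2-state discrete plant x+ = A x with measurement y = x[0] (C = [1, 0])
A = [[1.0, 0.1],
     [0.0, 1.0]]
L = [0.6, 1.0]  # observer gain chosen so (A - L C) has eigenvalues inside the unit circle

x = [1.0, 0.5]    # true state (unknown to the observer)
xh = [0.0, 0.0]   # observer estimate, deliberately wrong at start
for _ in range(120):
    y = x[0]                         # measurement of the true plant
    innovation = y - xh[0]           # measurement residual
    # Luenberger update: xh+ = A xh + L (y - C xh); error obeys e+ = (A - L C) e
    xh = [v + l * innovation for v, l in zip(mat_vec(A, xh), L)]
    x = mat_vec(A, x)                # plant evolves
err = abs(x[0] - xh[0]) + abs(x[1] - xh[1])
```

Because the estimation error follows e⁺ = (A − LC)e regardless of the state trajectory, the estimate converges even though the second state is never measured; in the paper's scheme A would additionally be refreshed with the online parameter estimates.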
Abstract: Recently, much attention has been devoted to advanced techniques of system modeling. The polynomial neural network (PNN) is a group method of data handling (GMDH)-type algorithm useful for modeling nonlinear systems, but PNN performance depends strongly on the number of input variables and the polynomial order, both of which are typically determined by trial and error. In this paper, we introduce the genetic polynomial neural network (GPNN) to improve the performance of the PNN. GPNN determines the number of input variables and the order of all neurons with a genetic algorithm (GA), which searches over all possible values for the number of input variables and the polynomial order. GPNN performance is evaluated on two nonlinear systems: a quadratic equation and the Dow Jones stock index time series.
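A single PNN (GMDH-type) neuron fits a low-order polynomial of selected inputs by least squares; a sketch of one such partial description, here y = a + b·x1 + c·x2 + d·x1·x2 on synthetic data (the GA search over input subsets and polynomial orders is omitted, and the data are hypothetical):

```python
def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small dense linear system."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_pnn_neuron(x1s, x2s, ys):
    """Least-squares fit of y = a + b*x1 + c*x2 + d*x1*x2 via the normal equations."""
    rows = [[1.0, x1, x2, x1 * x2] for x1, x2 in zip(x1s, x2s)]
    m = len(rows[0])
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    Aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(m)]
    return solve(AtA, Aty)

# Synthetic data generated from known coefficients: y = 1 + 2*x1 - x2 + 0.5*x1*x2
pts = [(float(i), float(j)) for i in range(3) for j in range(3)]
ys = [1 + 2 * a - b + 0.5 * a * b for a, b in pts]
coef = fit_pnn_neuron([p[0] for p in pts], [p[1] for p in pts], ys)
```

In a full PNN, many such neurons are fit layer by layer and the best survive; the GA in GPNN replaces the trial-and-error choice of which inputs and what order each neuron uses.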
Abstract: Medical image data hiding has strict constraints such as high imperceptibility, high capacity and high robustness. Achieving these three requirements simultaneously is highly cumbersome. Some works have been reported in the literature on data hiding, watermarking and steganography suitable for telemedicine applications, but none is reliable in all aspects. Electronic Patient Report (EPR) data hiding for telemedicine demands that it be blind and reversible. This paper proposes a novel approach to blind reversible data hiding based on the integer wavelet transform. Experimental results show that this scheme outperforms the prior art in terms of zero BER (Bit Error Rate), higher PSNR (Peak Signal-to-Noise Ratio), and large EPR data embedding capacity, with WPSNR (Weighted Peak Signal-to-Noise Ratio) around 53 dB, compared with existing reversible data hiding schemes.
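Blind reversible schemes of this kind rest on an integer wavelet transform whose round trip is bit-exact; a one-level integer Haar (S-transform) sketch, a common choice, though not necessarily the paper's exact transform:

```python
def haar_int_forward(sig):
    """One-level integer Haar (S-transform): integer averages s and details d."""
    s = [(a + b) >> 1 for a, b in zip(sig[0::2], sig[1::2])]  # floor((a+b)/2)
    d = [a - b for a, b in zip(sig[0::2], sig[1::2])]
    return s, d

def haar_int_inverse(s, d):
    """Exact inverse: a = s + floor((d+1)/2), b = a - d. Lossless by construction."""
    out = []
    for sv, dv in zip(s, d):
        a = sv + ((dv + 1) >> 1)
        b = a - dv
        out += [a, b]
    return out

# Round trip on hypothetical 8-bit pixel values
sig = [5, 3, 7, 7, 0, 1, 255, 0]
s, d = haar_int_forward(sig)
```

Because the transform maps integers to integers and is exactly invertible, payload bits embedded in the detail coefficients can be removed and the original image restored, which is what "reversible" requires.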
Abstract: Most scientific programs have large input and output
data sets that require out-of-core programming or use virtual memory
management (VMM). Out-of-core programming is very error-prone
and tedious; as a result, it is generally avoided. However, in many instances, VMM is not an effective approach because it often results in substantial performance reduction. In contrast, compiler-driven I/O management allows a program's data sets to be retrieved in parts,
called blocks or tiles. Comanche (COmpiler MANaged caCHE) is a
compiler combined with a user level runtime system that can be used
to replace standard VMM for out-of-core programs. We describe
Comanche and demonstrate on a number of representative problems
that it substantially out-performs VMM. Significantly our system
does not require any special services from the operating system and
does not require modification of the operating system kernel.
Abstract: The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. An improved model for temperature prediction in Georgia was developed by including information on seasonality and modifying parameters of an existing artificial neural network model. Alternative models were compared by instantiating and training multiple networks for each model. The inclusion of up to 24 hours of prior weather information and inputs reflecting the day of year were among improvements that reduced average four-hour prediction error by 0.18°C compared to the prior model. Results strongly suggest model developers should instantiate and train multiple networks with different initial weights to establish appropriate model parameters.
Abstract: Many accidents happen because of fast driving, habitual overtime work, or fatigue. This paper presents a remote warning solution for vehicle collision avoidance using vehicular communication. The developed system integrates dedicated short range communication (DSRC) and the global positioning system (GPS) with an embedded system into a powerful remote warning system. DSRC communication technology is adopted as the bridge to transmit vehicular information and broadcast vehicle positions. The proposed system is divided into a positioning unit and a vehicular unit in each vehicle. The positioning unit provides position and heading information from the GPS module, while the vehicular unit receives the brake, throttle, and other signals via a controller area network (CAN) interface connected to each mechanism. The mobile hardware is built as an embedded system with an X86 processor running Linux. A vehicle communicates with other vehicles via DSRC in a non-addressed protocol with the wireless access in vehicular environments (WAVE) short message protocol. From the position data and vehicular information, this paper provides a conflict detection algorithm that performs time separation and remote warning with error-bubble consideration, and the warning information is displayed on-line on the screen. This system is able to enhance driver assistance services and realize critical safety by using vehicular information from neighboring vehicles.
Keywords: Dedicated short range communication, GPS, Controller area network, Collision avoidance warning system.
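The time-separation idea behind such conflict detection can be sketched as a constant-velocity closest-approach test against an error bubble (the paper's actual algorithm may differ; all values are hypothetical):

```python
import math

def time_to_conflict(p1, v1, p2, v2, bubble):
    """Earliest t >= 0 at which two constant-velocity vehicles come within
    `bubble` metres of each other, or None if they never do.
    Positions p* in metres, velocities v* in m/s, both as 2-D tuples."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    # Solve |dp + dv*t|^2 = bubble^2, i.e. a*t^2 + b*t + c = 0
    a = dvx * dvx + dvy * dvy
    b = 2.0 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - bubble * bubble
    if c <= 0.0:
        return 0.0          # already inside the error bubble
    if a == 0.0:
        return None         # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None         # closest approach stays outside the bubble
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None  # conflict only in the future counts

# Head-on example: 100 m apart, closing at 20 m/s, 10 m error bubble
t_head_on = time_to_conflict((0.0, 0.0), (10.0, 0.0), (100.0, 0.0), (-10.0, 0.0), 10.0)
```

A warning would be raised whenever the returned time falls below a chosen threshold; the bubble radius absorbs GPS position error.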
Abstract: In spite of all the advancements in software testing, debugging remains a labor-intensive, manual, time-consuming, and error-prone process. A candidate solution for enhancing the debugging process is to fuse it with the testing process. To achieve this integration, one possible approach is to categorize common software tests and errors, followed by an effort to fix the errors through general solutions for each test/error pair. Our approach to this issue is based on Christopher Alexander's pattern and pattern language concepts. The patterns in this language are grouped into three major sections and connect the three concepts of test, error, and debug. These patterns and their hierarchical relationships shape a pattern language that introduces a solution for solving software errors in a known testing context.
Finally, we introduce our framework ADE as a sample implementation supporting a pattern of the proposed language, which aims to automate the whole process of evolving software design via evolutionary methods.
Abstract: Most of the commonly used blind equalization algorithms are based on the minimization of a nonconvex and nonlinear cost function and a neural network gives smaller residual error as compared to a linear structure. The efficacy of complex valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present two neural network models for blind equalization of time-varying channels, for M-ary QAM and PSK signals. The complex valued activation functions, suitable for these signal constellations in time-varying environment, are introduced and the learning algorithms based on the CMA cost function are derived. The improved performance of the proposed models is confirmed through computer simulations.
Abstract: Noise has adverse effects on human health and comfort. Noise not only causes hearing impairment, but also acts as a causal factor for stress and raised systolic pressure. Additionally, it can be a causal factor in work accidents, both by masking hazards and warning signals and by impeding concentration. Industry
workers also suffer psychological and physical stress as well as
hearing loss due to industrial noise. This paper proposes an approach
to enable engineers to point out quantitatively the noisiest source for
modification, while multiple machines are operating simultaneously.
The model with the point source and spherical radiation in a free field
was adopted to formulate the problem. The procedure works very
well in ideal cases (point source and free field). However, most of the
industrial noise problems are complicated by the fact that the noise is
confined in a room. Reflections from the walls, floor, ceiling, and
equipment in a room create a reverberant sound field that alters the
sound wave characteristics from those for the free field. So the model
was validated for relatively low absorption room at NIT Kurukshetra
Central Workshop. The validation showed that the sound powers of the noise sources estimated under simultaneous operating conditions were on the lower side, within error limits of 3.56–6.35%, suggesting that the methodology is suitable for practical implementation in industry. To demonstrate the application of the
above analytical procedure for estimating the sound power of noise
sources under simultaneous operating conditions, a manufacturing
facility (Railway Workshop at Yamunanagar, India) having five
sound sources (machines) on its workshop floor is considered in this
study. The case study identified the two most effective candidates (noise sources) for noise control in the Railway Workshop, Yamunanagar. The study suggests that modifying the design of, and/or replacing, these two identified noisiest sources (machines) would be necessary to achieve an
effective reduction in noise levels. Further, the estimated data allows
engineers to better understand the noise situations of the workplace
and to revise the map when changes occur in noise level due to a
workplace re-layout.
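The free-field point-source model used above relates a measured sound pressure level to the source's sound power, and simultaneously operating incoherent sources combine on an energy basis; a sketch of these standard relations (all levels and distances hypothetical):

```python
import math

def sound_power_level(Lp, r):
    """Sound power level Lw (dB re 1 pW) of a point source radiating spherically
    in a free field, from SPL Lp (dB) measured at distance r (m):
    Lw = Lp + 20*log10(r) + 11, where 11 ~ 10*log10(4*pi)."""
    return Lp + 20.0 * math.log10(r) + 11.0

def combined_level(levels):
    """Total SPL of simultaneously operating incoherent sources (energy sum)."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels))
```

Note these relations hold for the ideal free-field case the model starts from; the room-reverberation effects the abstract discusses require an additional correction that is not included here.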
Abstract: An artificial neural network (ANN) model is
presented for the prediction of kinematic viscosity of binary mixtures
of poly (ethylene glycol) (PEG) in water as a function of temperature,
number-average molecular weight and mass fraction. Kinematic viscosity data of aqueous PEG solutions (0.55419×10⁻⁶ – 9.875×10⁻⁶ m²/s) were obtained from the literature for a wide range of temperatures (277.15 – 338.15 K), number-average molecular weights (200 – 10000), and mass fractions (0.0 – 1.0). A three-layer
feed-forward artificial neural network was employed. This model
predicts the kinematic viscosity with a mean square error (MSE) of
0.281 and the coefficient of determination (R²) of 0.983. The results
show that the kinematic viscosity of binary mixture of PEG in water
could be successfully predicted using an artificial neural network
model.
Abstract: This paper proposes an efficient method for the design of a two-channel quadrature mirror filter (QMF) bank. To achieve a minimum reconstruction error, close to perfect reconstruction, a linear optimization process is proposed. The prototype low-pass filter is designed using a Kaiser window function. A modified algorithm is developed to optimize the reconstruction error using a linear objective function through an iterative method. The results obtained show that the performance of the proposed algorithm is better than that of existing methods.
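One way to realize a scheme of this kind is to take a Kaiser-windowed low-pass prototype and iterate its cutoff frequency to minimize the peak deviation of the QMF distortion function A²(ω) + A²(π − ω) from unity (a sketch under this common normalization; the filter length, β, search grid, and objective are arbitrary choices and the paper's actual algorithm may differ):

```python
import math

def bessel_i0(x):
    """Zeroth-order modified Bessel function of the first kind (power series)."""
    total, term, k = 1.0, 1.0, 1
    while term > 1e-16 * total:
        term *= (x / (2.0 * k)) ** 2
        total += term
        k += 1
    return total

def kaiser_lowpass(N, beta, wc):
    """Odd-length linear-phase FIR: ideal cutoff-wc sinc taps times a Kaiser window."""
    M = (N - 1) // 2
    h = []
    for n in range(N):
        m = n - M
        ideal = wc / math.pi if m == 0 else math.sin(wc * m) / (math.pi * m)
        w = bessel_i0(beta * math.sqrt(1.0 - (m / M) ** 2)) / bessel_i0(beta)
        h.append(ideal * w)
    return h

def amplitude(h, omega):
    """Zero-phase amplitude response of the symmetric filter h."""
    M = (len(h) - 1) // 2
    return h[M] + 2.0 * sum(h[M + m] * math.cos(omega * m) for m in range(1, M + 1))

def recon_error(h, grid=201):
    """Peak deviation of T(w) = A(w)^2 + A(pi - w)^2 from 1 over [0, pi]."""
    err = 0.0
    for i in range(grid):
        w = math.pi * i / (grid - 1)
        t = amplitude(h, w) ** 2 + amplitude(h, math.pi - w) ** 2
        err = max(err, abs(t - 1.0))
    return err

# Iterate the cutoff near pi/2 and keep the prototype with the smallest peak error
best_wc, best_err = None, float("inf")
for i in range(101):
    wc = 1.45 + 0.005 * i
    e = recon_error(kaiser_lowpass(23, 6.0, wc))
    if e < best_err:
        best_wc, best_err = wc, e
```

Moving the cutoff slightly above π/2 raises A(π/2) toward 1/√2, where the two channel responses sum to unity; a plain half-band prototype with cutoff exactly π/2 instead leaves a large dip in T at ω = π/2, which is the reconstruction error the optimization removes.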
Abstract: In order to provide flexibility as well as survivability over a passive optical network (PON), a new automatic random fault-recovery mechanism with an arrayed-waveguide-grating based (AWG-based) optical switch (OSW) is presented. First, a wavelength-division-multiplexing and optical code-division multiple-access (WDM/OCDMA) scheme is configured to meet the requirements of the various geographical locations between the optical network units (ONU) and the optical line terminal (OLT). The AWG-based optical switch is designed as a central star-mesh topology to reduce duplicated redundant elements such as fibers and transceivers. Hence, with a simple monitoring and routing switch algorithm, random fault-recovery capability is achieved over the bi-directional (up/downstream) WDM/OCDMA scheme. When a distribution fiber (DF) fault occurs, or the bit error rate (BER) exceeds the 10⁻⁹ requirement, the primary/slave AWG-based OSWs are adjusted and controlled dynamically to restore the affected ONU groups via the other working DFs immediately.