Abstract: The aim of this study was to remove the two principal noise sources that disturb the surface diaphragm electromyography (EMG) signal: the electrocardiogram (ECG) artefact and the power-line interference artefact. The proposed algorithm is based on a new Least Mean Square (LMS) Widrow adaptive structure. Such structures require a reference signal that is correlated with the noise contaminating the signal. The noise references are extracted as follows: for the power-line interference, a reference is constructed mathematically from two cosine functions, one at 50 Hz (the fundamental) and one at 150 Hz (the first harmonic); for the ECG artefact, the estimate is obtained with a matching-pursuit technique combined with an LMS structure. Both removal procedures are achieved without the use of supplementary electrodes. These filtering techniques are validated on real recordings of the surface diaphragm EMG signal, and the performance of the proposed methods is compared with previously published results.
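As a minimal sketch of the Widrow adaptive structure described above, the following implements an LMS noise canceller with a mathematically constructed 50 Hz + 150 Hz reference. The sampling rate, amplitudes, step size, tap count, and the 7 Hz sinusoid standing in for the sEMG signal are all illustrative assumptions, not the paper's parameters.

```python
import math

def lms_cancel(noisy, ref, mu=0.01, n_taps=8):
    """Widrow-style adaptive noise canceller: an LMS FIR filter shapes the
    reference into an estimate of the correlated noise, which is subtracted;
    the error output is the cleaned signal."""
    w = [0.0] * n_taps
    clean = []
    for n in range(len(noisy)):
        x = [ref[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))      # noise estimate
        e = noisy[n] - y                              # error = cleaned sample
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]
        clean.append(e)
    return clean

fs = 1000.0
t = [n / fs for n in range(4000)]
signal = [0.2 * math.sin(2 * math.pi * 7 * ti) for ti in t]   # stand-in sEMG
hum = [math.cos(2 * math.pi * 50 * ti) + 0.5 * math.cos(2 * math.pi * 150 * ti)
       for ti in t]                                           # power-line noise
noisy = [s + h for s, h in zip(signal, hum)]
# reference: fundamental plus first harmonic, constructed mathematically
ref = [math.cos(2 * math.pi * 50 * ti) + math.cos(2 * math.pi * 150 * ti)
       for ti in t]
clean = lms_cancel(noisy, ref)
```

After convergence the residual in `clean` is dominated by the 7 Hz component, with the 50/150 Hz interference strongly attenuated.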
Abstract: This paper describes a new measuring algorithm for three-dimensional (3-D) braided composite materials. The braiding angle is an important parameter of braided composites, and the objective of this paper is to present an automatic system for measuring it. The algorithm is implemented in VC++ 6.0 on a PC. An advanced filtering algorithm for images of 3-D braided composite preforms has been developed. The procedure is completely automatic and relies on the gray-scale information content of the images and their local wavelet transform modulus maxima. Experimental results show that the proposed method is feasible. The algorithm was tested on both carbon-fiber and glass-fiber preforms.
Abstract: A new current-mode multifunction filter using a minimum number of passive elements is proposed. The proposed filter has a single input and four high-impedance outputs. It uses four passive elements (two capacitors and two resistors) and four dual-output second-generation current conveyors. Each output provides a different filter response, namely low-pass, high-pass, band-pass and band-reject. A sensitivity analysis is also carried out on both the ideal and non-ideal filter configurations. The validity of the proposed filter is verified through PSPICE simulations.
Abstract: The general idea behind the filter is to average a pixel
using other pixel values from its neighborhood, but simultaneously to
take care of important image structures such as edges. The main
concern of the proposed filter is to distinguish between any variations
of the captured digital image due to noise and due to image structure.
Edges give the image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused.
However, noise smoothing and edge enhancement are traditionally
conflicting tasks. Since most noise filtering behaves like a low pass
filter, the blurring of edges and loss of detail seems a natural
consequence. Techniques to remedy this inherent conflict often
encompass generation of new noise due to enhancement.
In this work a new fuzzy filter is presented for the reduction of additive noise in images. The filter consists of three stages: (1) fuzzy sets are defined in the input space to compute a fuzzy derivative in eight different directions; (2) a set of IF-THEN rules is constructed to perform fuzzy smoothing according to the contributions of neighboring pixel values; and (3) fuzzy sets are defined in the output space to obtain the filtered, edge-preserved image.
Experimental results are obtained to show the feasibility of the
proposed approach with two dimensional objects.
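The three-stage filter above uses directional fuzzy derivatives and rule-based smoothing; as a heavily simplified single-stage illustration of the same idea, each neighbour can contribute in proportion to the membership of its difference from the centre pixel in a triangular "small difference" fuzzy set. The membership shape and the parameter K below are assumptions for illustration, not the paper's rule base.

```python
def fuzzy_smooth(img, K=50.0):
    """Simplified fuzzy smoothing: neighbours whose difference from the
    centre pixel has high membership in the 'small difference' set
    (triangular, zero beyond K) dominate the weighted average, so large
    edges contribute nothing and are preserved."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num, den = 0.0, 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        d = abs(img[ni][nj] - img[i][j])
                        mu = max(0.0, 1.0 - d / K)  # membership of "small"
                        num += mu * img[ni][nj]
                        den += mu
            out[i][j] = num / den
    return out
```

On a sharp 0/100 edge with K=50, pixels across the edge get zero membership, so the edge survives exactly while moderate noise is averaged out.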
Abstract: Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer to some extent from visibly blurred edges and jagged artifacts. This paper presents an adaptive feature-preserving bidirectional flow process, where an inverse diffusion is performed to sharpen edges along the directions normal to the isophote lines (edges), while a normal diffusion is performed along the tangent directions to remove artifacts (“jaggies"). In order to preserve image features such as edges, corners and textures, the nonlinear diffusion coefficients are locally adjusted according to the directional derivatives of the image. Experimental results on synthetic and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolation.
Abstract: The intent of this study is to evaluate the effectiveness of a surge suppressor, consisting of an MOV and a T-type low-pass filter, for power supplies used by automation devices in power distribution systems. Books, journal articles and electronic sources related to surge protection of such power supplies were consulted, and the useful information was organized, analyzed and developed into five parts: characteristics of the surge wave, protection against the surge wave, impedance characteristics of the target, Matlab simulation of the circuit response to a 5 kV, 1.2/50 µs surge wave, and suggestions for surge protection. The results indicate that the load situation has a great impact on the effectiveness of the surge protective device. Therefore, the type and parameters of the surge protective device must be carefully selected, and careful attention to load matching is also vital.
Abstract: In this paper a bank of velocity filters is devised for isolating a moving object with a specific velocity in a sequence of frames. The approach is a 3-D FFT based experimental procedure that does not rely on theoretical velocity-filter design. Velocity filters are built from the spectral signature of each separate moving object. Experimentation reveals the ability of the constructed filter bank to separate moving objects with respect to both the magnitude and the direction of their velocity.
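The spectral signature that such filters exploit can be seen in a toy setting reduced to one spatial dimension plus time (the full method uses 3-D FFTs of the frame volume; the brute-force DFT, grid size, and unit velocity below are illustrative assumptions). An impulse translating at velocity v concentrates all its spatio-temporal spectral energy on the plane where the temporal and spatial frequencies satisfy kt + v·kx ≡ 0, which is exactly what a velocity filter bank partitions.

```python
import cmath

N = 8   # frames and pixels per scanline (toy size)
v = 1   # pixels per frame
# impulse translating at velocity v: s[t][x] = 1 iff x == (v*t) mod N
s = [[1.0 if x == (v * t) % N else 0.0 for x in range(N)] for t in range(N)]

def dft2(img):
    """Brute-force 2-D DFT (time axis first, space axis second)."""
    n = len(img)
    out = [[0j] * n for _ in range(n)]
    for kt in range(n):
        for kx in range(n):
            acc = 0j
            for t in range(n):
                for x in range(n):
                    acc += img[t][x] * cmath.exp(-2j * cmath.pi
                                                 * (kt * t + kx * x) / n)
            out[kt][kx] = acc
    return out

S = dft2(s)
# energy lies only on the velocity plane kt + v*kx == 0 (mod N)
```

A velocity filter for v keeps only the bins on this plane; objects with other velocities fall on different planes and are rejected.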
Abstract: User-based Collaborative filtering (CF), one of the
most prevailing and efficient recommendation techniques, provides
personalized recommendations to users based on the opinions of other
users. Although the CF technique has been successfully applied in various applications, it suffers from serious sparsity problems. The cloud-model approach addresses these problems by constructing the user's global preference, represented by a cloud eigenvector. The user-based CF approach works well with dense datasets, while the cloud-model CF approach performs better when the dataset is sparse. In this paper, we present a
hybrid approach that integrates the predictions from both the
user-based CF and the cloud-model CF approaches. The experimental
results show that the proposed hybrid approach can ameliorate the
sparsity problem and provide an improved prediction quality.
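A sketch of how such a hybrid can integrate two component predictions follows. The user-based CF side is a standard Pearson, mean-offset predictor; the cloud-model predictor is replaced by a stub value, and the linear blend with weight lam is one plausible integration scheme, not necessarily the paper's.

```python
def pearson(a, b):
    """Pearson correlation over the items two users both rated."""
    common = sorted(set(a) & set(b))
    ra = [a[i] for i in common]
    rb = [b[i] for i in common]
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    da = sum((x - ma) ** 2 for x in ra) ** 0.5
    db = sum((y - mb) ** 2 for y in rb) ** 0.5
    return num / (da * db) if da and db else 0.0

def user_cf_predict(ratings, user, item):
    """Classic user-based CF: the user's mean plus the similarity-weighted
    mean-offset of neighbours who rated the item."""
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for v, r in ratings.items():
        if v != user and item in r:
            s = pearson(ratings[user], r)
            mv = sum(r.values()) / len(r)
            num += s * (r[item] - mv)
            den += abs(s)
    return mu + num / den if den else mu

def hybrid_predict(p_user, p_cloud, lam=0.6):
    """Linear blend of the user-based and cloud-model predictions."""
    return lam * p_user + (1 - lam) * p_cloud

ratings = {
    'u1': {'i1': 5, 'i2': 3, 'i3': 4},
    'u2': {'i1': 4, 'i2': 2, 'i3': 3, 'i4': 5},
    'u3': {'i1': 1, 'i2': 5, 'i3': 2, 'i4': 1},
}
p_user = user_cf_predict(ratings, 'u1', 'i4')
p_cloud = 3.8   # stand-in for the cloud-model prediction (not implemented here)
p = hybrid_predict(p_user, p_cloud, lam=0.6)
```

When the user's neighbourhood is sparse, `den` shrinks and the cloud-model term can be weighted up, which is the intuition behind the hybrid.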
Abstract: In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher-level decomposition, where M and N stand for the filter length and the row width of the image respectively. Furthermore, the proposed 2D DWT processes in the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT of an N×N image in a period of N²(1-2^(-2J))/3. Compared to existing approaches, the proposed architecture shows 100% hardware utilization and high throughput rates. To mitigate the long critical path delay due to the cascaded lattices, we can apply a four-stage pipeline technique while retaining 100% hardware utilization. The proposed architecture can be applied in real-time video signal processing.
Abstract: Image enhancement is one of the most important and challenging preprocessing steps in almost all applications of image processing. Various methods, such as the median filter and the α-trimmed mean filter, have been suggested, and it has been shown that the α-trimmed mean filter is a modification of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise; in spite of their simplicity, they achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we suggest a new ε-filter that utilizes the α-trimmed mean. We argue that this new method gives better outcomes than previous ones, and the experimental results confirm this claim.
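One plausible reading of combining an ε-filter with the α-trimmed mean is sketched below: within each window, only samples within ε of the centre take part (the ε-filter idea, which preserves large edges), and their α-trimmed mean replaces the plain moving average. The exact construction, window size, and parameter values are assumptions for illustration, not the paper's definition.

```python
def eps_alpha_filter(x, half=2, eps=10.0, alpha=0.25):
    """epsilon-filter with alpha-trimmed mean: keep window samples within
    eps of the centre sample, sort them, trim an alpha fraction from each
    end, and output the mean of the remainder."""
    n = len(x)
    y = []
    for i in range(n):
        win = [x[j] for j in range(max(0, i - half), min(n, i + half + 1))
               if abs(x[j] - x[i]) <= eps]
        win.sort()
        t = int(alpha * len(win))
        core = win[t:len(win) - t] or win   # fall back if trimming empties
        y.append(sum(core) / len(core))
    return y
```

The ε gate keeps samples from across a large edge out of the average, so the edge survives, while the trim discards the most extreme of the remaining samples before averaging.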
Abstract: Several trillion cigarettes produced worldwide annually lead to many thousands of kilograms of toxic waste. Cigarette butts (CBs) accumulate in the environment due to the poor biodegradability of the cellulose acetate filters. This paper presents some of the results from a continuing study on recycling CBs into fired clay bricks. Physico-mechanical properties of fired clay bricks manufactured with different percentages of CBs are reported and discussed. The results show that the density of fired bricks was reduced by up to 30 %, depending on the percentage of CBs incorporated into the raw materials. Similarly, the compressive strength of bricks tested decreased according to the percentage of CBs included in the mix. The thermal conductivity performance of bricks was improved by 51 and 58 % for 5 and 10 % CBs content respectively. Leaching tests were carried out to investigate the levels of possible leachates of heavy metals from the manufactured clay-CB bricks. The results revealed trace amounts of heavy metals.
Abstract: This paper presents the design and prototype implementation of an intelligent data processing framework for ubiquitous sensor networks. Much of the focus is on how to handle the sensor data stream and on interoperability between the low-level sensor data and application clients. Our framework first provides systematic middleware that mediates the interaction between the application layer and the low-level sensors, filtering and integrating a great volume of sensor data to create value-added context information. Then, an agent-based architecture is proposed for real-time data distribution, efficiently forwarding a specific event to the appropriate application registered in the directory service via an open interface. The prototype implementation demonstrates that our framework can host sophisticated applications on a ubiquitous sensor network and can autonomously evolve into new middleware, taking advantage of promising technologies such as software agents, XML, cloud computing, and the like.
Abstract: Fast depth estimation from binocular vision is often desired for autonomous vehicles, but most algorithms cannot easily be put into practice because of their high computational cost. We present an image-processing technique that can rapidly estimate a depth image from binocular vision images. The depth is estimated by finding the lines that represent the best-matched areas in the disparity space image; an edge-emphasizing filter is used when detecting these lines. The final depth estimate is produced after a smoothing filter is applied. Our method is a compromise between local methods and global optimization.
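The disparity search underlying such methods can be illustrated with a minimal sum-of-absolute-differences (SAD) block matcher along a single scanline (the paper's disparity-space line detection and filtering are not reproduced; the scanline data, block size, and disparity range below are illustrative).

```python
def disparity_1d(left, right, x, block=3, max_d=5):
    """Find the horizontal shift (disparity) of a small block taken from
    the left scanline at position x that best matches the right scanline,
    by minimising the sum of absolute differences (SAD)."""
    ref = left[x:x + block]
    best_d, best_sad = 0, float('inf')
    for d in range(max_d + 1):
        if x - d < 0 or x - d + block > len(right):
            continue
        cand = right[x - d:x - d + block]
        sad = sum(abs(a - b) for a, b in zip(ref, cand))
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# a bright feature shifted left by 2 pixels in the right image
left = [0, 0, 10, 40, 90, 40, 10, 0, 0, 0, 0, 0]
right = left[2:] + [0, 0]
```

Larger disparities correspond to nearer objects; depth then follows from the camera baseline and focal length.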
Abstract: In this paper a technique for increasing the convergence rate of a fractionally spaced channel equalizer is proposed. Instead of symbol-spaced updating of the equalizer filter, a mechanism has been devised to update the filter at a higher rate. This ensures faster convergence of the equalizer filter and is therefore less time-consuming. The proposed technique has been simulated and tested on two-ray channel models with various delay spreads, including minimum-phase and non-minimum-phase channels. Simulation results suggest that the proposed technique outperforms conventional symbol-spaced updating of the equalizer filter.
Abstract: Smoothing, or filtering, of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data, and its generalization led to the development of the Savitzky-Golay filter. Many window smoothing methods have been developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and the Kaiser windows. Function approximation of the data by polynomial regression, Fourier expansion or wavelet expansion also yields smoothed data, and wavelets smooth the data to a great extent through thresholding of the wavelet coefficients. Almost all smoothing methods destroy peaks, flattening them as the support of the window is increased. In certain applications it is desirable to retain peaks while smoothing the data as much as possible. In this paper we present a methodology, called peak-wise smoothing, that smooths the data to any desired level without losing the major peak features.
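For context on the methods surveyed above (this is the classical Savitzky-Golay smoother, not the paper's peak-wise method), a window-5 quadratic-fit filter uses the well-known closed-form weights (-3, 12, 17, 12, -3)/35. Because it fits a quadratic locally, it reproduces polynomial trends exactly and flattens peaks far less than a plain moving average; the signals below are illustrative.

```python
def savgol5(x):
    """Savitzky-Golay smoothing, window 5, quadratic fit: the classic
    closed-form convolution weights (-3, 12, 17, 12, -3)/35."""
    c = [-3, 12, 17, 12, -3]
    y = list(x)   # the two edge samples on each side are left untouched
    for n in range(2, len(x) - 2):
        y[n] = sum(ck * x[n + k - 2] for k, ck in enumerate(c)) / 35.0
    return y
```

On a unit pulse the filter keeps 17/35 ≈ 0.49 of the peak height, versus 1/5 = 0.2 for a width-5 moving average, illustrating why window choice trades smoothing against peak loss.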
Abstract: The design problem of Infinite Impulse Response (IIR)
digital filters is usually expressed as the minimization problem of
the complex magnitude error that includes both the magnitude and
phase information. However, the group delay of the filter obtained
by solving such design problem may be far from the desired group
delay. In this paper, we propose a design method of stable IIR digital
filters with prespecified maximum group delay errors. In the proposed
method, the approximation problems of the magnitude-phase and
group delay are separately defined, and these two approximation
problems are alternately solved using successive projections. As a
result, the proposed method can design IIR filters that satisfy the
prespecified allowable errors for not only the complex magnitude but
also the group delay by alternately executing the coefficient update
for the magnitude-phase and the group delay approximation. The
usefulness of the proposed method is verified through some examples.
Abstract: The permanent magnet synchronous motor (PMSM) is useful in many applications, and vector control is a popular way of controlling it. In this paper, an optimal vector control for the PMSM is first designed and its results are compared with conventional vector control. Then, the measurements are assumed to be noisy and the linear quadratic Gaussian (LQG) methodology is used to filter the noise; the results of the noisy optimal vector control and the filtered optimal vector control are compared with each other. The nonlinearity of the PMSM and the presence of the inverter in its control circuit make the system nonlinear and time-variant. By deriving an average model, the system becomes nonlinear but time-invariant, and the nonlinear system is then converted to a linear one by linearizing the model around the average values. This model is used to optimize the vector control, and the two optimal vector controls are compared with each other. Simulation results show that the performance and the robustness to noise of the control system are greatly improved.
Abstract: The research presented in this paper is part of an on-going project applying neural network and fuzzy models to evaluate the sociological factors that affect the educational performance of students in Sri Lanka. One of its major goals is to prepare the ground for a counseling tool that helps these students perform better in their examinations, especially the G.C.E. O/L (General Certificate of Education, Ordinary Level) examination. Closely related sociological factors are collected as raw data; the noise in these data is filtered through a fuzzy interface, and a supervised neural network is used to recognize performance patterns against the chosen social factors.
Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up much
more storage than a typical text message and without compression
images would be extremely clumsy to store and distribute. The
amount of information required to store pictures on modern
computers is quite large in relation to the amount of bandwidth
commonly available to transmit them over the Internet and
applications. Image compression addresses the problem of reducing
the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). The method of
image compression that will be analyzed in this paper is based on the
lossy JPEG image compression technique, the most popular
compression technique for color images. JPEG compression is able to
greatly reduce file size with minimal image degradation by throwing
away the least “important" information. In standard JPEG, both chroma components are downsampled simultaneously; in this paper we compare the results when the compression is done by downsampling only a single chroma component. We demonstrate that a higher compression ratio is achieved when the blue-difference chroma (Cb) is downsampled than when the red-difference chroma (Cr) is downsampled, but that the peak signal-to-noise ratio is higher when Cr is downsampled than when Cb is downsampled. In particular, we use the image hats.jpg as a demonstration of JPEG compression using a low-pass filter, and show that the image is compressed with barely any visual difference under both methods.
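The two operations the comparison rests on, colour-space conversion and chroma downsampling, can be sketched as follows. The RGB-to-YCbCr formulas are the standard full-range BT.601 conversion used by baseline JPEG; the 2x2-averaging downsampler applied to a single chroma plane is a simplified stand-in for the paper's low-pass filtering.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr, as used by baseline JPEG."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def downsample2(plane):
    """2x2 block averaging of a single chroma plane (4:2:0-style),
    quartering the number of samples stored for that component."""
    h, w = len(plane), len(plane[0])
    return [[(plane[i][j] + plane[i][j + 1]
              + plane[i + 1][j] + plane[i + 1][j + 1]) / 4.0
             for j in range(0, w, 2)] for i in range(0, h, 2)]
```

Downsampling only Cb (or only Cr) keeps the other chroma plane at full resolution, which is the asymmetry whose rate/PSNR trade-off the paper measures.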
Abstract: A framework is presented for estimating the state of a dynamically varying environment in which data are generated by heterogeneous sources possessing only partial knowledge of the environment. The framework is derived entirely within the Dempster-Shafer and evidence filtering frameworks. The belief about the current state is expressed through belief and plausibility functions. In addition to the Single-Input Single-Output (SISO) evidence filter, a Multiple-Input Single-Output (MISO) evidence filtering approach is introduced. A variety of applications, such as situational estimation in an emergency environment, can be developed successfully within the framework. A fire propagation scenario is used to validate the proposed framework, and simulation results are presented.
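The core operation behind combining evidence from heterogeneous sources is Dempster's rule of combination, with belief and plausibility read off the combined mass function. The sketch below is generic Dempster-Shafer machinery, not the paper's evidence filter; the fire-scenario mass values are illustrative.

```python
def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the frame of discernment: products of
    masses flow to set intersections, and the conflicting (empty
    intersection) mass is renormalised away."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, hyp):
    """Bel(hyp): total mass of focal elements contained in hyp."""
    return sum(v for s, v in m.items() if s <= hyp)

def plausibility(m, hyp):
    """Pl(hyp): total mass of focal elements intersecting hyp."""
    return sum(v for s, v in m.items() if s & hyp)

F = frozenset({'fire'})
NO = frozenset({'no_fire'})
FRAME = F | NO
m1 = {F: 0.6, FRAME: 0.4}            # sensor 1: partial evidence of fire
m2 = {F: 0.5, NO: 0.2, FRAME: 0.3}   # sensor 2: partly conflicting report
m = dempster(m1, m2)
```

Here the conflict mass is 0.6 x 0.2 = 0.12, and after renormalisation the belief interval for "fire" is [Bel, Pl] ≈ [0.773, 0.909], the kind of state estimate the evidence filter propagates over time.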