Abstract: Median filters with larger windows offer greater smoothing and are more robust than median filters with smaller windows. However, larger median smoothers (median filters with larger windows) fail to track low-order polynomial trends in signals; as a result, constant regions are produced at signal corners, leading to the loss of fine detail. In this paper, an algorithm is introduced that combines the ability of the 3-point median smoother to preserve low-order polynomial trends with the superior noise-filtering characteristics of the larger median smoother. The proposed algorithm (called the combiner algorithm in this paper) is evaluated on a test image corrupted with different types of noise, and the results obtained are included.
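As an illustration of the two building blocks (a sketch only; the combiner rule below is hypothetical and not necessarily the paper's algorithm), a running median smoother and a naive combiner that prefers the large-window output except where it deviates strongly from the trend-preserving 3-point output:

```python
from statistics import median

def median_smooth(x, w):
    # Running median with an odd window w; edge windows are truncated.
    r = w // 2
    return [median(x[max(0, i - r):i + r + 1]) for i in range(len(x))]

def combine(x, w=7, tol=2.0):
    # Hypothetical combiner (illustration only): prefer the large-window
    # median for noise suppression, but fall back to the 3-point median
    # where the two disagree by more than tol, i.e. near trends/corners
    # that the large window flattens.
    s3 = median_smooth(x, 3)
    sw = median_smooth(x, w)
    return [b if abs(a - b) <= tol else a for a, b in zip(s3, sw)]
```

The tolerance `tol` and window `w` are illustrative parameters, not values from the paper.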
Abstract: Automatic Vehicle Identification (AVI) has many
applications in traffic systems (highway electronic toll collection, red
light violation enforcement, border and customs checkpoints, etc.).
License Plate Recognition is an effective form of AVI system. In
this study, a smart and simple algorithm is presented for a vehicle
license plate recognition system. The proposed algorithm consists of
three major parts: extraction of the plate region, segmentation of
characters, and recognition of the plate characters. For extracting the
plate region, edge detection and smearing algorithms are used. In the
segmentation part, smearing, filtering and some morphological
algorithms are used. Finally, statistics-based template matching is
used for recognition of the plate characters. The performance of the
proposed algorithm has been tested on real images. The experimental
results show that the algorithm performs well in car license plate
recognition.
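The smearing step mentioned above is essentially run-length smoothing; a minimal sketch of horizontal smearing on one binary row (the gap threshold is illustrative, not a value from the paper):

```python
def smear_row(row, max_gap):
    # Horizontal run-length smearing: runs of 0s (background) no longer
    # than max_gap that lie between 1s (foreground) are filled, merging
    # nearby characters into a solid plate-candidate block.
    out = list(row)
    i = 0
    while i < len(row):
        if row[i] == 0:
            j = i
            while j < len(row) and row[j] == 0:
                j += 1
            # fill only interior gaps, not leading/trailing background
            if 0 < i and j < len(row) and (j - i) <= max_gap:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out
```

Applying the same pass vertically, and intersecting the results, yields the solid candidate regions used for plate extraction.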
Abstract: In this paper we consider the problem of change
detection and tracking of non-stationary signals. Using parametric
estimation of signals based on least-squares lattice adaptive filters,
we consider statistical parametric methods for change detection based
on likelihood ratios and hypothesis tests. In order to track the signal
dynamics, we introduce a compensation procedure into the adaptive
estimation. This improves the adaptive estimation performance and
speeds up its convergence after a change is detected.
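The likelihood-ratio idea can be illustrated with the classic one-sided CUSUM statistic on prediction residuals (a standard textbook sketch, not the paper's lattice-filter formulation; the drift and threshold values are illustrative):

```python
def cusum_detect(residuals, drift=0.5, threshold=5.0):
    # One-sided CUSUM: g_k = max(0, g_{k-1} + e_k - drift).
    # An alarm is raised when the cumulative evidence g_k exceeds threshold.
    g = 0.0
    for k, e in enumerate(residuals):
        g = max(0.0, g + e - drift)
        if g > threshold:
            return k  # index at which the change is detected
    return None       # no change detected
```

After an alarm, an adaptive estimator would typically be re-initialized or compensated, which is the role of the compensation procedure described above.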
Abstract: This is the second part of the paper. Building on the
core subroutine tests reported previously, it focuses on the
large-scale simulation of turbulence governed by the full STF
Navier-Stokes equations. The law of the wall is found to be a
plausible model of the boundary-layer dynamics in this study. Model
validations proceed to include velocity profiles of a stationary
turbulent Couette flow, pure sloshing flow simulations, and the
identification of water-surface inclination due to fluid accelerations.
Errors resulting from the irrotational and hydrostatic assumptions are
explored by studying a wind-driven water circulation with no shaking.
Illustrative examples show that this numerical strategy works for the
simulation of sloshing-shear mixed flow in a 3-D rigid
rectangular-base tank.
Abstract: In this manuscript, a wavelet-based blind
watermarking scheme is proposed as a means to secure the authenticity
of a fingerprint. The information used for identification or
verification of a fingerprint mainly lies in its minutiae. By robustly
watermarking the minutiae in the fingerprint image itself, the useful
information can be extracted accurately even if the fingerprint is
severely degraded. The minutiae are converted into a binary
watermark, and embedding this watermark in the detail regions
increases the robustness of the watermarking at little to no additional
cost in image quality. It has been shown experimentally that when the
minutiae are embedded into the wavelet detail coefficients of a
fingerprint image in spread-spectrum fashion using a pseudorandom
sequence, robustness increases with the amplification factor "K"
while perceptual invisibility decreases with it. The DWT-based
technique has been found to be very robust against noise, geometric
distortion, filtering and JPEG compression attacks, and also gives
remarkably better performance than a DCT-based technique in terms
of correlation coefficient and number of erroneous minutiae.
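A minimal 1-D sketch of the spread-spectrum idea (a one-level Haar transform stands in for the full 2-D DWT; the amplification factor K and the PN seed are illustrative, and this is not the paper's exact embedding):

```python
import random

def haar_1d(x):
    # One-level 1-D Haar transform: pairwise averages and differences.
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def inv_haar_1d(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed_bit(signal, bit, K, seed=1):
    # Spread-spectrum embedding: detail[i] += K * pn[i] * (+1 or -1).
    approx, detail = haar_1d(signal)
    pn = [random.Random(seed).choice((-1, 1)) or 0 for _ in detail]  # placeholder, replaced below
    rng = random.Random(seed)
    pn = [rng.choice((-1, 1)) for _ in detail]
    s = 1 if bit else -1
    detail = [d + K * p * s for d, p in zip(detail, pn)]
    return inv_haar_1d(approx, detail)

def detect_bit(signal, seed=1):
    # Blind detection: correlate the detail band with the same PN sequence.
    _, detail = haar_1d(signal)
    rng = random.Random(seed)
    pn = [rng.choice((-1, 1)) for _ in detail]
    corr = sum(d * p for d, p in zip(detail, pn))
    return corr > 0
```

Larger K strengthens the correlation (robustness) but perturbs the coefficients more (less invisibility), which is the trade-off reported above.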
Abstract: Flight management system (FMS) is a specialized
computer system that automates a wide variety of in-flight tasks,
reducing the workload on the flight crew to the point that modern
aircraft no longer carry flight engineers or navigators. The primary
function of FMS is to perform the in-flight management of the flight
plan using various sensors (such as GPS and INS often backed up by
radio navigation) to determine the aircraft's position. From the
cockpit, the FMS is normally controlled through a Control Display
Unit (CDU) which incorporates a small screen and keyboard or touch
screen. This paper investigates the performance of GPS/INS
integration techniques in which the data fusion process is performed
using Kalman filtering. This includes the importance of sensor
calibration as well as the alignment of the strapdown inertial
navigation system. The limitations of inertial navigation systems are
investigated in order to understand why an INS is often integrated
with other navigation aids rather than operating in standalone mode.
Finally, both the loosely coupled and tightly coupled configurations
are analyzed for several types of situations and operational
conditions.
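In the simplest scalar case, the Kalman-filter fusion step reduces to the textbook predict/update pair; a minimal sketch (constant-position model with illustrative noise values, far simpler than a full loosely or tightly coupled GPS/INS filter):

```python
def kalman_step(x, P, z, Q=0.01, R=1.0):
    # Predict with a trivial constant-state model, then correct with a
    # GPS-like measurement z. Q: process noise, R: measurement noise.
    P = P + Q                 # predicted covariance
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # state update toward the measurement
    P = (1.0 - K) * P         # covariance update
    return x, P

def fuse(measurements, x0=0.0, P0=100.0):
    # Run the filter over a sequence of noisy position measurements.
    x, P = x0, P0
    for z in measurements:
        x, P = kalman_step(x, P, z)
    return x
```

In a real integration the state vector also carries velocity, attitude and sensor biases, and the INS supplies the prediction model; the scalar form above only illustrates the gain-weighted blending of prediction and measurement.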
Abstract: This paper presents a survey of network bandwidth
management based on papers published in the IEEE Xplore database
over the three years from 2009 to 2011. Network bandwidth
management is a current issue for computer engineering applications
and systems. A detailed comparison of the published papers is
presented to look further into this critical research area for IP-based
networks. Important information is presented, such as the network
focus area, the models used for the IP-based network, and the
filtering or scheduling used at the network application layer. Much
research on bandwidth management has been done in the broad
network area, but less has been done specifically for IP-based
networks at the application layer. A few studies have contributed new
schemes or enhanced models, but bandwidth management issues still
arise at the application layer. This survey is intended as basic
research towards the implementation of a network bandwidth
management technique, a new framework model, and a scheduling
scheme or algorithm for an IP-based network, focusing on a
bandwidth control mechanism that prioritizes network traffic at the
application layer.
Abstract: We consider optimal channel equalization for MIMO
(multi-input/multi-output) time-varying channels in the sense of
MMSE (minimum mean-squared-error), where the observation noise
can be non-stationary. We show that all ZF (zero-forcing) receivers
can be parameterized in an affine form which eliminates completely
the ISI (inter-symbol-interference), and optimal channel equalizers
can be designed through minimization of the MSE (mean-squared-error)
between the detected signals and the transmitted signals,
among all ZF receivers. We demonstrate that the optimal channel
equalizer is a modified Kalman filter, and show that under the AWGN
(additive white Gaussian noise) assumption, the proposed optimal
channel equalizer minimizes the BER (bit error rate) among all
possible ZF receivers. Our results are applicable to optimal channel
equalization for DWMT (discrete wavelet multitone), multirate transmultiplexers,
OFDM (orthogonal frequency division multiplexing),
and DS (direct sequence) CDMA (code division multiple access)
wireless data communication systems. A design algorithm for optimal
channel equalization is developed, and several simulation examples
are worked out to illustrate the proposed design algorithm.
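For the memoryless special case, zero forcing reduces to inverting the channel matrix; a minimal 2×2 sketch (the time-varying ISI case treated in the paper requires the modified-Kalman-filter machinery, which this does not show):

```python
def zf_equalize_2x2(H, y):
    # Zero-forcing equalization for a memoryless 2x2 MIMO channel:
    # x_hat = H^{-1} y, which removes inter-stream interference exactly.
    (a, b), (c, d) = H
    det = a * d - b * c           # assumed nonzero (invertible channel)
    Hinv = [[d / det, -b / det],
            [-c / det, a / det]]
    return [Hinv[0][0] * y[0] + Hinv[0][1] * y[1],
            Hinv[1][0] * y[0] + Hinv[1][1] * y[1]]
```

The affine parameterization in the paper describes the whole family of such interference-free receivers, among which the MSE is then minimized.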
Abstract: This paper presents a low-cost design for a heart beat monitoring device using reflectance-mode PhotoPlethysmography (PPG). PPG is known for its simple construction, ease of use and cost effectiveness, and can provide information about changes in cardiac activity as well as aid in earlier non-invasive diagnostics. The proposed device is divided into three phases. First is the detection of pulses through the fingertip. The signal is then passed to the signal processing unit for amplification, filtering and digitizing. Finally, the heart rate is calculated and displayed on the computer using a parallel port interface. The paper concludes with a prototype of the device, followed by a verification procedure for the heartbeat signal obtained in a laboratory setting.
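The rate-calculation phase can be sketched as a threshold-and-peak count over the digitized PPG samples (an illustrative sketch; the threshold, sampling rate and peak rule are assumptions, not the device's actual firmware):

```python
def heart_rate(samples, fs, thresh=0.5):
    # Candidate pulse peaks: local maxima above the threshold.
    peaks = [i for i in range(1, len(samples) - 1)
             if samples[i] > thresh
             and samples[i] > samples[i - 1]
             and samples[i] > samples[i + 1]]
    if len(peaks) < 2:
        return None
    # Mean peak-to-peak interval in seconds -> beats per minute.
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

With a clean, well-filtered PPG signal each pulse produces one dominant local maximum, so counting peak intervals directly gives the heart rate.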
Abstract: In this paper the reference current for Voltage Source
Converter (VSC) of the Shunt Active Power Filter (SAPF) is
generated using the Synchronous Reference Frame method,
incorporating a PI controller with an anti-windup scheme. The
proposed method improves the harmonic filtering by compensating
for the windup phenomenon caused by the integral term of the PI
controller.
Using a reference frame transformation, the current is transformed
from the stationary a-b-c frame to the rotating 0-d-q frame. Using
the PI controller, the current in the 0-d-q frame is controlled to
obtain the desired reference signal. A controller with integral action
combined with an actuator that becomes saturated can produce
undesirable effects. If the control error is so large that the integrator
saturates the actuator, the feedback path becomes ineffective, because
the actuator remains saturated even if the process output changes.
The integrator, being an unstable system, may then integrate up to a
very large value, a phenomenon known as integrator windup.
Implementing an integrator anti-windup circuit turns off the integral
action when the actuator saturates, hence improving the performance
of the SAPF and dynamically compensating harmonics in the power
network. In this paper the system performance is examined with a
Shunt Active Power Filter simulation model.
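The anti-windup action described above can be sketched as conditional integration (clamping), one common anti-windup circuit among several; the gains and actuator limits below are illustrative, not values from the paper:

```python
def pi_step(err, integ, kp=1.0, ki=1.0, dt=1.0, umin=-1.0, umax=1.0):
    # PI control with conditional-integration anti-windup: the integrator
    # state is frozen whenever the actuator output is saturated, so the
    # integral term cannot wind up while the feedback path is inactive.
    new_integ = integ + err * dt
    u = kp * err + ki * new_integ
    u_sat = min(max(u, umin), umax)
    if u != u_sat:              # actuator saturated -> do not integrate
        new_integ = integ
    return u_sat, new_integ
```

Back-calculation, which bleeds the integrator toward the saturated value instead of freezing it, is the other widely used variant.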
Abstract: Estimation of voltage stability based on an optimal
filtering method is presented. The PV curve is used as a tool for
voltage stability analysis, and dynamic voltage stability estimation is
performed using a particle filter. The optimum value (nose point) of
the PV curve, which represents the critical voltage and maximum
loading condition at a specified point of measurement, can be
estimated by estimating the parameters of the PV curve equation.
Voltage stability is then estimated dynamically by analyzing the
loading margin condition from the estimated equation.
Abstract: In this paper we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a Log-maximum a posteriori probability (MAP) and/or a Soft-output Viterbi algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to classical data-aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests on bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.
Abstract: The dynamic or complex modulus test is considered
to be a mechanistically based laboratory test to reliably characterize
the strength and load-resistance of Hot-Mix Asphalt (HMA) mixes
used in the construction of roads. The most common observation is
that the data collected from these tests are often noisy and somewhat
non-sinusoidal. This hampers accurate analysis of the data to obtain
engineering insight. The goal of the work presented in this paper is to
develop and compare automated evolutionary computational
techniques to filter test noise in the collection of data for the HMA
complex modulus test. The results showed that the Covariance
Matrix Adaptation-Evolutionary Strategy (CMA-ES) approach is
computationally efficient for filtering data obtained from the HMA
complex modulus test.
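The evolutionary-filtering idea can be illustrated with a much simpler (1+1) evolution strategy (not CMA-ES itself) fitting a clean sinusoid A·sin(ωt+φ) to noisy modulus-test data; the model, starting point and all settings are illustrative assumptions:

```python
import math
import random

def sse_sine(params, t, y):
    # Sum-of-squared-errors of the model A*sin(w*t + p) against data y.
    A, w, p = params
    return sum((yi - A * math.sin(w * ti + p)) ** 2 for ti, yi in zip(t, y))

def es_fit_sine(t, y, iters=2000, seed=0):
    # (1+1)-evolution strategy: mutate the best-so-far parameter vector
    # with Gaussian noise and keep the child only if it improves the fit.
    rng = random.Random(seed)
    best = [1.0, 1.0, 0.0]
    best_err = sse_sine(best, t, y)
    step = 0.5
    for _ in range(iters):
        cand = [v + rng.gauss(0.0, step) for v in best]
        err = sse_sine(cand, t, y)
        if err < best_err:
            best, best_err = cand, err
        else:
            step *= 0.999   # shrink the mutation step after a failure
    return best, best_err
```

CMA-ES replaces the single isotropic step size with a full covariance matrix adapted over the run, which is what makes it efficient on the ill-conditioned fits reported above; the selection-by-improvement loop is the shared skeleton.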
Abstract: Image convolution similar to the receptive fields
found in mammalian visual pathways has long been used in
conventional image processing in the form of Gabor masks.
However, no VLSI implementation of parallel, multi-layered pulsed
processing has been brought forward which would emulate this
property. We present a technical realization of such a pulsed image
processing scheme. The discussed IC also serves as a general testbed
for VLSI-based pulsed information processing, which is of interest
especially with regard to the robustness of representing an analog
signal in the phase or duration of a pulsed, quasi-digital signal, as
well as the possibility of direct digital manipulation of such an
analog signal. The network connectivity and processing properties
are reconfigurable so as to allow adaptation to various processing
tasks.
Abstract: A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest, which are less contaminated by noise, can be extracted; the filtered signal is thus more reliable to analyse in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A larger scattering space and a higher value of Z∞ indicate that the signal is heavily corrupted by noise. This method can be utilised as a proactive tool for evaluating the noise content in a signal. Evaluating, and especially eliminating, noise content is very important for machining-operation fault diagnosis. The Z-notch filtering technique was reliable and efficient in extracting noise components from the measured machining signal. Even though the measured signal was exposed to severe noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interruptions that could alter the original signal features and consequently deteriorate the useful sensory information can be eliminated.
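A minimal sketch of one common notch form, a second-order IIR section with zeros on the unit circle at the interfering frequency (the specific Z-notch design in the paper may differ; the pole radius r sets the notch bandwidth and is illustrative):

```python
import math

def notch_filter(x, f0, fs, r=0.95):
    # Second-order IIR notch at f0 Hz for sampling rate fs Hz.
    # Zeros at e^{+-j*w0} null the tone; poles at r*e^{+-j*w0} keep the
    # rest of the band close to unity gain.
    w0 = 2.0 * math.pi * f0 / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0, -2.0 * r * math.cos(w0), r * r]
    y = []
    x1 = x2 = y1 = y2 = 0.0     # filter state (previous inputs/outputs)
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y
```

A sinusoid at exactly f0 is driven to zero in steady state, while frequencies away from the notch pass with little change.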
Abstract: This paper presents a performance comparison of three estimation techniques used for peak load forecasting in power systems. The three optimal estimation techniques are genetic algorithms (GA), least error squares (LS) and least absolute value filtering (LAVF). The problem is formulated as an estimation problem, and different forecasting models are considered. Actual recorded data is used to perform the study, and the performance of the three optimal estimation techniques is examined. The advantages of each algorithm are reported and discussed.
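For a simple linear trend model, the least-error-squares technique has the familiar closed form; a minimal sketch (the forecasting models compared in the paper may include more terms than a straight line):

```python
def ls_fit_linear(t, y):
    # Closed-form least-squares fit of the trend model y = a + b*t,
    # obtained from the normal equations.
    n = len(t)
    st, sy = sum(t), sum(y)
    stt = sum(v * v for v in t)
    sty = sum(a * b for a, b in zip(t, y))
    b = (n * sty - st * sy) / (n * stt - st * st)
    a = (sy - b * st) / n
    return a, b
```

LAVF instead minimizes the sum of absolute residuals (making it more robust to outliers in the recorded load data), and GA searches the parameter space stochastically; all three estimate the same model coefficients.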
Abstract: We study a new technique for optimal data compression
subject to conditions of causality and different types of memory. The
technique is based on the assumption that some information about
compressed data can be obtained from a solution of the associated
problem without constraints of causality and memory. This allows
us to consider two separate problems, related to compression and
decompression, subject to those constraints. Their solutions are
given, and an analysis of the associated errors is provided.
Abstract: With the rapid growth in business size, today's businesses orient towards electronic technologies. Amazon.com and e-bay.com are some of the major stakeholders in this regard. Unfortunately, the enormous size and hugely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from seemingly impossible-to-search unstructured data on the Internet. Application of web content mining can be very encouraging in the areas of customer relations modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on the customer as well as the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, using a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produced against a particular search.
Abstract: A new concept of a two-dimensional (2D) image
processing implementation for an auto-guiding system is presented in
this paper. It is dedicated to astrophotography and operates with
astronomy CCD guide cameras or with self-guided dual-detector
CCD cameras and ST4-compatible equatorial mounts. The idea was
verified with a MATLAB model, which was used to test all
procedures and data conversions. Next, the circuit prototype was
implemented in an Altera MAX II CPLD device and tested on images
of real astronomical objects. The digital processing speed of the
CPLD prototype board was sufficient for correct equatorial mount
guiding in a real-time system.
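One core 2D operation in such an auto-guider is locating the guide star on the detector; the intensity-weighted centroid is the usual sketch (illustrative only, not necessarily the algorithm implemented in the CPLD):

```python
def centroid(img):
    # Intensity-weighted centroid of a 2-D pixel array: the sub-pixel
    # (x, y) position of the guide star, whose frame-to-frame drift
    # drives the mount-correction pulses.
    total = sx = sy = 0.0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            total += v
            sx += c * v   # column index weighted by intensity
            sy += r * v   # row index weighted by intensity
    return sx / total, sy / total
```

In practice a background threshold is subtracted first so that sky noise does not bias the centroid.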
Abstract: The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference Radial Basis Function networks: RBF networks that use all instances of the training set as center points (RBF-ALL), and Probabilistic Neural Networks (PNN). The former achieves high classification accuracy and the latter requires smaller training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require smaller training time than RBF-ALL and achieve higher classification accuracy than PNN; thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
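A minimal sketch of one of the noise-filtering reducers mentioned above, Wilson's Edited Nearest Neighbour (ENN); the value of k and the toy data in the usage are illustrative:

```python
def enn_filter(X, y, k=3):
    # Wilson's ENN: keep an instance only if a strict majority of its
    # k nearest neighbours (excluding itself) shares its own label.
    # Noisy instances surrounded by the other class are discarded,
    # leaving cleaner candidates for RBF center points.
    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(xi, xj)), yj)
            for j, (xj, yj) in enumerate(zip(X, y)) if j != i)
        labels = [lab for _, lab in dists[:k]]
        if labels.count(yi) * 2 > len(labels):   # strict majority vote
            keep.append(i)
    return keep
```

On a toy 1-D set with a single mislabeled point inside the opposite cluster, ENN drops exactly that point and keeps the rest as center candidates.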