Abstract: This paper introduces a new approach for the performance
analysis of adaptive filters with an error saturation nonlinearity
in the presence of impulsive noise. The performance analysis of
adaptive filters includes both transient analysis, which shows how
fast a filter learns, and steady-state analysis, which shows how
well a filter learns. Recursive expressions for the mean-square
deviation (MSD) and the excess mean-square error (EMSE) are derived
from weighted energy conservation arguments, and these provide the
transient behavior of the adaptive algorithm. The steady-state
analysis is carried out for correlated input regressor data, so
this approach leads to new performance results without restricting
the input regressor data to be white.
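As a rough illustration of the kind of algorithm being analyzed (not the paper's derivation), the sketch below runs an LMS-type filter whose error is saturated by clipping before the update, in the presence of occasional impulsive noise, and tracks the MSD empirically; the filter length, step size, noise model, and saturation level are all made-up choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu, e_max = 5000, 8, 0.01, 1.0
w_opt = rng.standard_normal(M)        # unknown system to identify (made up)
w = np.zeros(M)
msd = np.empty(N)

for i in range(N):
    x = rng.standard_normal(M)        # white regressor, for simplicity
    v = 0.1 * rng.standard_normal()   # background noise
    if rng.random() < 0.01:           # occasional impulsive disturbance
        v += 10.0 * rng.standard_normal()
    e = (w_opt @ x + v) - w @ x       # a-priori estimation error
    w = w + mu * np.clip(e, -e_max, e_max) * x   # saturated-error update
    msd[i] = np.sum((w_opt - w) ** 2)            # mean-square deviation

print(msd[0], msd[-1])                # MSD shrinks as the filter learns
```

Clipping the error bounds the size of any single update, which is what limits the damage an impulse can do to the weight estimate.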
Abstract: Spectrum is a scarce commodity, and the spectrum scarcity faced by wireless-based service providers has led to high congestion levels. Because all networks share a common pool of channels, exhausting the available channels forces networks to block services. Researchers have found that cognitive radio (CR) technology may resolve this spectrum scarcity. A CR is a self-configuring entity in a wireless network that senses its environment, tracks changes, and frequently exchanges information with its network. However, a cognitive radio network (CRN) faces challenges, and conditions become worse while tracking changes, i.e., when reallocating another under-utilized channel as a primary network user arrives. In this paper, a channel (resource) reallocation technique for CRNs based on a DNA-inspired computing algorithm is proposed.
Abstract: Mel Frequency Cepstral Coefficient (MFCC) features
are widely used as acoustic features for speech recognition as well
as speaker recognition. In the MFCC feature representation, the Mel
frequency scale is used to get a high resolution in the low-frequency
region and a low resolution in the high-frequency region. This kind
of processing is good for obtaining stable phonetic information, but
not suitable for speaker features, which are located in the
high-frequency regions. The speaker-specific information, which is
non-uniformly distributed in the high frequencies, is equally
important for speaker recognition. Based on this fact, we propose an
admissible wavelet-packet-based filter structure for speaker
identification. The multiresolution capabilities of the wavelet
packet transform are used to derive the new features. The proposed
scheme differs from previous wavelet-based works mainly in the design
of the filter structure. Unlike the others, the proposed filter
structure does not follow the Mel scale. Closed-set speaker
identification experiments performed on the TIMIT database show
improved identification performance compared with other commonly
used Mel-scale-based filter structures using wavelets.
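For concreteness, the Mel warping the abstract contrasts against is commonly computed with the HTK-style formula m = 2595 log10(1 + f/700); the short sketch below shows how equal steps on the Mel scale map to increasingly wide frequency bands, i.e., the coarse high-frequency resolution the proposed filter structure avoids.

```python
import math

def hz_to_mel(f_hz):
    """HTK-style Mel mapping: m = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Equal 500-Mel steps cover narrow bands at low frequencies and
# progressively wider bands at high frequencies:
edges = [mel_to_hz(m) for m in (0.0, 500.0, 1000.0, 1500.0, 2000.0)]
widths = [b - a for a, b in zip(edges, edges[1:])]
print([round(w) for w in widths])   # band widths grow with frequency
```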
Abstract: In recent years, research in wireless sensor
networks has increased steadily, and many studies have focused on
reducing the energy consumption of sensor nodes to extend their
lifetimes. In this paper, the issue of energy consumption is
investigated and two adaptive mechanisms are proposed to extend the
network lifetime. This study uses a high-energy-first scheme to
determine the cluster heads for data transmission. Thus, energy
consumption in each cluster is balanced and the network lifetime can
be extended. In addition, this study uses cluster merging and
dynamic routing mechanisms to further reduce energy consumption
during data transmission. The simulation results show that the
proposed method can effectively extend the lifetime of a wireless
sensor network, and that it is suitable for different base station
locations.
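A high-energy-first cluster-head election can be sketched as below; the cluster layout and residual-energy values are made up for illustration, and the paper's cluster-merging and dynamic-routing mechanisms are not reproduced.

```python
# Hypothetical clusters: node -> residual energy (J). In each round,
# the node with the most residual energy in a cluster becomes its head,
# spreading the head role (and its energy cost) across the nodes.
clusters = {
    "A": {"n1": 0.9, "n2": 0.4, "n3": 0.7},
    "B": {"n4": 0.2, "n5": 0.8},
}

def elect_heads(clusters):
    """High-energy-first election: pick the richest node per cluster."""
    return {cid: max(nodes, key=nodes.get) for cid, nodes in clusters.items()}

heads = elect_heads(clusters)
print(heads)   # {'A': 'n1', 'B': 'n5'}
```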
Abstract: In this work, we present a comparison between two
techniques of image compression. In the first case, the image is
divided into blocks, which are collected according to a zig-zag
scan. In the second one, we apply the Fast Cosine Transform to the
image, and then the transformed image is divided into blocks, which
are again collected according to a zig-zag scan. Afterwards, in both
cases, the Karhunen-Loève transform is applied to the resulting
blocks. In addition, we present three new metrics based on
eigenvalues for a better comparative evaluation of the techniques.
Simulations show that the combined version is the best, with lower
Mean Absolute Error (MAE) and Mean Squared Error (MSE), a higher
Peak Signal to Noise Ratio (PSNR), and better image quality.
Finally, the new technique was far superior to JPEG and JPEG2000.
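The zig-zag scan mentioned above is the standard JPEG-style traversal of a block along its anti-diagonals, alternating direction; a minimal sketch (the block contents here are just the indices 0..15):

```python
def zigzag_indices(n):
    """Zig-zag scan order over an n x n block: walk the anti-diagonals,
    alternating direction, as in JPEG-style coefficient ordering."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

block = [[r * 4 + c for c in range(4)] for r in range(4)]   # values 0..15
scanned = [block[i][j] for i, j in zigzag_indices(4)]
print(scanned)   # [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
```

Sorting by (diagonal index, alternating within-diagonal key) reproduces the usual up-right/down-left alternation without explicit direction bookkeeping.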
Abstract: Electromagnetic interference (EMI) is one of the
serious problems in most electrical and electronic appliances,
including fluorescent lamps. The electronic ballast used to regulate
the power flow through the lamp is the major cause of EMI. The
interference is due to the high-frequency switching operation of
the ballast. Formerly, some EMI mitigation techniques were in
practice, but they were not satisfactory because of the hardware
complexity of the circuit design, increased parasitic components,
increased power consumption, and so on. Most researchers have
focused only on EMI mitigation without considering other
constraints such as cost, effective operation of the equipment, etc.
In this paper, we propose a technique for EMI mitigation in
fluorescent lamps that integrates Frequency Modulation and
Evolutionary Programming. With the Frequency Modulation technique,
the switching at a single central frequency is spread over a range
of frequencies, so the power is distributed throughout that range,
leading to EMI mitigation. However, in order to meet the operating
frequency of the ballast and the operating power of the fluorescent
lamps, an optimal modulation index is necessary for Frequency
Modulation. The optimal modulation index is determined using
Evolutionary Programming. Thereby, the proposed technique mitigates
the EMI to a satisfactory level without disturbing the operation of
the fluorescent lamp.
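The abstract does not give the actual objective function, so the sketch below only illustrates the evolutionary-programming loop itself (mutate each candidate with Gaussian noise, then keep the best), minimizing a made-up stand-in cost over a scalar modulation index:

```python
import random

random.seed(1)

def cost(m):
    # Hypothetical stand-in for the real objective: imagine an EMI peak
    # level penalized when m drifts from the lamp operating constraints.
    return (m - 0.35) ** 2 + 0.1 * abs(m)

# Classical (mu + mu) evolutionary programming over the modulation index m.
pop = [random.uniform(0.0, 1.0) for _ in range(10)]
for _ in range(100):
    offspring = [max(0.0, min(1.0, m + random.gauss(0.0, 0.05))) for m in pop]
    pop = sorted(pop + offspring, key=cost)[:10]   # survival of the fittest

best = pop[0]
print(round(best, 3))   # converges near the cost minimum
```

Unlike a genetic algorithm, plain evolutionary programming uses mutation and selection only, with no crossover, which is why the loop above is so short.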
Abstract: Link reliability and transmitted power are two important design constraints in wireless network design. Error control coding (ECC) is a classic approach used to increase link reliability and to lower the required transmitted power. It provides coding gain, resulting in transmitter energy savings at the cost of added decoder power consumption. However, the choice of ECC is very critical in the case of wireless sensor networks (WSNs). Since WSNs are energy-constrained in nature, both the BER and the power consumption have to be taken into account. This paper develops a step-by-step approach to finding suitable error control codes for WSNs. Several simulations are carried out considering different error control codes, and the results show that RS(31,21) fits both the BER and the power-consumption criteria.
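The coding-gain trade-off can be illustrated with the textbook uncoded-BPSK BER over AWGN, Pb = Q(sqrt(2 Eb/N0)); here the coding gain is just an abstract dB shift of the effective Eb/N0 (the paper's RS(31,21) analysis is not reproduced):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk(ebn0_db, coding_gain_db=0.0):
    """BPSK BER over AWGN; a coding gain shifts the effective Eb/N0."""
    ebn0 = 10.0 ** ((ebn0_db + coding_gain_db) / 10.0)
    return q_func(math.sqrt(2.0 * ebn0))

# A 2 dB coding gain lets a 5 dB transmit budget match an uncoded 7 dB one,
# i.e., the same BER at lower transmit power (minus the decoder's cost):
print(ber_bpsk(7.0), ber_bpsk(5.0, coding_gain_db=2.0))
```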
Abstract: In this paper, we have developed a method to
compute the fractal dimension (FD) of discrete-time signals, in the
time domain, by modifying the box-counting method. The size of the
box depends on the sampling frequency of the signal. The number of
boxes required to completely cover the signal is obtained at
multiple time resolutions. The time resolutions are made coarse by
decimating the signal. The log-log plot of the total number of
boxes required to cover the curve versus the size of the box used
appears to be a straight line, whose slope is taken as an estimate
of the FD of the signal. Results are provided to demonstrate the
performance of the proposed method using parametric fractal
signals. The estimation accuracy of the method is compared with
that of the Katz, Sevcik, and Higuchi methods. In addition, some
properties of the FD are discussed.
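The paper's modified method is not fully specified in the abstract; the sketch below shows a plain box-counting estimate in the same spirit (segment the signal at each scale, count amplitude boxes, fit the log-log slope). The box sizes and the base box height are arbitrary choices, not the paper's.

```python
import numpy as np

def box_count_fd(y, sizes=(1, 2, 4, 8, 16)):
    """Simplified 1-D box-counting FD: at scale s, split the curve into
    segments of s samples and count amplitude boxes of side s*dy needed
    to cover each segment; the log-log slope estimates the FD."""
    y = np.asarray(y, dtype=float)
    dy = (y.max() - y.min()) / len(y) or 1.0     # base box height
    counts = []
    for s in sizes:
        n = 0
        for k in range(0, len(y) - s, s):
            seg = y[k:k + s + 1]
            n += max(1, int(np.ceil((seg.max() - seg.min()) / (s * dy))))
        counts.append(n)
    # Slope of log N versus log(1/s) is the FD estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

fd_line = box_count_fd(np.linspace(0.0, 1.0, 1024))
fd_noise = box_count_fd(np.random.default_rng(0).standard_normal(1024))
print(round(fd_line, 2), round(fd_noise, 2))   # a line stays near FD 1
```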
Abstract: The rapid growth of e-Commerce services has been
clearly observed in the past decade. However, the methods used to
verify authenticated users still widely depend on numeric
approaches. The search for other verification methods suitable for
online e-Commerce is therefore an interesting issue. In this paper,
a new online signature-verification method using an angular
transformation is presented. Delay shifts existing in online
signatures are estimated by an estimation method relying on the
angle representation. In the proposed signature-verification
algorithm, all components of the input signature are extracted by
considering the discontinuous break points in the stream of angular
values. Then the estimated delay shift is captured by comparison
with the selected reference signature, and the matching error can
be computed as the main feature used in the verification process.
The threshold offsets are calculated from the two types of error
characteristic of the signature verification problem, the False
Rejection Rate (FRR) and the False Acceptance Rate (FAR). The
levels of these two error rates depend on the chosen decision
threshold, whose value is set so as to realize the Equal Error Rate
(EER; FAR = FRR). The experimental results show that, through a
simple program employed on the Internet to demonstrate e-Commerce
services, the proposed method provides 95.39% correct verifications
and is 7% better than a DP-matching-based signature-verification
method. In addition, signature verification with extracted
components provides more reliable results than making a decision on
the whole signature.
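The EER selection described above can be sketched as follows; the FAR/FRR values are invented for illustration (a real system would measure them over genuine and forged signatures at each threshold):

```python
# Hypothetical FAR/FRR measured over a sweep of decision thresholds;
# the EER is read off where the two error curves cross.
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
far = [0.40, 0.25, 0.15, 0.08, 0.04, 0.01]   # false acceptances fall
frr = [0.01, 0.03, 0.06, 0.10, 0.18, 0.30]   # false rejections rise

def equal_error_rate(thresholds, far, frr):
    """Pick the threshold where |FAR - FRR| is smallest; report the
    mean of the two rates there as the EER."""
    i = min(range(len(thresholds)), key=lambda k: abs(far[k] - frr[k]))
    return thresholds[i], (far[i] + frr[i]) / 2.0

t, eer = equal_error_rate(thresholds, far, frr)
print(t, eer)   # crossing nearest threshold 0.4, EER 0.09
```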
Abstract: Automatic reading of handwritten cheques is a
computationally complex process, and it plays an important role in
financial risk management. Machine vision and learning provide a
viable solution to this problem. Research effort has mostly been
focused on recognizing diverse pitches of cheques and demand drafts
with an identical outline. However, most of these methods employ
template matching to localize the pitches, and such schemes could
potentially fail when applied to the different types of outline
maintained by the banks. In this paper, the so-called outline
problem is resolved by a cheque information tree (CIT), which
generalizes the localizing method to extract active regions of
entities. In addition, a weight-based density plot (WBDP) is used
to isolate text entities and read complete pitches. Recognition is
based on texture features using neural classifiers. The legal
amount is subsequently recognized by both texture and perceptual
features. A post-processing phase is invoked to detect incorrect
readings by a Type-2 grammar using a Turing machine. The
performance of the proposed system was evaluated using cheques and
demand drafts of 22 different banks. The test data consist of a
collection of 1540 leaves obtained from 10 different account
holders from each bank. Results show that this approach can easily
be deployed without significant design amendments.
Abstract: IEEE 802.11e, an enhanced version of the 802.11 WLAN standards, incorporates Quality of Service (QoS), which makes it a better choice for multimedia and real-time applications. In this paper, we study various aspects of the 802.11e standard. Further, the analysis results for this standard are compared with those for the legacy 802.11 standard. Simulation results show that IEEE 802.11e outperforms legacy IEEE 802.11 in terms of quality of service due to its flow-differentiated channel allocation and better queue-management architecture. We also propose a method to improve the unfair allocation of bandwidth between the downlink and uplink channels by varying the medium access priority level.
Abstract: Wireless LAN (WLAN) access in public hotspot areas
has become popular in recent years. Since more and more multimedia
information is available on the Internet, there is an increasing
demand for accessing multimedia information through WLAN hotspots.
Currently, the bandwidth offered by an IEEE 802.11 WLAN cannot
support many simultaneous real-time video accesses. A possible way
to increase the offered bandwidth in a hotspot is the use of
multiple access points (APs). However, a mobile station usually
connects to the WLAN AP with the strongest received signal strength
indicator (RSSI), so the total consumed bandwidth cannot be fairly
allocated among the APs. In this paper, we propose an effective
load-balancing scheme supported by the IAPP and SNMP in the APs.
The proposed scheme is an open solution and doesn't need any
changes to either the wireless stations or the APs. This makes load
balancing possible in WLAN hotspots, where a variety of
heterogeneous mobile devices are employed.
Abstract: In this paper, we present a method for edge
segmentation of satellite images based on the 2-D Phase Congruency
(PC) model. The proposed approach is composed of two steps: first,
the contextual non-linear smoothing algorithm (CNLS) is used to
smooth the input images; then, a 2-D stretched Gabor filter (S-G
filter) based on the proposed angular variation is developed in
order to avoid the multiple responses of the previous work. An
assessment of the performance of our proposed method is provided in
terms of the accuracy of satellite image edge segmentation. The
proposed method is compared with other known approaches.
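For reference, a basic 2-D Gabor kernel (a complex sinusoid under an oriented Gaussian envelope) can be built as below; the "stretch" parameter merely elongates the envelope along the filter orientation, and the size, scale, and wavelength values are arbitrary. The paper's specific S-G filter and angular variation are not reproduced here.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, stretch=1.0):
    """2-D Gabor kernel: complex sinusoid of wavelength `lam` under a
    Gaussian envelope; `stretch` elongates the envelope along `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 / (stretch * sigma) ** 2
                   + yr ** 2 / sigma ** 2) / 2.0)
    return env * np.exp(1j * 2.0 * np.pi * xr / lam)

k = gabor_kernel(theta=np.pi / 4, stretch=2.0)
print(k.shape)   # (15, 15), peak response at the kernel center
```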