Abstract: Power cables are vulnerable to failure due to aging or
defects that occur with the passage of time under continuous
operation and loading stresses. Partial discharge (PD) detection and
characterization provide information on the location, nature, form,
and extent of the degradation. As a result, PD monitoring has become
an important part of condition-based maintenance (CBM) programs among
power utilities. Online PD localization of defect sources in power
cable systems is possible using the time-of-flight method.
The information regarding the time difference between the main and
reflected pulses and cable length can help in locating the partial
discharge source along the cable length. However, if the length of
the cable is not known, or the defect source is located at the extreme
ends or in the middle of the cable, then double-ended measurement is
required to locate the PD source. The use of multiple sensors can also
help discriminate cable PD from local or external PD. This paper
presents the experience and results from
online partial discharge measurements conducted in the laboratory
and the challenges in partial discharge source localization.
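The single-ended time-of-flight calculation described above can be sketched as follows; the propagation velocity default is an assumption (a typical figure for XLPE cable), not a value taken from the paper.

```python
def pd_location(cable_length_m, delta_t_s, velocity_m_per_s=1.6e8):
    """Estimate the PD site's distance from the measurement end.

    The direct pulse arrives first; the pulse reflected from the far
    end travels an extra 2*(L - x), so delta_t = 2*(L - x)/v and
    x = L - v*delta_t/2.  The default velocity (~160 m/us) is only
    an assumed typical value for XLPE cable.
    """
    x = cable_length_m - velocity_m_per_s * delta_t_s / 2.0
    if not 0.0 <= x <= cable_length_m:
        raise ValueError("delta_t inconsistent with this cable length")
    return x

# e.g. pd_location(1000.0, 8.75e-6) gives ~300 m from the near end
```

If the cable length is unknown, this single formula no longer pins down `x`, which is why the abstract falls back to double-ended measurement in that case.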
Abstract: Eye localization is necessary for face recognition and
related application areas. Most eye localization algorithms reported
so far still need improvement in precision and computational time for
successful application. In this paper, we propose an eye
location method based on multi-scale Gabor feature vectors, which is
more robust with respect to initial points. The eye localization based
on Gabor feature vectors first constructs an Eye Model Bunch for each
eye (left or right), consisting of n Gabor jets and the average eye
coordinates obtained from n model face images; it then localizes the
eyes in an incoming face image by exploiting the fact that the true
eye coordinates are most likely to lie very close to the position
whose Gabor jet best matches a jet in the Eye Model Bunch. Similar
ideas have already been proposed, for example in Elastic Bunch Graph
Matching (EBGM). However, the method used in EBGM is known not to be
robust with respect to initial values and may need an extensive search
range to achieve the required performance, which in turn causes a much
larger computational burden. In this paper, we propose a multi-scale
approach with only a small increase in computational burden: one first
localizes the eyes, based on Gabor feature vectors, in a coarse face
image obtained by downsampling the original face image, and then
localizes them in the original-resolution face image by using the eye
coordinates found in the coarse-scale image as initial points.
Several experiments and comparisons with other eye localization
methods reported in the literature show the efficiency of the
proposed method.
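The coarse-to-fine structure described above can be sketched as follows. Gabor-jet extraction is omitted here; a plain sum-of-squared-differences patch similarity stands in for the jet similarity, so everything beyond the two-stage search structure is an illustrative assumption.

```python
import numpy as np

def downsample(img, factor):
    # Block-average downsampling; a stand-in for a proper image pyramid.
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_match(img, patch, center, radius):
    # Exhaustive search in a (2*radius+1)^2 window around `center` for
    # the top-left position minimizing SSD (a stand-in for the paper's
    # Gabor-jet similarity).
    ph, pw = patch.shape
    best, best_pos = np.inf, center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = center[0] + dy, center[1] + dx
            if y < 0 or x < 0 or y + ph > img.shape[0] or x + pw > img.shape[1]:
                continue
            ssd = np.sum((img[y:y + ph, x:x + pw] - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

def coarse_to_fine(img, patch, init, factor=4, coarse_radius=8, fine_radius=3):
    # Stage 1: localize in the downsampled image (cheap, wide search).
    cy, cx = best_match(downsample(img, factor), downsample(patch, factor),
                        (init[0] // factor, init[1] // factor), coarse_radius)
    # Stage 2: refine at full resolution around the up-scaled coarse hit.
    return best_match(img, patch, (cy * factor, cx * factor), fine_radius)
```

The point of the two stages is that a wide search radius is paid only at the coarse scale, so the full-resolution search can stay small even with a poor initial point.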
Abstract: This paper deals with the localization of wideband sources. We develop a new approach for estimating wideband source parameters. The method is based on higher-order statistics of the recorded data, in order to eliminate the Gaussian components from the signals received on the various hydrophones; in fact, the sea-bottom noise is regarded as Gaussian. Thanks to the coherent signal subspace algorithm based on the cumulant matrix of the received data instead of the cross-spectral matrix, the correlated wideband sources are accurately located even in a very noisy environment. We demonstrate the performance of the proposed algorithm on real data recorded during an underwater acoustics experiment.
Abstract: In this paper, we present a novel approach to a location
system for indoor environments. The key idea of our work is
accurate distance estimation with a Cricket-based location system
using the A* algorithm. We also use a magnetic sensor for detecting
obstacles in indoor environments. Finally, we suggest how this system
can be used in various applications such as asset tracking and
monitoring.
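The A* algorithm named above can be sketched on a grid map; treating magnetometer-flagged cells as obstacles is our illustrative assumption, as the abstract does not detail the map representation.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle, e.g. a cell
    flagged by the magnetic sensor).  Manhattan distance is the
    admissible heuristic.  Returns the path as a list of cells, or
    None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g_cost, came_from, closed = {start: 0}, {}, set()
    open_set = [(h(start), start)]
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_cost[cur] + 1  # unit step cost between cells
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

In a real deployment the unit step cost would be replaced by the Cricket range estimates between waypoints.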
Abstract: A nonlinear finite element method with eight-noded
isoparametric quadrilateral elements is used for prediction of the
load-deformation behavior, including the bearing capacity, of
foundations. A modified generalized plasticity model with a
non-associated flow rule is applied for the analysis of the
soil-footing system, and the von Mises and Tresca criteria are used
for simulation of soil behavior. The modified generalized plasticity
model is able to simulate load-deformation response, including
softening behavior. Localization phenomena are investigated using
different meshes, but were not observed in the examples. Predictions
by the modified generalized plasticity model show good agreement with
laboratory data and theoretical predictions, in comparison with the
other models.
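The two classical yield criteria mentioned can be stated compactly in principal stresses; a minimal sketch, with sigma_y the uniaxial yield stress (f <= 0 means the stress state is elastic):

```python
import math

def von_mises(s1, s2, s3, sigma_y):
    """von Mises yield function:
    f = sqrt(((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2) / 2) - sigma_y."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0) - sigma_y

def tresca(s1, s2, s3, sigma_y):
    """Tresca (maximum shear stress) yield function:
    f = max|si - sj| - sigma_y."""
    return max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1)) - sigma_y

# Uniaxial tension at s1 = sigma_y: both criteria are exactly at yield.
print(von_mises(250.0, 0.0, 0.0, 250.0))  # -> 0.0
print(tresca(250.0, 0.0, 0.0, 250.0))     # -> 0.0
```

The two criteria coincide in uniaxial stress but differ in pure shear, which is one reason a study would simulate the soil with both.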
Abstract: Many environment-specific methods and systems for robot navigation exist. However, vast strides in the evolution of navigation technologies and system techniques create the need for a general unified framework that is scalable, modular, and dynamic. In this paper, a Unified Framework for a Robust Conflict-free Robot Navigation System that can be used in structured or unstructured, indoor or outdoor environments is proposed. The fundamental design aspects and implementation issues encountered during the development of the module are discussed. The results of the deployment of three major peripheral modules of the framework, namely the GSM-based communication module, the GIS module, and the GPS module, are reported in this paper.
Abstract: In the present work, an attempt is made to understand
electromagnetic field confinement in a subwavelength waveguide
structure using concepts of quantum mechanics. The evanescent field
in the waveguide is viewed as the inability of the photon to be
confined in the waveguide core, and an uncertainty of position is
assigned to it. The momentum uncertainty is calculated from the
position uncertainty. A Schrödinger wave equation for the photon is
written by incorporating the position-momentum uncertainty. The
equation is solved and the field distribution in the waveguide is
obtained. The field distribution and power confinement are compared
with conventional waveguide theory and found to be in good agreement.
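The chain of reasoning can be summarized as follows; the effective-mass form of the photon equation is our assumption for illustration, as the abstract does not give the exact formulation:

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta p \;\approx\; \frac{\hbar}{2\,\Delta x},
\qquad
-\frac{\hbar^{2}}{2 m_{\mathrm{eff}}}\,\frac{d^{2}\psi}{dx^{2}}
  + V(x)\,\psi \;=\; E\,\psi ,
```

with $V(x)$ a finite potential well defined by the core-cladding index step, so that $\psi$ decays exponentially in the cladding, reproducing the evanescent field.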
Abstract: Array signal processing involves signal enumeration and source localization. It is centered on the ability to fuse the temporal and spatial information captured by sampling signals, emitted from a number of sources, at the sensors of an array, in order to carry out a specific estimation task: estimating source characteristics (mainly localization of the sources) and/or array characteristics (mainly array geometry). Beamforming is a general signal processing technique used to control the directionality of the reception or transmission of a signal; using beamforming, the majority of the received signal energy can be steered toward a chosen direction. Multiple signal classification (MUSIC) is a highly popular eigenstructure-based, high-resolution method for estimating the direction of arrival (DOA). This paper examines the effect of missing sensors on DOA estimation. The accuracy of MUSIC-based DOA estimation is degraded significantly both by missing sensors among the receiving array elements and by unequal channel gains and phase errors of the receiver.
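The MUSIC estimator named above can be sketched for a uniform linear array; the half-wavelength spacing and the 1-degree scan grid are illustrative assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 181)):
    """MUSIC pseudospectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex baseband snapshots.
    d: element spacing in wavelengths (0.5 = half-wavelength, assumed).
    Returns (angles, P); peaks of P indicate directions of arrival.
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, : n_sensors - n_sources]    # noise-subspace eigenvectors
    k = np.arange(n_sensors)
    P = []
    for th in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * k * np.sin(th))   # steering vector
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.asarray(P)
```

Deleting rows of `X` and the corresponding entries of the steering vector simulates missing sensors, which is one way to reproduce the degradation the paper studies.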
Abstract: This study introduces a new method for detecting,
sorting, and localizing spikes from multiunit EEG recordings. The
method combines the wavelet transform, which localizes distinctive
spike features, with the Super-Paramagnetic Clustering (SPC) algorithm,
which allows automatic classification of the data without assumptions
such as low variance or Gaussian distributions. Moreover, the method
is capable of setting amplitude thresholds for spike detection. The
method is evaluated on several real EEG data sets, in which the
spikes are detected, clustered, and their occurrence times determined.
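The automatic amplitude thresholding mentioned above is commonly done with a robust noise estimate; a minimal sketch, assuming the median-based estimator (the constant 0.6745 is the median of |x| for zero-mean, unit-variance Gaussian noise) and a 1 ms refractory period:

```python
import numpy as np

def detect_spikes(x, fs, thr_factor=4.0, refractory_ms=1.0):
    """Amplitude-threshold spike detection on a filtered trace.

    The noise level is estimated robustly as sigma_n =
    median(|x|)/0.6745, so large spikes barely bias the threshold.
    Returns (spike sample indices, threshold used).
    """
    sigma = np.median(np.abs(x)) / 0.6745
    thr = thr_factor * sigma
    above = np.flatnonzero(np.abs(x) > thr)
    refractory = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -np.inf
    for i in above:
        if i - last > refractory:   # suppress re-triggers on one spike
            spikes.append(i)
            last = i
    return np.asarray(spikes), thr
```

In the full method described above, windows cut around these indices would then be wavelet-transformed and clustered with SPC.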
Abstract: In this paper, we present a recently implemented approach that allows dynamic systems to plan their actions, taking into account changes in environment perception, and to control their execution when uncertainty and incomplete knowledge are the major characteristics of the situated environment [1],[2],[3],[4]. The distributed control architecture has three modules, and the approach is related to hierarchical planning: the plan produced by the planner is further refined at the control layer, which in turn supervises its execution by a functional level. We propose a new intelligent distributed architecture constituted by multi-agent subsystems for sensing, for the interpretation and representation of the environment [9], for dynamic localization, and for action. We tested this distributed architecture with a dynamic system in a known environment. The autonomous task for the Rotor Mini Rotorcraft is described by primitive actions. The distributed control based on the multi-agent system is in charge of achieving each task in the best possible way, taking into account the context and sensory feedback.
Abstract: Automatic reading of handwritten cheques is a computationally
complex process and it plays an important role in financial
risk management. Machine vision and learning provide a viable
solution to this problem. Research effort has mostly been focused
on recognizing diverse pitches of cheques and demand drafts with an
identical outline. However, most of these methods employ template
matching to localize the pitches, and such schemes could potentially
fail when applied to the different outline types maintained by each
bank. In this paper, the so-called outline problem is resolved by
a cheque information tree (CIT), which generalizes the localizing
method to extract active-region-of-entities. In addition, the weight
based density plot (WBDP) is performed to isolate text entities and
read complete pitches. Recognition is based on texture features using
neural classifiers. Legal amount is subsequently recognized by both
texture and perceptual features. A post-processing phase is invoked
to detect incorrect readings by a Type-2 grammar using a Turing
machine. The performance of the proposed system was evaluated
using cheque and demand drafts of 22 different banks. The test data
consists of a collection of 1540 leaves obtained from 10 different
account holders from each bank. Results show that this approach
can easily be deployed without significant design amendments.
Abstract: Information about node locations is an important
criterion for many applications in Wireless Sensor Networks. In
hop-based range-free localization methods, anchors broadcast
localization messages carrying a hop-count value to the whole
network. Each node receives these messages, calculates its distance
to each anchor in hops, and then approximates its own position.
However, the estimated distances can introduce large errors and
degrade the localization precision. To solve this problem, this paper
proposes an algorithm in which each unknown node takes the nearest
anchor as a reference and selects the two other most accurate anchors
to estimate its location. Compared to the DV-Hop algorithm,
experimental results show that the proposed algorithm has a lower
average localization error and is more effective.
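The DV-Hop baseline referred to above can be sketched for a single unknown node; the choice of computing the average hop size at the nearest (reference) anchor and solving the ranges by linearized least squares follows the description above, while the data layout is our assumption.

```python
import numpy as np

def dv_hop_locate(anchors, anchor_hops, node_hops):
    """DV-Hop-style position estimate for one unknown node (a sketch).

    anchors:     (m, 2) anchor coordinates.
    anchor_hops: (m, m) hop counts between anchors.
    node_hops:   (m,) hop counts from each anchor to the node.
    """
    anchors = np.asarray(anchors, float)
    ref = int(np.argmin(node_hops))  # nearest anchor = reference
    # Average hop size at the reference anchor: total distance to the
    # other anchors divided by the total hop count to them.
    dists = np.linalg.norm(anchors - anchors[ref], axis=1)
    hop_size = dists.sum() / np.asarray(anchor_hops)[ref].sum()
    d = hop_size * np.asarray(node_hops, float)   # estimated ranges
    # Linearize |p - a_i|^2 = d_i^2 by subtracting the last equation.
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2)
         - d[:-1] ** 2 + d[-1] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

The proposed improvement would restrict the system to the reference anchor plus the two most accurate anchors instead of using all of them.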
Abstract: This paper describes new computer vision algorithms
that have been developed to track moving objects as part of a
long-term study into the design of (semi-)autonomous vehicles. We
present the results of a study to exploit variable kernels for tracking in
video sequences. The basis of our work is the mean shift
object-tracking algorithm; for a moving target, it is usual to define a
rectangular target window in an initial frame, and then process the data
within that window to separate the tracked object from the background
by the mean shift segmentation algorithm. Rather than use the
standard Epanechnikov kernel, we have used a kernel weighted by the
Chamfer distance transform to improve the accuracy of target
representation and localization, minimising the distance between the
two distributions in RGB color space using the Bhattacharyya
coefficient. Experimental results show the improved tracking
capability and versatility of the algorithm in comparison with results
using the standard kernel. These algorithms are incorporated as part of
a robot test-bed architecture which has been used to demonstrate their
effectiveness.
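The similarity measure named above, between the normalized target and candidate color histograms, is simply the Bhattacharyya coefficient:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms.

    Both are normalized to sum to 1; 1.0 means identical
    distributions.  The tracker's distance is sqrt(1 - coefficient),
    which mean shift minimises.
    """
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

In the scheme above, the RGB histograms themselves would be built with Chamfer-distance kernel weights rather than Epanechnikov weights; the coefficient is unchanged.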
Abstract: Iris localization is a very important step in
biometric identification systems. The identification process is
usually implemented in three stages: iris localization, feature
extraction, and finally pattern matching. The accuracy of iris
localization, as the first step, affects all subsequent stages,
which shows the importance of iris localization in an iris-based
biometric system. In this paper, we take Daugman's iris localization
method as a standard, propose a new method in this field, and then
analyze and compare their results on a standard set of iris images.
The proposed method is based on the detection of the circular edge
of the iris, improved by fuzzy circles and surface energy difference
contexts. Implementation of this method is easy and, compared to
other methods, it has rather high accuracy and speed. Test results
show that the accuracy of our proposed method is comparable to
Daugman's method, while its computation speed is 10 times faster.
Abstract: In this paper, a fragile watermarking scheme is proposed for the authentication of specified objects in color images. The color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. The T channel corresponds to the chrominance component of a color image and is therefore selected for embedding the watermark. The T channel is first divided into 2×2 non-overlapping blocks and the two LSBs are set to zero. The object to be authenticated is also divided into 2×2 non-overlapping blocks, and each block's intensity mean is computed, followed by eight-bit encoding. The generated watermark is then embedded into the LSBs of randomly selected 2×2 blocks of the T channel using a 2D torus automorphism. The selection of block size is paramount for exact localization and recovery of the work. The proposed scheme is blind, efficient, and secure, with the ability to detect and locate even minor tampering applied to the image, with full recovery of the original work. The quality of the watermarked media is quite high, both subjectively and objectively. The technique is suitable for the class of images stored in formats such as GIF, TIF, or bitmap.
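The block-position scrambling named above can be sketched with a standard 2D torus automorphism; the specific matrix [[1, 1], [k, k+1]] (determinant 1, hence a bijection on the grid) and the use of k and the iteration count as the key are our assumptions, since the abstract does not give the exact parameters.

```python
def torus_automorphism(pos, n, k=1, iterations=1):
    """Apply the 2D torus automorphism [[1, 1], [k, k+1]] (mod n)
    `iterations` times to a block position (x, y) on an n x n grid.

    det = (k+1) - k = 1, so the map is a bijection: every watermark
    block lands on a distinct, pseudo-randomly scattered embedding
    position, and the mapping is invertible for extraction.
    """
    x, y = pos
    for _ in range(iterations):
        x, y = (x + y) % n, (k * x + (k + 1) * y) % n
    return x, y
```

Because the map is bijective, tampering with one region of the image disturbs a predictable set of scattered blocks, which is what enables exact localization.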
Abstract: The localization of software products is essential for reaching the users of the international market. An important task in this is the translation of the user interface into local national languages. As graphical interfaces are usually optimized for the size of the texts in the original language, after translation certain user controls (e.g. text labels and buttons in dialogs) may grow in such a manner that they slip over each other. This not only causes an unpleasant appearance but also makes the use of the program more difficult (or even impossible), which implies that the arrangement of the controls must be corrected subsequently. The correction should preserve the original structure of the interface (e.g. the relation of logically coherent controls); furthermore, it is important to keep the nicely proportioned design: the formation of large empty areas should be avoided. This paper describes an algorithm that automatically rearranges the controls of a graphical user interface based on the principles above. The algorithm has been implemented and integrated into a translation support system, and produced results pleasing to the human eye in most test cases.
Abstract: Recently, content delivery services have grown rapidly
over the Internet. For ASPs (Application Service Providers) providing
content delivery services, P2P architecture is beneficial to reduce
outgoing traffic from content servers. On the other hand, ISPs are
suffering from the increase in P2P traffic. The P2P traffic is
unnecessarily redundant because the same content or the same
fractions of content are transferred through an inter-ISP link several
times. Subscriber ISPs have to pay a transit fee to upstream ISPs based
on the volume of inter-ISP traffic. In order to solve such problems,
several works have been done for the purpose of P2P traffic reduction.
However, these existing works cannot control the traffic volume of a
certain link. In order to meet such an ISP's operational requirement,
we propose a method to control the traffic volume of a link within a
preconfigured upper-bound value. We evaluated the proposed method by
conducting a simulation on a 1,000-user scale and confirmed that the
traffic volume could be kept below the upper bound for all evaluated
conditions. Moreover, our
method could control the traffic volume at 98.95% link usage against
the target value.
Abstract: Terminal localization for indoor Wireless Local Area
Networks (WLANs) is critical for the deployment of location-aware
computing inside of buildings. A major challenge is obtaining high
localization accuracy in the presence of fluctuations of the received signal
strength (RSS) measurements caused by multipath fading. This paper
focuses on reducing the effect of the distance-varying noise by spatial
filtering of the measured RSS. Two different survey point geometries
are tested with the noise reduction technique: survey points arranged
in sets of clusters and survey points uniformly distributed over the
network area. The results show that the location accuracy improves
by 16% when the filter is used and by 18% when the filter is applied
to a clustered survey set as opposed to a straight-line survey set.
The estimated locations are within 2 m of the true location, which
indicates that clustering the survey points provides better localization
accuracy due to superior noise removal.
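The spatial filtering of measured RSS described above can be illustrated with a simple neighborhood average; the paper's exact filter may differ, so this is only an illustrative sketch of the idea of smoothing the distance-varying noise across nearby survey points.

```python
import numpy as np

def spatial_filter_rss(points, rss, radius):
    """Replace each survey point's RSS by the mean RSS of all survey
    points within `radius` of it (a spatial low-pass filter).

    points: (n, 2) survey-point coordinates in meters.
    rss:    (n,) RSS values in dBm.
    """
    points = np.asarray(points, float)
    rss = np.asarray(rss, float)
    out = np.empty_like(rss)
    for i, p in enumerate(points):
        mask = np.linalg.norm(points - p, axis=1) <= radius
        out[i] = rss[mask].mean()  # average over the spatial neighborhood
    return out
```

A clustered survey geometry gives each point more close neighbors than a straight-line survey does, which is consistent with the better noise removal reported above.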
Abstract: Target tracking and localization are important applications
in wireless sensor networks. In these applications, sensor nodes
collectively monitor and track the movement of a target. They have
limited energy supplied by batteries, so energy efficiency is essential
for sensor networks. Most existing target tracking protocols need to
wake up sensors periodically to perform tracking. Some unnecessary
energy waste is thus introduced. In this paper, an energy efficient
protocol for target localization is proposed. In order to preserve
energy, the protocol fixes the number of sensors for target tracking,
but it retains the quality of target localization at an acceptable
level. By selecting a set of sensors for target localization, the other
sensors can sleep rather than periodically wake up to track the target.
Simulation results show that the proposed protocol saves a significant
amount of energy and also prolongs the network lifetime.
Abstract: Camera calibration is an indispensable step for augmented
reality or image guided applications where quantitative information
should be derived from the images. Usually, a camera
calibration is obtained by taking images of a special calibration object
and extracting the image coordinates of projected calibration marks
enabling the calculation of the projection from the 3d world coordinates
to the 2d image coordinates. Thus such a procedure exhibits
typical steps, including feature point localization in the acquired
images, camera model fitting, correction of distortion introduced by
the optics, and finally an optimization of the model's parameters. In
this paper we propose to extend this list by a further step concerning
the identification of the optimal subset of images yielding the smallest
overall calibration error. For this, we present a Monte Carlo based
algorithm along with a deterministic extension that automatically
determines the images yielding an optimal calibration. Finally, we
present results proving that the calibration can be significantly
improved by automated image selection.
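The Monte Carlo part of the proposed step can be sketched as a random subset search; `calibration_error` stands in for a routine that calibrates on the given subset and returns the overall error (e.g. mean reprojection error), and the deterministic extension mentioned above is not sketched here.

```python
import random

def select_best_subset(images, subset_size, calibration_error,
                       n_trials=200, seed=0):
    """Monte Carlo search for the image subset that minimizes the
    overall calibration error.

    images:            list of candidate calibration images.
    calibration_error: callable(subset) -> error; assumed to run the
                       full calibration pipeline on the subset.
    """
    rng = random.Random(seed)
    best_subset, best_err = None, float("inf")
    for _ in range(n_trials):
        subset = rng.sample(images, subset_size)   # random candidate subset
        err = calibration_error(subset)
        if err < best_err:
            best_subset, best_err = subset, err
    return best_subset, best_err
```

Each trial requires a full recalibration, so in practice `n_trials` trades accuracy of the selection against runtime.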