Abstract: Electronic noses (arrays of chemical sensors) are widely
used in the food industry and in pollution control. They can also be
used to locate an odor source or detect the direction of odor
emission. Usually this task is performed by an electronic nose
(ENose) mounted on a mobile vehicle, but problems arise when the
source is instantaneous or the surroundings are hard for vehicles to
reach. A method that allows a stationary ENose to detect the
direction of a source and locate it is therefore required. This paper
presents a novel method that uses the ratio between the responses of
different sensors as a discriminant to determine the direction of a
source in natural wind surroundings. The results show that the
method is accurate and easy to implement. The method could also be
used in mobile settings, as an optimized algorithm for robot source
tracking.
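The ratio-discriminant idea described above can be illustrated with a minimal sketch; the sensor pairing, calibration thresholds, and direction labels below are hypothetical illustrations, not the paper's values.

```python
# Illustrative sketch (not the paper's implementation): classify the
# direction of an odor source from the ratio of two sensor responses.

def direction_from_ratio(resp_a, resp_b, thresholds):
    """Map the response ratio of two sensors to a direction label.

    thresholds: list of (upper_bound, label) pairs sorted by bound;
    the bounds and labels are hypothetical calibration values.
    """
    if resp_b == 0:
        raise ValueError("sensor B response must be non-zero")
    ratio = resp_a / resp_b
    for upper, label in thresholds:
        if ratio <= upper:
            return label
    return thresholds[-1][1]

# Hypothetical calibration: ratios below 0.8 mean the source lies toward
# sensor B, above 1.25 toward sensor A, in between roughly head-on.
CAL = [(0.8, "toward B"), (1.25, "head-on"), (float("inf"), "toward A")]
```

Because the discriminant is a ratio, it is insensitive to the overall concentration level, which is what makes it usable with a stationary sensor array.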
Abstract: The main objective of this study was to demonstrate that a differentiation of infected and vaccinated animals (DIVA) strategy using different ELISA tests is possible when a subunit vaccine (Haemagglutinin protein) is used to prevent Avian influenza. Special emphasis was placed on the differences in the serological response to different components of the AIV (Nucleoprotein, Neuraminidase, Haemagglutinin, Nucleocapsid) between chickens that were vaccinated with a whole-virus killed vaccine and those vaccinated with a recombinant vaccine. Furthermore, the potential use of this DIVA strategy using ELISA assays to detect Neuraminidase 1 (N1) was analyzed as a strategy for countries where the field virus is H5N1 and the vaccine used is formulated with H5N2. Detection of antibodies to any AIV component in serum was negative for all animals on study days 0-13. At study day 14 the titers of antibodies against Nucleoprotein (NP) and Nucleocapsid (NC) rose in the experimental groups vaccinated with Volvac® AI KV and remained negative throughout the trial in the experimental groups vaccinated with the subunit H5 vaccine; statistically significant differences were observed between these groups (p < 0.05). Seroconversion to either Haemagglutinin or Neuraminidase was evident after 21 days post-vaccination in the experimental groups vaccinated with the respective viral fraction. Regarding the main aim of this study, and according to the results obtained, the use of a combination of different ELISA tests as a DIVA strategy is feasible when vaccination is carried out with a subunit H5 vaccine. It is also possible to use an ELISA kit detecting Neuraminidase (either N1 or N2) as a DIVA concept in countries where H5N1 is present and the vaccination programs are done with an H5N2 vaccine.
Abstract: This paper proposes an alternative control mechanism
for an interactive Pan/Tilt/Zoom (PTZ) camera control system.
Instead of using a mouse or a joystick, the proposed mechanism
utilizes a Nintendo Wii remote and infrared (IR) sensor bar. The Wii
remote has buttons that allow the user to control the movement of a
PTZ camera through Bluetooth connectivity. In addition, the Wii
remote has a built-in motion sensor that allows the user to give
control signals to the PTZ camera through pitch and roll movement.
A stationary IR sensor bar, placed at some distance away opposite the
Wii remote, enables the detection of yaw movement. In addition, the
Wii remote's built-in IR camera can detect its spatial
position, and thus generates a control signal when the user moves the
Wii remote. Experiments are carried out and performance is
compared with that of an industry-standard PTZ joystick.
Abstract: We present a genetic algorithm application to the problem of object registration (i.e., object detection, localization, and recognition) in a class of medical images containing various types of blood cells. The genetic algorithm approach taken here is seen to be most appropriate for this type of image, due to the characteristics of the objects. Successful cell registration results on real-life microscope images of blood cells show the potential of the proposed approach.
Abstract: In this paper we propose an enhanced equalization technique for multi-carrier code division multiple access (MC-CDMA). This method is based on the control of the Equal Gain Combining (EGC) technique. Specifically, we introduce a new level changer into the EGC equalizer in order to adapt the equalization parameters to the channel coefficients. The optimal equalization level is first determined by channel training. The new approach drastically reduces the multiuser interference caused by interferers, without increasing the noise power. To assess the performance of the proposed equalizer, theoretical analysis and numerical results are given.
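As an illustration of EGC-style per-subcarrier equalization with a level acting on the channel magnitude, here is a minimal numpy sketch; the magnitude-floor rule is an assumption for illustration, not the paper's exact control law.

```python
import numpy as np

def egc_equalize(received, channel, level=0.0):
    """Phase-compensating (EGC-style) equalization of per-subcarrier symbols.

    Pure EGC multiplies each subcarrier by conj(h)/|h|, compensating phase
    with unit gain so noise is not amplified. The `level` argument adds a
    hypothetical magnitude floor: subcarriers weaker than `level` are
    attenuated rather than passed at unit gain, limiting the interference
    they contribute (an assumed stand-in for the abstract's level changer).
    """
    mag = np.abs(channel)
    denom = np.maximum(mag, level)  # floor the magnitude at `level`
    return received * np.conj(channel) / denom
```

With `level=0` this reduces to plain EGC; raising the level trades residual signal on weak subcarriers against interference suppression.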
Abstract: The paper presents frame and burst acquisition in a satellite communication network based on time division multiple access (TDMA), in which the transmissions may be carried on different transponders. A unique word pattern is used for the acquisition process. The search for the frame is aided by soft decisions on QPSK-modulated signals in an additive white Gaussian noise channel. Results show that when the false alarm rate is low, the probability of detection is also low and the acquisition time is long. Conversely, when the false alarm rate is high, the probability of detection is also high and the acquisition time is short. Thus the system operators can trade high false alarm rates for high detection probabilities and shorter acquisition times.
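The trade-off described above can be seen in a toy unique-word detector; this hard-decision sketch is a simplification of the paper's soft-decision search, and the bit patterns are illustrative.

```python
import numpy as np

def uw_detect(bits, uw, max_mismatches):
    """Return positions where the unique word matches within a mismatch budget.

    A larger `max_mismatches` raises the detection probability (and shortens
    acquisition) but also raises the false alarm rate, mirroring the
    trade-off discussed in the abstract.
    """
    n, m = len(bits), len(uw)
    hits = []
    for i in range(n - m + 1):
        if np.count_nonzero(bits[i:i + m] != uw) <= max_mismatches:
            hits.append(i)
    return hits
```

A soft-decision search would replace the mismatch count with a correlation of demodulator soft outputs against the unique word, thresholded the same way.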
Abstract: Loop detectors report traffic characteristics in real
time. They are at the core of the traffic control process. Intuitively,
one would expect that as the density of detection increases, so would
the quality of estimates derived from detector data. However, as
detector deployment increases, the associated operating and
maintenance cost increases. Thus, traffic agencies often need to
decide where to add new detectors and which detectors should
continue receiving maintenance, given their resource constraints.
This paper evaluates the effect of detector spacing on freeway
travel time estimation. A freeway section (Interstate-15) in Salt
Lake City metropolitan region is examined. The research reveals
that travel time accuracy does not necessarily deteriorate with
increased detector spacing. Rather, the actual location of detectors
has far greater influence on the quality of travel time estimates.
The study presents an innovative computational approach that
delivers optimal detector locations through a process that relies on
Genetic Algorithm formulation.
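The genetic algorithm formulation mentioned above can be sketched as a search over subsets of candidate detector locations; the candidate set, objective function, and GA parameters below are hypothetical illustrations, not the study's actual travel-time model.

```python
import random

def run_ga(candidates, k, fitness, pop_size=30, gens=40, seed=1):
    """Select k detector locations from `candidates` minimizing `fitness`.

    A deliberately small GA: truncation selection, set-union crossover,
    and a swap mutation. All parameters are illustrative choices.
    """
    rng = random.Random(seed)
    pop = [sorted(rng.sample(candidates, k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            mix = list(set(a) | set(b))           # crossover: pool parents' genes
            child = sorted(rng.sample(mix, k))
            if rng.random() < 0.2:                # mutation: swap one location
                out = rng.randrange(k)
                pool = [c for c in candidates if c not in child]
                if pool:
                    child[out] = rng.choice(pool)
                    child = sorted(child)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Hypothetical objective: distance of the chosen locations (candidate
# mileposts every 5 units) from assumed ideal positions. The study's real
# objective is travel-time estimation error on I-15.
CANDIDATES = list(range(0, 100, 5))
def spacing_error(ind):
    return sum(abs(a - b) for a, b in zip(ind, [10, 50, 90]))

best = run_ga(CANDIDATES, 3, spacing_error)
```

The key point matching the abstract is that the GA optimizes *where* detectors go, not merely how many there are.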
Abstract: In this study, we propose a tongue diagnosis method
which detects the tongue in a face image, divides the tongue area into six regions, and finally computes the tongue coating ratio of each region.
To detect the tongue area in a face image, we use the Active Shape Model (ASM). The detected tongue area is divided into the six regions
widely used in Korean traditional medicine, and the distribution of tongue coating over the six regions is examined by a Support Vector
Machine (SVM). For the SVM, we use a 3-dimensional vector calculated by Principal Component Analysis (PCA) from a 12-dimensional vector
consisting of RGB, HSI, Lab, and Luv components. As a result, we detected the tongue area stably using ASM and found that PCA and SVM helped
raise the tongue coating detection rate.
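The PCA step above (reducing a 12-dimensional color vector to 3 dimensions before classification) can be sketched with numpy alone; the data here are random stand-ins, and the SVM stage is omitted.

```python
import numpy as np

def pca_project(X, n_components=3):
    """Project rows of X onto the top principal components.

    X: (n_samples, n_features) array, e.g. 12-dimensional color vectors.
    Uses SVD of the centered data; the right singular vectors are the
    principal axes, ordered by explained variance.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The resulting 3-dimensional vectors would then be fed to an SVM (e.g. with an RBF kernel) to decide coated vs. uncoated pixels per region.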
Abstract: In this paper we present a new approach to detecting
flaws in T.O.F.D (Time Of Flight Diffraction) ultrasonic images
based on texture features. Texture is one of the most important
features used in recognizing patterns in an image. The paper
describes texture features based on 2D Gabor functions, i.e.,
Gaussian shaped band-pass filters, with dyadic treatment of the radial
spatial frequency range and multiple orientations, which represent an
appropriate choice for tasks requiring simultaneous measurement in
both space and frequency domains. The most relevant features are
used as input data for a Fuzzy c-means clustering classifier. Only
two classes exist: 'defect' and 'no defect'. The proposed approach is
tested on T.O.F.D images acquired in the laboratory and in the
industrial field.
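A 2D Gabor kernel of the kind described above (Gaussian envelope modulating an oriented carrier) can be generated with numpy; the size, frequency, and bandwidth values shown are illustrative, and a full bank would sweep dyadic frequencies and multiple orientations.

```python
import numpy as np

def gabor_kernel(size, f0, theta, sigma):
    """Real part of a 2D Gabor kernel.

    size: odd kernel width/height; f0: radial frequency (cycles/pixel);
    theta: orientation in radians; sigma: Gaussian envelope width.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return env * np.cos(2.0 * np.pi * f0 * xr)

# A dyadic bank as in the abstract: halving frequencies, 4 orientations.
bank = [gabor_kernel(15, f, t, 3.0)
        for f in (0.25, 0.125, 0.0625)
        for t in np.arange(4) * np.pi / 4]
```

Convolving the image with each kernel and taking local energy yields the texture features fed to the fuzzy c-means classifier.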
Abstract: Gamma radiation detection assemblies consist of a
scintillation crystal coupled to a photomultiplier tube; a
preamplifier is connected to the detector because the signals from
the photomultiplier tube are of small amplitude. After
pre-amplification the signals are sent to the amplifier and then to
the multichannel analyser. The multichannel analyser sorts all
incoming electrical signals according to their amplitudes, placing
the detected photons in channels covering small energy intervals.
The energy range of each channel depends on the gain settings of the
multichannel analyser and the high voltage across the
photomultiplier tube. The exit spectrum data of the two main
isotopes studied were fed into the biomass program and processed
in Matlab to obtain the solid holdup image (solid spherical nuclear
fuel).
Abstract: We present in this paper a new approach to specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we introduced new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the Avalanche Criterion of the JPEG lossless compression step. This criterion makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
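The entropy deviation of compressed data referred to above rests on measuring Shannon entropy of a byte stream; a minimal implementation (the specific features built on top of it in the paper are not reproduced here):

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte of a byte string."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    # H = -sum p log2 p over the observed byte frequencies
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Comparing this statistic on a file before and after a trial embedding is the kind of measurement the Multiple Embedding Method aggregates.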
Abstract: Developing aid systems for medical diagnosis is not
easy because of inhomogeneities in the MRI, the variability of the
data from one sequence to another, and other distortions that
accentuate this difficulty. A new automatic, contextual, adaptive
and robust segmentation procedure based on MRI brain tissue
classification is described in this article. A first phase consists in
estimating the probability density of the data by the
Parzen-Rosenblatt method. The classification procedure is
completely automatic and makes no assumptions about either the
number of clusters or their prototypes, since the latter are detected
automatically by a mathematical morphology operator called
skeleton by influence zones (SKIZ). The problem of initializing the
prototypes, as well as their number, is thus transformed into an
optimization problem; moreover, the procedure is adaptive since it
takes into consideration the contextual information present in every
voxel through an adaptive and robust non-parametric Markov
random field (MRF) model. The number of misclassifications is
reduced by using the Maximum Posterior Marginal (MPM)
minimization criterion.
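The Parzen-Rosenblatt density estimate used in the first phase above has a compact form; this one-dimensional Gaussian-kernel version is a standard textbook sketch, not the article's multivariate implementation.

```python
import numpy as np

def parzen_density(samples, x, h):
    """Parzen-Rosenblatt density estimate at point x.

    samples: 1-D array of observations; h: bandwidth.
    Averages Gaussian kernels of width h centered at each sample:
    p(x) = (1/(n*h)) * sum_i K((x - x_i)/h).
    """
    samples = np.asarray(samples, dtype=float)
    u = (x - samples) / h
    return np.exp(-0.5 * u**2).sum() / (len(samples) * h * np.sqrt(2 * np.pi))
```

In the segmentation pipeline, modes of this estimated density correspond to candidate tissue-class prototypes, which SKIZ then separates.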
Abstract: Steel surface defect detection is essentially a pattern
recognition problem. Support Vector Machines (SVMs) are
known as among the most suitable classifiers for this application. In this
paper, we introduce a more accurate classification method by using
SVMs as the final classifier of the inspection system. In this scheme,
the multiclass classification task is performed with the "one-against-one"
method, and a different kernel is utilized for each pair of
classes in the multiclass classification of the different defects.
In the proposed system, a decision tree is employed in the first
stage for two-class classification of the steel surfaces to "defect" and
"non-defect", in order to decrease the time complexity. Based on
the experimental results, generated from over one thousand images,
the proposed multiclass classification scheme is more accurate than
the conventional methods and the overall system yields a sufficient
performance which can meet the requirements in steel manufacturing.
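The "one-against-one" decision rule above reduces to majority voting over all pairwise classifiers; here is a minimal sketch in which each pair may carry its own classifier (and hence its own kernel), with stub classifiers and defect class names as illustrative placeholders.

```python
from collections import Counter
from itertools import combinations

def one_against_one_predict(sample, classes, binary_classifiers):
    """Majority vote over pairwise classifiers.

    binary_classifiers maps each class pair (a, b) to a callable that
    returns the winning class for the sample; in the paper's scheme each
    pair would be an SVM trained with its own kernel.
    """
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[binary_classifiers[(a, b)](sample)] += 1
    return votes.most_common(1)[0][0]
```

For k classes this requires k(k-1)/2 classifiers, which is why the paper's initial defect/non-defect decision tree pays off: most surfaces never reach the expensive multiclass stage.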
Abstract: Smoke from domestic wood burning has been
identified as a major contributor to air pollution, motivating detailed
emission measurements under controlled conditions. A series of
experiments was performed to characterise the emissions from wood
combustion in a fireplace and in a woodstove of two common species
of trees grown in Spain: Pyrenean oak (Quercus pyrenaica) and
black poplar (Populus nigra). Volatile organic compounds (VOCs) in
the exhaust emissions were collected in Tedlar bags, re-sampled in
sorbent tubes and analysed by thermal desorption-gas
chromatography-flame ionisation detection. Pyrenean oak presented
substantially higher emissions in the woodstove than in the fireplace,
for the majority of compounds. The opposite was observed for
poplar. Among the 45 identified species, benzene and benzene-related
compounds represent the most abundant group, followed by
oxygenated VOCs and aliphatics. Emission factors obtained in this
study are generally of the same order as those reported for
residential experiments in the USA.
Abstract: In real-field applications, the correct determination of voice segments greatly improves overall system accuracy and minimises the total computation time. This paper presents reliable measures of speech compression by detecting the end points of the speech signals prior to compressing them. The two different compression schemes used are the Global threshold and the Level-Dependent threshold techniques. The performance of the proposed method is tested with the Signal to Noise Ratio, Peak Signal to Noise Ratio and Normalized Root Mean Square Error parameter measures.
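A common way to detect the end points referred to above is short-time energy thresholding; this generic sketch illustrates the idea (the frame length and threshold are arbitrary, and the paper does not specify this exact detector).

```python
import numpy as np

def endpoints(signal, frame_len, threshold):
    """Return (start, end) sample indices of the first-to-last frame whose
    short-time energy exceeds `threshold`; None if no frame is active."""
    n_frames = len(signal) // frame_len
    energies = [float(np.sum(signal[i * frame_len:(i + 1) * frame_len] ** 2))
                for i in range(n_frames)]
    active = [i for i, e in enumerate(energies) if e > threshold]
    if not active:
        return None
    return active[0] * frame_len, (active[-1] + 1) * frame_len
```

Compressing only the samples between the detected end points is what saves computation while leaving the SNR-type quality measures essentially unchanged.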
Abstract: Facial features are frequently used to represent local
properties of a human face image in computer vision applications. In
this paper, we present a fast algorithm that can extract the facial
features online such that they can give a satisfying representation of a
face image. It includes one step for a coarse detection of each facial
feature by AdaBoost and another one to increase the accuracy of the
found points by Active Shape Models (ASM) in the regions of interest.
The resulting facial features are evaluated by matching with artificial
face models in physiognomy applications. The distance between the
features and those in the face models from the database is measured
by means of the Hausdorff distance. In the experiments, the
proposed method shows efficient performance in facial feature
extraction and in the online physiognomy system.
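The Hausdorff distance used for the matching above is a standard point-set metric; a compact numpy implementation for 2-D feature points:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (n,2) and B (m,2).

    Takes the larger of the two directed distances: the farthest any point
    of one set lies from its nearest neighbour in the other set.
    """
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # (n, m) pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Because it is sensitive to the worst-matched point, the Hausdorff distance penalizes a face model that fits most features well but misses one badly.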
Abstract: A power transformer consists of components that are
under constant thermal and electrical stresses. The major
component which degrades under these stresses is the paper
insulation of the power transformer. At site, lightning impulses and
cable faults may cause the winding deformation. In addition, the
winding may deform due to impact during transportation. A
deformed winding imposes more stress on its insulating paper and
thus degrades it. Insulation degradation will shorten the life-span of
the transformer. Currently there are two methods of detecting the
winding deformation which are Sweep Frequency Response
Analysis (SFRA) and the Low Voltage Impulse (LVI) test. The latter
injects current pulses into the winding and captures the admittance
plot. In this paper, a transformer which experienced overheating and
arcing was identified, and both SFRA and LVI were performed.
Next, the transformer was brought to the factory for untanking. The
untanking results revealed that the LVI is more accurate than the
SFRA method for this case study.
Abstract: The issue of unintentional islanding in PV grid
interconnection still remains as a challenge in grid-connected
photovoltaic (PV) systems. This paper discusses the overview of
popularly used anti-islanding detection methods, practically applied
in PV grid-connected systems. Anti-islanding methods generally can
be classified into four major groups: passive methods, active
methods, hybrid methods and communication-based methods.
Active methods have been the preferred detection technique over the
years due to very small non-detected zone (NDZ) in small scale
distribution generation. Passive methods are comparatively simpler
than active methods in terms of circuitry and operation. However,
they suffer from a large NDZ that significantly reduces their
performance. Communication-based methods inherit the advantages
of active and passive methods with reduced drawbacks. The hybrid
method, which evolved from the combination of active and passive
methods, has been proven to achieve accurate anti-islanding detection by many
researchers. For each of the studied anti-islanding methods, the
operation analysis is described while the advantages and
disadvantages are compared and discussed. It is difficult to pinpoint a
generic method for a specific application, because most of the
methods discussed are governed by the nature of application and
system-dependent elements. This study concludes that setup and
operation cost is the vital factor in anti-islanding method selection,
in order to achieve a minimal compromise between cost and system
quality.
Abstract: As cancer progresses, the optical properties of tissue,
such as the absorption and scattering coefficients, change; these
changes can be used to trace the progress of cancer and even
applied to its early detection. In this paper, we investigate the
effects of changes in optical properties on light penetrating into
tissue. The diffusion equation is widely used to simulate light
propagation in biological tissue. In this study, the boundary integral
method (BIM) is used to solve the diffusion equation. We illustrate
that changes in the optical properties can modify the reflectance or
the penetrating light.
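For reference, the steady-state diffusion approximation to photon transport referred to above is commonly written as:

```latex
% \Phi is the fluence rate, S the source term, \mu_a the absorption
% coefficient, and D = 1/(3(\mu_a + \mu_s')) the diffusion coefficient,
% where \mu_s' is the reduced scattering coefficient.
\nabla \cdot \left( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \right)
  - \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = -S(\mathbf{r})
```

Since both μa and μs' enter this equation, changes in either coefficient during cancer progression alter the computed fluence, and hence the reflectance the BIM solution predicts.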
Abstract: In this study, the locations and areas of commercial
accumulations were detected by using digital yellow page data. An
original buffering method that can accurately create polygons of
commercial accumulations is proposed in this paper; by using this
method, distribution of commercial accumulations can be easily
created and monitored over a wide area. The locations, areas, and
time-series changes of commercial accumulations in the South Kanto
region can be monitored by integrating polygons of commercial
accumulations with the time-series data of digital yellow page data.
The circumstances of commercial accumulations were shown to vary
according to area, that is, highly urbanized regions such as the city
center of Tokyo and prefectural capitals, suburban areas near large
cities, and suburban and rural areas.