Abstract: We developed an effective microfluidic device for photoreactions, with low reflectance and good thermal conductance. The performance of this microfluidic device was tested by carrying out the photoreactive synthesis of benzopinacol and acetone from benzophenone and 2-propanol. The yield reached 36% at an irradiation time of 469.2 s, an improvement of more than 30% over the values obtained by the batch method. The microfluidic device was therefore found to be effective for improving the yields of photoreactions.
Abstract: We demonstrate nonfaradaic electrochemical impedance spectroscopy measurements of biochemically modified gold-plated electrodes using a two-electrode system. The absence of any redox indicator in the impedance measurements provides more precise and accurate characterization of the measured bioanalyte at molecular resolution. An equivalent electrical circuit of the electrode-electrolyte interface was deduced from the observed impedance data of saline solution at low and high concentrations. The detection of biomolecular interactions was fundamentally correlated to variation of the electrical double layer at the modified interface. The investigations were done using 20-mer deoxyribonucleic acid (DNA) strands without any label. Surface modification was performed by creating a mixed monolayer of the thiol-modified single-stranded DNA and a spacer thiol (mercaptohexanol) by a two-step self-assembly method. The results clearly distinguish between noncomplementary and complementary hybridization of DNA in the low-frequency region below several hundred hertz.
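As a hedged illustration of the nonfaradaic picture described above, a minimal equivalent circuit for an electrode-electrolyte interface without a redox couple is a solution resistance in series with a double-layer capacitance. The component values below are assumptions for illustration, not those deduced in the paper:

```python
import numpy as np

# Minimal nonfaradaic equivalent circuit: solution resistance R_s in
# series with a double-layer capacitance C_dl (no charge-transfer branch,
# since no redox indicator is present). Values are illustrative only.
R_s = 100.0   # ohms, assumed solution resistance
C_dl = 1e-6   # farads, assumed double-layer capacitance

def impedance(f):
    """Complex impedance Z(f) = R_s + 1 / (j * 2*pi*f * C_dl)."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    return R_s + 1.0 / (1j * w * C_dl)

# At low frequency the capacitive term dominates, which is consistent
# with double-layer changes (e.g. DNA hybridization) being most visible
# below a few hundred hertz.
for f in (10.0, 100.0, 10000.0):
    z = impedance(f)
    print(f"{f:8.0f} Hz  |Z| = {abs(z):12.1f} ohm")
```

The sketch only shows why the impedance magnitude grows toward low frequency in a purely capacitive interface; fitting real spectra would require the circuit topology deduced from the measured data.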
Abstract: The class of geometric deformable models, so-called level sets, has had a tremendous impact on medical imaging. In this paper we present yet another application of level sets to medical imaging. The method presented here modifies, in a sense, the speed term in the standard level-set equation of motion. To do so we build a potential based on the distance and the gradient of the image under study. In turn the potential gives rise to the force field F(x, y) = Σ_{(p,q)∈I} ((x, y) - (p, q)) |∇I(p, q)| / |(x, y) - (p, q)|². The direction and intensity of the force field at each point determine the direction of the contour's evolution. The images used to test our method were produced by the Université de Sherbrooke's PET scanners.
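The force field above can be evaluated directly from an image's gradient magnitude. The sketch below does this literally from the stated formula; the toy image and sample points are assumptions for illustration:

```python
import numpy as np

def force_field(image, x, y):
    """Evaluate F(x, y) = sum over (p, q) in I of
    ((x, y) - (p, q)) * |grad I(p, q)| / |(x, y) - (p, q)|**2."""
    gy, gx = np.gradient(image.astype(float))   # d/drow, d/dcol
    grad_mag = np.hypot(gx, gy)                 # |grad I(p, q)|
    ys, xs = np.indices(image.shape)            # all pixel coordinates (p, q)
    dx = x - xs
    dy = y - ys
    dist2 = dx * dx + dy * dy
    dist2[dist2 == 0] = np.inf                  # exclude the point itself
    fx = np.sum(dx * grad_mag / dist2)
    fy = np.sum(dy * grad_mag / dist2)
    return fx, fy

# Hypothetical toy image: a bright square on a dark background, so the
# gradient mass sits on the square's edges.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
print(force_field(img, 5.0, 5.0))   # force points away from the edges
```

Because every term is weighted by the gradient magnitude and decays with squared distance, the field at a point is dominated by nearby edges, which is what steers the contour's evolution.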
Abstract: The similarity comparison of RNA secondary structures is important in studying the functions of RNAs. In recent years, most existing tools have represented secondary structures by tree-based presentations and calculated their similarity by tree alignment distance. Unlike previous approaches, we propose a new method based on a maximum clique detection algorithm to extract the maximum common structural elements of the compared RNA secondary structures. A new graph-based similarity measurement and maximum common subgraph detection procedure for comparing RNA secondary structures is introduced. Given two RNA secondary structures, the proposed algorithm first determines the score of their structural similarity by comparing vertex labels, labelled edges, and the exact degree of each vertex. It then extracts the common structural elements of the compared secondary structures based on a maximum clique formulation of the problem. This graph-based model can also work with the NC-IUB code to perform pattern-based searching. Therefore, it can be used to identify functional RNA motifs in a database or to extract common substructures from complex RNA secondary structures. We have demonstrated the performance of the proposed algorithm through experimental results. It provides a new approach to comparing RNA secondary structures and should be helpful to those interested in structural bioinformatics.
Abstract: On-board Error Detection and Correction (EDAC)
devices aim to secure data transmitted between the central
processing unit (CPU) of a satellite onboard computer and its local
memory. This paper presents a comparison of the performance of
four low complexity EDAC techniques for application in Random
Access Memories (RAMs) on-board small satellites. The
performance of a newly proposed EDAC architecture is measured
and compared with three different EDAC strategies, using the same
FPGA technology. A statistical analysis of single-event upset (SEU)
and multiple-bit upset (MBU) activity in commercial memories
onboard Alsat-1 is given for a period of 8 years.
Abstract: X-ray technology has been used for non-destructive evaluation in power systems, providing a visual non-destructive inspection method for electrical equipment. However, the images obtained from X-ray digital imaging equipment contain a great deal of noise, which makes automatic defect detection based on these images very difficult. An X-ray image de-noising algorithm based on the wavelet transform is proposed in this paper. An edge detection algorithm is then applied so that defects can be extracted. Experimental results show that the proposed method is very useful for de-noising X-ray images.
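The wavelet de-noising idea above can be sketched with a single-level 2-D Haar transform and soft thresholding of the detail sub-bands. This is a generic stand-in, not the paper's algorithm, and the flat "X-ray image" with Gaussian noise is an assumption:

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar transform (even-sized input): LL, LH, HL, HH."""
    a = a.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0          # row low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0          # row high-pass
    return ((lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0,
            (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0)

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    a = np.empty((lo.shape[0], lo.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = lo + hi, lo - hi
    return a

def denoise(img, thresh):
    """Soft-threshold the detail sub-bands, keep the approximation."""
    LL, LH, HL, HH = haar2(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    return ihaar2(LL, soft(LH), soft(HL), soft(HH))

# Hypothetical noisy image: flat background plus Gaussian noise.
rng = np.random.default_rng(1)
clean = np.full((64, 64), 100.0)
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
out = denoise(noisy, thresh=5.0)
print(np.abs(noisy - clean).mean() > np.abs(out - clean).mean())  # → True
```

Noise spreads roughly evenly over the sub-bands while structure concentrates in a few large coefficients, so shrinking small detail coefficients suppresses noise before any edge detection step.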
Abstract: This research introduces a new use of Artificial Intelligence (AI) approaches in the field of Stepping Stone Detection (SSD). Using the Self-Organizing Map (SOM) as the engine, experiments show that the SOM has the capability to detect the number of connection chains involved in a stepping-stone attack. Since counting the number of connection chains is one of the important steps in stepping stone detection and is a current research focus, this research chose the SOM as the AI technique because of this capability. The experiments show that the SOM can detect the number of connection chains involved in Network-based Stepping Stone Detection (NSSD).
Abstract: A number of automated shot-change detection methods for indexing a video sequence to facilitate browsing and retrieval have been proposed in recent years. This paper focuses on the simulation of video shot boundary detection using a color histogram method in which scaling of the histogram metric is an added feature. The difference between the histograms of two consecutive frames is evaluated to obtain the metric. The metric is then scaled to avoid ambiguity and to enable the choice of an appropriate threshold for any type of video, reducing minor errors due to flashlights, camera motion, etc. Two sample videos with a resolution of 352 x 240 pixels are used here with the color histogram approach on uncompressed media. An attempt is also made at the retrieval of color video. The simulation, performed for abrupt changes in video, yields 90% recall and precision.
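A minimal sketch of the color-histogram difference described above, with the metric scaled by the pixel count so that one threshold works across resolutions. The frame data, bin count, and threshold are assumptions:

```python
import numpy as np

def hist_diff(frame_a, frame_b, bins=16):
    """Sum of absolute differences between per-channel color histograms
    of two consecutive frames, scaled by pixel count (range [0, 2])."""
    d = 0.0
    for c in range(frame_a.shape[2]):            # R, G, B channels
        ha, _ = np.histogram(frame_a[..., c], bins=bins, range=(0, 256))
        hb, _ = np.histogram(frame_b[..., c], bins=bins, range=(0, 256))
        d += np.abs(ha - hb).sum()
    return d / frame_a.size                      # scale to ease thresholding

def detect_cuts(frames, threshold=0.5):
    """Flag a shot boundary where the scaled metric exceeds threshold."""
    return [i + 1 for i in range(len(frames) - 1)
            if hist_diff(frames[i], frames[i + 1]) > threshold]

# Hypothetical frames: two identical dark frames, then an abrupt cut.
dark = np.zeros((240, 352, 3), dtype=np.uint8)
bright = np.full((240, 352, 3), 200, dtype=np.uint8)
print(detect_cuts([dark, dark, bright]))  # → [2]
```

Scaling the metric is what lets a single threshold separate abrupt cuts from the small histogram changes caused by flashlights or camera motion.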
Abstract: The detection of outliers is essential because outliers can cause serious interpretative problems in linear as well as nonlinear regression analysis. Much work has been done on the identification of outliers in linear regression, but not in nonlinear regression. In this article we propose several outlier detection techniques for nonlinear regression. The main idea is to use the linear approximation of a nonlinear model and to take the gradient as the design matrix; the detection techniques are then formulated. Six detection measures are developed and combined with three estimation techniques: the least-squares, M-, and MM-estimators. The study shows that among the six measures, only the studentized residual and Cook's distance, combined with the MM-estimator, are consistently capable of identifying the correct outliers.
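A hedged sketch of the linear-approximation idea: the Jacobian (gradient) of the nonlinear model plays the role of the design matrix, from which leverages, studentized residuals, and Cook's distance follow. The exponential model and data are illustrative assumptions, and for brevity the diagnostics are evaluated at the true parameters rather than refitted estimates:

```python
import numpy as np

def diagnostics(x, y, model, jac, theta_hat):
    """Outlier measures from the linear approximation of a nonlinear fit:
    the gradient matrix J is treated as the design matrix."""
    J = jac(x, theta_hat)                        # n x p gradient matrix
    r = y - model(x, theta_hat)                  # residuals
    n, p = J.shape
    H = J @ np.linalg.inv(J.T @ J) @ J.T         # hat (leverage) matrix
    h = np.diag(H)
    s2 = (r @ r) / (n - p)                       # residual variance
    t = r / np.sqrt(s2 * (1 - h))                # studentized residuals
    cook = t**2 * h / (p * (1 - h))              # Cook's distance
    return t, cook

# Illustrative model y = a * exp(b * x) with one planted outlier.
model = lambda x, th: th[0] * np.exp(th[1] * x)
jac = lambda x, th: np.column_stack(
    [np.exp(th[1] * x), th[0] * x * np.exp(th[1] * x)])
x = np.linspace(0.0, 2.0, 20)
theta = np.array([2.0, 0.8])
y = model(x, theta)
y[7] += 3.0                                      # planted outlier at index 7
t, cook = diagnostics(x, y, model, jac, theta)
print(int(np.argmax(np.abs(t))), int(np.argmax(cook)))  # → 7 7
```

Both measures peak at the planted point, matching the study's finding that the studentized residual and Cook's distance are the informative pair.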
Abstract: This paper presents the modeling of a MEMS-based accelerometer for detecting the presence of a wheel flat in a railway vehicle. A haversine wheel flat is assigned to one wheel of a 5-DOF pitch-plane vehicle model, which is coupled to a 3-layer track model. Based on the simulated acceleration response obtained from the vehicle-track model, an accelerometer is designed that meets all the requirements to detect the presence of a wheel flat. The proposed accelerometer can survive in a dynamic shock environment with accelerations of up to ±150g. The parameters of the accelerometer are calculated to achieve the required specifications using a lumped-element approximation, and the results are used for the initial design layout. A finite element analysis code (COMSOL) is used to simulate the accelerometer under various operating conditions and to determine the optimum configuration. The simulated results are within about 2% of the calculated values, which indicates the validity of the lumped-element approach. The stability of the accelerometer is also determined over the desired range of operation, including under shock conditions.
Abstract: Hand gestures are one of the typical methods used in sign language for non-verbal communication. They are most commonly used by people who have hearing or speech problems to communicate among themselves or with hearing people. Various sign language systems have been developed by manufacturers around the globe, but they are neither flexible nor cost-effective for the end users. This paper presents a system prototype that is able to automatically recognize sign language to help hearing people communicate more effectively with the hearing- or speech-impaired. The Sign to Voice system prototype, S2V, was developed using a feed-forward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured from a video camera and used to train the neural network for classification purposes. The experimental results show that the neural network achieved satisfactory results for sign-to-voice translation.
Abstract: The purpose of this research is to develop and apply the RSCMAC to enhance the dynamic accuracy of the Global Positioning System (GPS). GPS devices provide accurate positioning, speed detection, and a highly precise time standard over more than 98% of the Earth's surface. The overall operation of the Global Positioning System involves 24 GPS satellites in space; signal transmission on two carrier frequencies (Link 1 and Link 2) with two sets of random telegraphic codes (the C/A code and P code); and on-earth monitoring stations or client GPS receivers. With only four satellites, the client position and its elevation can be determined rapidly; the more satellites received, the more accurately the position can be decoded. The standard positioning accuracy of the simplified GPS receiver has greatly increased, but owing to satellite clock error, tropospheric delay, and ionospheric delay, current measurement accuracy is at the level of 5-15 m. To increase dynamic GPS positioning accuracy, most researchers rely on an inertial navigation system (INS) and the installation of other sensors or maps for assistance. This research instead exploits the RSCMAC's advantages of fast learning, assured learning convergence, and the capability to solve time-related dynamic system problems, together with a static positioning calibration structure, to improve GPS dynamic accuracy. The improvement is achieved by using the RSCMAC with GPS receivers to collect dynamic error data for error prediction, and then using the predicted error to correct the GPS dynamic positioning data. The ultimate purpose of this research is to reduce the dynamic positioning error of cheap GPS receivers; the economic benefits are enhanced as the accuracy increases.
Abstract: In this work, we present an automatic vehicle detection system for airborne videos using combined features. We propose a pixel-wise classification method for vehicle detection using Dynamic Bayesian Networks. Although the classification is pixel-wise, relations among neighboring pixels in a region are preserved in the feature extraction process. The main novelty of the detection scheme is that the extracted combined features comprise not only pixel-level information but also region-level information. Afterwards, tracking is performed on the detected vehicles using an efficient Kalman filter with dynamic particle sampling. Experiments were conducted on a wide variety of airborne videos. We do not assume prior information about camera height, orientation, or target object sizes in the proposed framework. The results demonstrate the flexibility and good generalization ability of the proposed method on a challenging dataset.
Abstract: Robustness is one of the primary performance criteria for an Intelligent Video Surveillance (IVS) system. One of the key factors in enhancing the robustness of dynamic video analysis is providing accurate and reliable means for shadow detection. If left undetected, shadow pixels may result in incorrect object tracking and classification, as they tend to distort localization and measurement information. Most of the algorithms proposed in the literature are computationally expensive, some to the extent of equalling the computational requirements of motion detection. In this paper, the homogeneity property of shadows is explored in a novel way for shadow detection. An adaptive division-image analysis (which highlights the homogeneity property of shadows), followed by a relatively simple projection histogram analysis for penumbra suppression, is the key novelty of our approach.
Abstract: Texture classification is an important image processing task with a broad range of applications. Many different techniques for texture classification have been explored. Using sparse approximation as a feature extraction method for texture classification is a relatively new approach, and Skretting et al. recently presented the Frame Texture Classification Method (FTCM), showing very good results on classical texture images. As an extension of that work, the FTCM is here tested on a real-world application: the detection of abnormalities in mammograms. Some extensions to the original FTCM that are useful in certain applications are implemented: two different smoothing techniques and a vector augmentation technique. Both the detection of microcalcifications (as a primary detection technique and as the last stage of a detection scheme) and the detection of soft-tissue lesions in mammograms are explored. All the results are interesting, and in particular the results of using FTCM on regions of interest as the last stage of a detection scheme for microcalcifications are promising.
Abstract: The main objective of this paper is to develop a graphical technique for the modeling, simulation, and diagnosis of industrial systems. This is especially important for a complex system such as a pressurized water nuclear reactor, which exhibits various nonlinearities and time scales; in this case the analytical approach is cumbersome and does not give a quick picture of the system's evolution. The Bond Graph tool enabled us to transform the analytical model into a graphical model, and the Bond Graph simulation software SYMBOLS 2000 made it possible to validate the model against the results given by the technical specifications. We introduce an analysis of the problems involved in fault localization and identification in complex industrial processes. We propose a fault detection method applied to diagnosis and to determining the severity of a detected fault, and we show how the new diagnosis approaches can be applied to complex system control. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems become very complex as soon as the systems considered are no longer elementary. Faced with this complexity, we chose the Fault Detection and Isolation (FDI) method, analyzing the associated control problem and designing a reliable diagnosis system capable of handling spatially distributed complex dynamic systems, applied to a standard pressurized water nuclear reactor.
Abstract: The Bond Graph, as a unified multidisciplinary tool, is widely used not only for dynamic modelling but also for Fault Detection and Isolation, because of its structural and causal properties. A binary Fault Signature Matrix can be generated systematically, but making the final binary decision is not always feasible because of the problems inherent in such a method. The purpose of this paper is to introduce a methodology that improves the classical binary decision-making method, so that unknown and identical failure signatures can be treated and robustness improved. The approach consists of associating the evaluated residuals with component reliability data to build a Hybrid Bayesian Network. This network is used in two distinct inference procedures: one for the continuous part and the other for the discrete part. The continuous nodes of the network are the prior probabilities of the component failures, which are used by the inference procedure on the discrete part to compute the posterior probabilities of the failures. The developed methodology is applied to a real steam generator pilot process.
Abstract: This paper presents two novel techniques for skew estimation of binary document images. The algorithms are based on connected component analysis and the Hough transform, and both focus on reducing the amount of input data provided to the Hough transform. In the first method, referred to as the word centroid approach, the centroids of selected words are used for skew detection. In the second method, referred to as the dilate-and-thin approach, selected characters are blocked and dilated to obtain word blocks, and thinning is then applied; the final image fed to the Hough transform contains the thinned coordinates of the word blocks. The methods succeed in reducing the computational complexity of Hough-transform-based skew estimation algorithms. Promising experimental results are provided to demonstrate the effectiveness of the proposed methods.
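The word-centroid idea above can be sketched as a Hough-style search: each centroid votes for a perpendicular distance rho at every candidate skew angle, and the angle at which the votes cluster most sharply (i.e., the centroids line up into text lines) is the estimate. This is a simplified stand-in for a full Hough transform, and the synthetic centroids are an assumption:

```python
import numpy as np

def estimate_skew(points, angle_range=10.0, angle_step=0.1, bin_width=3.0):
    """Search candidate skew angles (degrees); for each angle, project the
    centroids onto the text-line normal and score how sharply the
    projections cluster (sum of squared vote counts over rho bins)."""
    x, y = points[:, 0], points[:, 1]
    best_angle, best_score = 0.0, -1.0
    for deg in np.arange(-angle_range, angle_range + angle_step, angle_step):
        a = np.radians(deg)
        rho = y * np.cos(a) - x * np.sin(a)      # distance along line normal
        edges = np.arange(rho.min(), rho.max() + 2 * bin_width, bin_width)
        counts, _ = np.histogram(rho, bins=edges)
        score = np.sum(counts.astype(float) ** 2)  # sharp peaks -> high score
        if score > best_score:
            best_angle, best_score = deg, score
    return best_angle

# Synthetic word centroids: 5 text lines of 12 words, skewed by 3 degrees.
xs = np.tile(np.arange(12) * 40.0, 5)
ys = np.repeat(np.arange(5) * 30.0, 12)
skew = np.radians(3.0)
pts = np.column_stack([xs * np.cos(skew) - ys * np.sin(skew),
                       xs * np.sin(skew) + ys * np.cos(skew)])
print(estimate_skew(pts))
```

Using only the word centroids (60 points here, rather than every foreground pixel) is what gives the data reduction the abstract describes.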
Abstract: Legionella pneumophila is involved in more than 95% of cases of severe atypical pneumonia. Infection occurs mainly by inhalation of indoor aerosols from water-coolant systems. Because some Legionella strains may be viable but not culturable, Taq polymerase DNA amplification and semi-nested PCR were carried out to detect the Legionella-specific 16S rDNA sequence. For this purpose, 1.5-litre water samples from 77 water-coolant systems were collected from four different hospitals, two nursing homes, and one student hostel in Kerman city, Iran, each in a brand-new plastic bottle, during the summer season of 2006 (from April to August). The samples were filtered under sterile conditions through a Millipore membrane filter. DNA was extracted from the membrane and used for PCR to detect Legionella spp. The PCR product was then subjected to semi-nested PCR for the detection of L. pneumophila. Of the 77 water samples tested by PCR, 30 (39%) were positive for Legionella species, and L. pneumophila was detected in 14 (18.2%) of the water samples by semi-nested PCR. From these results it can be concluded that the water-coolant systems of hospitals and nursing homes in Kerman city, Iran, are highly contaminated with L. pneumophila and pose a serious concern. We therefore recommend avoiding this type of coolant system in hospitals and nursing homes.
Abstract: Exact expressions for the bit-error probability (BEP) of coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM), using an array of antennas with maximal ratio combining (MRC) in an interference-limited flat-fading system in a Nakagami-m environment, are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM by plotting error probabilities versus average signal-to-interference ratio (SIR) for various values of the diversity order N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also presented in tabular form to examine the performance of the system in the presence of interferers as the diversity order increases. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems with space diversity in wireless fading channels.