Abstract: The submitted paper deals with the problems of trapping and enriching gases and aerosols of the substances to be determined in the ambient atmosphere. Further, the paper focuses on the working principle of the miniaturized portable continuous concentrator we have designed and on the possibilities of its application in air sampling and in the accumulation of the organic and inorganic substances with which the air is contaminated. Stress is laid on trapping vapours and aerosols of solid substances with comparatively low vapour pressure, such as explosive compounds.
Abstract: Decision fusion is a hot research topic in the classification area, aiming to achieve the best possible performance for the task at hand. In this paper, we investigate the usefulness of this concept for improving change detection accuracy in remote sensing. Outputs of two fuzzy change detectors, based respectively on simultaneous and comparative analysis of multitemporal data, are fused by using fuzzy integral operators. This method fuses the objective evidence produced by the change detectors with respect to fuzzy measures that express the difference in performance between them. The proposed fusion framework is evaluated in comparison with some ordinary fuzzy aggregation operators. Experiments carried out on two SPOT images showed that the fuzzy integral performed best: it improves the change detection accuracy while tending to equalize the accuracy rates of the change and no-change classes.
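The fusion step can be illustrated with a small sketch. The snippet below is our own illustrative Python, not the authors' code: it computes a discrete Choquet integral of two detector confidence scores with respect to a fuzzy measure; the detector names and all measure values are hypothetical.

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of `scores` (dict source -> confidence in [0, 1])
    with respect to a fuzzy measure `measure` (dict frozenset -> weight in [0, 1])."""
    # Sort sources by ascending confidence.
    sources = sorted(scores, key=scores.get)
    total, prev = 0.0, 0.0
    for i, s in enumerate(sources):
        # Coalition of all sources whose confidence is >= scores[s].
        coalition = frozenset(sources[i:])
        total += (scores[s] - prev) * measure[coalition]
        prev = scores[s]
    return total

# Hypothetical fuzzy measure: the "simultaneous" detector is judged more reliable.
mu = {
    frozenset(): 0.0,
    frozenset({"simultaneous"}): 0.7,
    frozenset({"comparative"}): 0.4,
    frozenset({"simultaneous", "comparative"}): 1.0,
}
fused = choquet_integral({"simultaneous": 0.9, "comparative": 0.6}, mu)
```

Because the fused value always lies between the smallest and largest input confidences, the Choquet integral behaves like a weighted compromise whose weights depend on which detectors agree.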
Abstract: Cosmic showers, during their transit through space, produce secondary products as a result of interactions with the intergalactic or interstellar medium; on entering the Earth's atmosphere, these generate secondary particles called Extensive Air Showers (EAS). Detection and analysis of high-energy particle showers involve a plethora of theoretical and experimental work, with a host of constraints resulting in measurement inaccuracies. There is therefore a need to develop a readily available system based on soft-computing approaches that can be used for EAS analysis, since soft-computing tools such as Artificial Neural Networks (ANNs) can be trained as classifiers that adapt to and learn the surrounding variations. Single classifiers, however, fail to reach optimal decision making in many situations, so Multiple Classifier Systems (MCSs) are preferred to enhance the ability of the system to adjust its decisions to finer variations. This work describes the formation of an MCS using a Multi-Layer Perceptron (MLP), a Recurrent Neural Network (RNN) and a Probabilistic Neural Network (PNN), with data inputs from correlation-mapping Self-Organizing Map (SOM) blocks and the output optimized by another SOM. The results show that the setup can be adopted in real-time practical applications for predicting the primary energy and location of an EAS from density values captured by detectors in a circular grid.
Abstract: Loop detectors report traffic characteristics in real time and are at the core of the traffic control process. Intuitively, one would expect that as the density of detection increases, so would the quality of estimates derived from detector data. However, as detector deployment increases, so do the associated operating and maintenance costs. Thus, traffic agencies often need to decide, given their resource constraints, where to add new detectors and which detectors should continue receiving maintenance.
This paper evaluates the effect of detector spacing on freeway travel time estimation. A freeway section (Interstate-15) in the Salt Lake City metropolitan region is examined. The research reveals that travel time accuracy does not necessarily deteriorate with increased detector spacing; rather, the actual location of the detectors has far greater influence on the quality of travel time estimates. The study presents an innovative computational approach that delivers optimal detector locations through a process that relies on a Genetic Algorithm formulation.
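A Genetic Algorithm search over detector locations can be sketched roughly as follows. This is a hedged illustration, not the paper's implementation: the section length, population sizes, and especially the fitness function are placeholders. Here a candidate layout is scored by how evenly it covers the section, whereas the study scores candidates by travel time estimation accuracy.

```python
import random

random.seed(0)

SECTION_KM = 20.0   # hypothetical freeway section length
N_DETECTORS = 4     # number of detectors to place
POP, GENS = 30, 40  # illustrative GA parameters

def fitness(locs):
    # Placeholder objective: penalize the largest gap between adjacent
    # detectors (the paper instead uses travel-time estimation error).
    pts = sorted(locs)
    gaps = [pts[0]] + [b - a for a, b in zip(pts, pts[1:])] + [SECTION_KM - pts[-1]]
    return -max(gaps)

def crossover(a, b):
    # One-point crossover between two parent layouts.
    cut = random.randrange(1, N_DETECTORS)
    return a[:cut] + b[cut:]

def mutate(locs, rate=0.2):
    # Jitter each location with probability `rate`, clamped to the section.
    return [min(SECTION_KM, max(0.0, x + random.gauss(0, 1)))
            if random.random() < rate else x for x in locs]

pop = [[random.uniform(0, SECTION_KM) for _ in range(N_DETECTORS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]  # keep the best layouts, breed the rest from them
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(POP - 10)]

best = max(pop, key=fitness)
```

With an even-coverage objective the GA drives the layout toward roughly uniform spacing; swapping in a travel-time error objective, as the paper does, is what lets it discover that *where* detectors sit matters more than how many there are.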
Abstract: For gamma radiation detection, assemblies consist of scintillation crystals coupled to a photomultiplier tube, together with a preamplifier connected to the detector, because the signals from the photomultiplier tube are of small amplitude. After pre-amplification, the signals are sent to the amplifier and then to the multichannel analyser. The multichannel analyser sorts all incoming electrical signals according to their amplitudes and bins the detected photons into channels covering small energy intervals. The energy range of each channel depends on the gain settings of the multichannel analyser and on the high voltage across the photomultiplier tube. The output spectrum data of the two main isotopes studied were fed into the biomass program and processed with a Matlab program to obtain the solid holdup image (solid spherical nuclear fuel).
Abstract: We present in this paper a new approach to specific JPEG steganalysis and propose studying statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we introduced new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step, which makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression step, its efficiency is independent of the quantity of hidden information.
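The entropy statistic underlying this approach can be illustrated with a generic Shannon-entropy sketch of our own (not the authors' exact feature extraction): measuring the entropy of a compressed byte stream before and after a trial embedding exposes the deviation described above.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte of a byte stream, e.g. the
    entropy-coded segment of a JPEG file before/after a trial embedding."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A maximally mixed stream (all 256 byte values equally likely) scores 8 bits per byte, a constant stream scores 0; well-compressed data sits near the top, so even a small entropy drop after re-embedding is informative.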
Abstract: Salient points are frequently used to represent local properties of the image in content-based image retrieval. In this paper, we present a reduction algorithm that extracts the locally most salient points such that they not only give a satisfying representation of an image but also make the image retrieval process efficient. This algorithm recursively reduces the continuous point set by their corresponding saliency values under a top-down approach. The resulting salient points are evaluated with an image retrieval system using the Hausdorff distance. The experiments show that our method is robust and that the extracted salient points provide better retrieval performance compared with other point detectors.
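The Hausdorff distance used in the retrieval evaluation follows the standard definition; the snippet below is a straightforward sketch of that definition in Python, not the authors' code.

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite 2-D point sets: the larger,
    over both directions, of the worst-case nearest-neighbour distance."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```

Because the measure is driven by the worst-matched point, a retrieval system using it rewards salient-point sets that cover the query image without leaving isolated outliers.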
Abstract: Prior to the use of the detectors, a characteristics comparison study was performed and a baseline established. In patient-specific QA, the portal dosimetry mean values of area gamma, average gamma and maximum gamma were 1.02, 0.31 and 1.31, with standard deviations of 0.33, 0.03 and 0.14, for IMRT; the corresponding values were 1.58, 0.48 and 1.73, with standard deviations of 0.31, 0.06 and 0.66, for VMAT. With the ImatriXX 2-D array system, on average 99.35% of the pixels passed the 3%-3 mm gamma criterion, with a standard deviation of 0.24, for dynamic IMRT. For VMAT, the average value was 98.16% with a standard deviation of 0.86. The results showed that both systems can be used in patient-specific QA measurements for IMRT and VMAT. The values obtained with the portal dosimetry system were found to be relatively more consistent than those obtained with the ImatriXX 2-D array system.
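The 3%-3 mm gamma criterion quoted above can be illustrated with a simple 1-D sketch. This is our own generic implementation of the standard gamma-index definition, not the QA software used in the study; clinical implementations work on 2-D/3-D dose grids with interpolation.

```python
def gamma_1d(ref, meas, spacing_mm=1.0, dose_tol=0.03, dist_tol_mm=3.0):
    """1-D gamma index: for each reference point, the minimum over all
    measured points of sqrt((dose diff / 3%)^2 + (distance / 3 mm)^2).
    Doses are relative (normalized to 1.0); a point passes if gamma <= 1."""
    gammas = []
    for i, d_ref in enumerate(ref):
        best = min(
            ((d_m - d_ref) / dose_tol) ** 2
            + ((j - i) * spacing_mm / dist_tol_mm) ** 2
            for j, d_m in enumerate(meas)
        )
        gammas.append(best ** 0.5)
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1, as reported in QA results."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

An identical pair of profiles passes at 100%, while a 10% dose error with no spatial compensation fails, which is the behaviour the 3%-3 mm criterion is designed to capture.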
Abstract: We report the size dependence of 1D superconductivity in ultrathin (10-130 nm) nanowires produced by coating suspended carbon nanotubes with a superconducting NbN thin film. The resistance-temperature characteristic curves for samples with wire widths ≥ 25 nm show the superconducting transition. In contrast, the samples with 10-nm width do not exhibit the superconducting transition, owing to the quantum size effect. The differential resistance vs. current density characteristic curves show peaks, indicating that Josephson junctions are formed in the nanowires. The presence of the Josephson junctions is well explained by measurements of the magnetic field dependence of the critical current. This understanding allows further expansion of the potential applications of NbN, which is used, for example, in single-photon detectors.
Abstract: Bleeding in the digestive duct is an important diagnostic parameter for patients. Blood in an endoscopic image can be determined by investigating the color tone of blood, which varies with the degree of oxygenation, under- or over-illumination, food debris and secretions, etc. However, we found that how the raw images obtained from the capsule detectors are pre-processed is very important. We applied various image-processing methods suited to capsule endoscopic images in order to remove noise and unbalanced pixel sensitivities. The results showed that much improvement was achieved by applying additional pre-processing techniques ahead of the algorithm for determining bleeding areas.
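A color-tone test of the kind described can be sketched as follows. The red-ratio feature and its 0.55 threshold are purely illustrative assumptions of ours, not the paper's algorithm, and real pipelines would apply the noise and sensitivity corrections first.

```python
def is_blood_like(r, g, b, threshold=0.55):
    """Flag a pixel as blood-like when the red channel dominates.
    The ratio feature and the 0.55 threshold are illustrative only."""
    total = r + g + b
    return total > 0 and r / total > threshold

def bleeding_fraction(pixels):
    """Fraction of (r, g, b) pixels flagged as blood-like in a frame."""
    pixels = list(pixels)
    return sum(is_blood_like(*p) for p in pixels) / len(pixels)
```

A per-frame fraction like this is the kind of quantity whose reliability improves when pre-processing removes illumination and sensor artifacts, as the abstract reports.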
Abstract: Semiconductor detector arrays are widely used in
high-temperature plasma diagnostics. They have a fast response,
which allows observation of many processes and instabilities in
tokamaks. In this paper, several diagnostics based on semiconductor arrays are reviewed: cameras, AXUV photodiodes (often referred to as fast “bolometers”) and detectors of both soft X-rays and visible light recently installed on the COMPASS tokamak. Recent results from the 2012 spring and summer campaigns are presented. Examples of the use of the detectors are shown for plasma shape determination, fast calculation of the radiation center, two-dimensional plasma radiation tomography in different spectral ranges, observation of impurity inflow, and investigation of MHD activity in COMPASS tokamak discharges.
Abstract: Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to classical binary-signal maximum likelihood detection using a matched filter driven by On-Off keying (OOK) modulation. We found that the performance of SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
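The classical baseline in this comparison, maximum likelihood OOK detection with a matched filter over AWGN, can be sketched with a self-contained simulation. The amplitude and noise level are illustrative assumptions of ours; this does not reproduce the paper's Ricean or scattering channel models.

```python
import random

random.seed(1)

def simulate_ook_ber(n_bits=20_000, amplitude=2.0, noise_std=0.5):
    """Monte-Carlo BER of matched-filter (threshold) detection for OOK
    over an AWGN channel. The threshold amplitude/2 is the ML decision
    boundary for equiprobable bits and equal-variance noise."""
    errors = 0
    for _ in range(n_bits):
        bit = random.getrandbits(1)                         # 0 or 1 sent
        received = bit * amplitude + random.gauss(0.0, noise_std)
        decided = 1 if received > amplitude / 2 else 0      # ML threshold
        errors += decided != bit
    return errors / n_bits
```

With these numbers the theoretical BER is Q(amplitude / (2 · noise_std)) = Q(2) ≈ 0.023, and the simulation lands close to it; an SVM-based detector would be trained on labelled received samples instead of using the fixed threshold.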
Abstract: Traffic management and information systems, which rely on a system of sensors, aim to describe traffic in urban areas in real time using a set of parameters and to estimate them. Although the state of the art focuses on data analysis, little has been done in the direction of prediction. In this paper, we describe a machine learning system for traffic flow management and control applied to a traffic flow prediction problem. The new algorithm is obtained by using the Random Forests algorithm as a weak learner inside the AdaBoost algorithm. We show that our algorithm performs well on real data and, according to the Traffic Flow Evaluation model, makes it possible to estimate and predict whether or not there is congestion at a given time at road intersections.
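The boosting loop at the heart of this combination can be sketched in miniature. To keep the sketch self-contained we swap the Random Forest weak learner for a one-dimensional decision stump; the weight-update and vote-weighting logic is standard AdaBoost, but the data, round count, and weak learner are our own illustrative choices, not the paper's system.

```python
import math

def stump(xs, ys, w):
    """Best threshold stump on 1-D data under sample weights w
    (stand-in weak learner; the paper uses Random Forests here)."""
    best = None
    for t in sorted(set(xs)):
        for sign in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if (1 if sign * (xi - t) > 0 else -1) != yi)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def adaboost(xs, ys, rounds=10):
    """AdaBoost over threshold stumps; labels are +1/-1."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, t, sign = stump(xs, ys, w)
        err = max(err, 1e-10)                       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # vote weight
        model.append((alpha, t, sign))
        # Up-weight misclassified samples, down-weight correct ones.
        w = [wi * math.exp(-alpha * yi * (1 if sign * (xi - t) > 0 else -1))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, x):
    score = sum(alpha * (1 if sign * (x - t) > 0 else -1)
                for alpha, t, sign in model)
    return 1 if score > 0 else -1

# Toy separable data: congestion indicator vs. a single traffic feature.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
model = adaboost(xs, ys)
```

Replacing `stump` with a Random Forest trained on the weighted sample, as the paper proposes, keeps this outer loop unchanged.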
Abstract: A method and apparatus for noninvasive measurement of blood glucose concentration, based on a laser beam transilluminated through the index finger, is reported in this paper. The method uses an atomic-gas (He-Ne) laser operating at a wavelength of 632.8 nm. During measurement, the index finger is inserted into the glucose sensing unit; the transilluminated optical signal is converted into an electrical signal, compared with a reference electrical signal, and the resulting difference signal is processed by a signal processing unit, which presents the result as a blood glucose concentration. This method would enable monitoring the blood glucose level of diabetic patients continuously, safely and noninvasively.
Abstract: Support Vector Machine (SVM) is a statistical learning tool built on the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive color Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary signaling, driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched-filter-, innovation-filter- and Wiener-filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
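The Rayleigh-fading baseline against which the SVM is measured can be illustrated with a self-contained simulation, cross-checked against the standard closed-form average BER for coherent BPSK over flat Rayleigh fading, 0.5·(1 − sqrt(γ/(1+γ))) at mean SNR γ. This is a generic sketch of ours with an illustrative SNR, not the paper's detector or channel setup.

```python
import math
import random

random.seed(3)

def rayleigh_bpsk_ber(snr_db=10.0, n_bits=100_000):
    """Monte-Carlo BER of coherent BPSK over flat Rayleigh fading plus AWGN,
    using the classical ML detector with perfect channel knowledge."""
    snr = 10 ** (snr_db / 10)
    errors = 0
    for _ in range(n_bits):
        bit = 1 if random.getrandbits(1) else -1
        # Rayleigh gain with E[h^2] = 1 (two N(0, 0.5) components).
        h = math.sqrt(0.5) * math.hypot(random.gauss(0, 1), random.gauss(0, 1))
        noise = random.gauss(0, math.sqrt(1 / (2 * snr)))
        decided = 1 if h * bit + noise > 0 else -1   # coherent detection
        errors += decided != bit
    return errors / n_bits

def rayleigh_bpsk_ber_theory(snr_db=10.0):
    """Closed-form average BER for coherent BPSK over Rayleigh fading."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * (1 - math.sqrt(snr / (1 + snr)))
```

At 10 dB the closed form gives about 0.023, far worse than the AWGN-only case at the same SNR, which is why fading channels are the "hostile" regime where the paper reports the SVM's advantage.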
Abstract: The amounts of radioactivity in igneous rocks have been investigated; samples were collected from a total of eight basalt rock types in the northeastern Kurdistan region of Iraq. The activity concentrations of the 226Ra (238U) series, the 228Ac (232Th) series, 40K and 137Cs were measured using planar HPGe and NaI(Tl) detectors. Across the study area, the radium equivalent activities Raeq of the samples under investigation were found in the range of 22.16 to 77.31 Bq/kg, with an average value of 44.8 Bq/kg, far below the internationally accepted value of 370 Bq/kg. To estimate the health effects of this natural radioactivity, the average values of the absorbed gamma dose rate D (55 nGy/h), the indoor and outdoor annual effective dose rates Eied (0.11 mSv/y) and Eoed (0.03 mSv/y), the external hazard index Hex (0.138), the internal hazard index Hin (0.154), and the representative level index Iγr (0.386) have been calculated and found to be lower than the worldwide average values.
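The radium equivalent activity reported above follows the widely used weighting Raeq = C_Ra + 1.43·C_Th + 0.077·C_K over the activity concentrations in Bq/kg. The sketch below applies that standard formula to a hypothetical basalt sample of our own invention, not the paper's data.

```python
def radium_equivalent(c_ra, c_th, c_k):
    """Radium equivalent activity (Bq/kg) from the activity concentrations
    of 226Ra, 232Th and 40K, using the standard weighting coefficients."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

# Hypothetical basalt sample: 15 Bq/kg Ra, 12 Bq/kg Th, 150 Bq/kg K.
sample_raeq = radium_equivalent(15.0, 12.0, 150.0)
```

The weights reflect the assumption that 370 Bq/kg of 226Ra, 259 Bq/kg of 232Th and 4810 Bq/kg of 40K each deliver the same gamma dose rate, which is why 370 Bq/kg serves as the acceptance limit.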
Abstract: The network traffic data provided for the design of intrusion detection are typically large, contain ineffective information, and enclose only limited and ambiguous information about users' activities. We study these problems and propose a two-phase approach in our intrusion detection design. In the first phase, we develop a correlation-based feature selection algorithm to remove worthless information from the original high-dimensional database. Next, we design an intrusion detection method to address the uncertainty caused by limited and ambiguous information. In the experiments, we use six UCI databases and the DARPA KDD99 intrusion detection data set as our evaluation tools. Empirical studies indicate that our feature selection algorithm is capable of reducing the size of the data set, and that our intrusion detection method achieves better performance than the participating intrusion detectors.
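A minimal version of correlation-based feature filtering can be sketched as follows. This is a generic Pearson-correlation filter of ours with an illustrative threshold; the paper's algorithm may also account for feature-feature redundancy, which this sketch omits.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy) if vx and vy else 0.0

def select_features(rows, labels, threshold=0.3):
    """Keep indices of feature columns whose absolute correlation with the
    class label exceeds `threshold` (0.3 is an illustrative cut-off)."""
    n_features = len(rows[0])
    return [j for j in range(n_features)
            if abs(pearson([r[j] for r in rows], labels)) > threshold]
```

On a toy data set where only the first column tracks the label, the filter keeps that column and drops the constant one, which is the kind of dimensionality reduction the abstract reports.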
Abstract: In past years, a lot of effort has been made in the field of face detection. The human face contains important features that can be used by vision-based automated systems to identify and recognize individuals. Face location, the primary step of vision-based automated systems, finds the face area in the input image; an accurate location of the face is still a challenging task. The Viola-Jones framework has been widely used by researchers to detect the location of faces and objects in a given image. Face detection classifiers are shared by public communities such as OpenCV, and an evaluation of these classifiers will help researchers choose the best classifier for their particular needs. This work focuses on the evaluation of face detection classifiers with respect to facial landmarks.
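Evaluating detection classifiers typically means matching detected boxes to ground-truth faces by overlap. The sketch below is our own generic implementation of intersection-over-union matching, not tied to OpenCV or to the landmark-based criterion the work itself uses.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def detection_rate(detections, ground_truth, thresh=0.5):
    """Fraction of ground-truth faces matched by some detection with
    IoU above `thresh` (0.5 is a common but illustrative choice)."""
    hits = sum(any(iou(d, g) > thresh for d in detections) for g in ground_truth)
    return hits / len(ground_truth)
```

Ranking shared cascades by a score like this is one concrete way to carry out the classifier comparison the abstract calls for.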
Abstract: The temporal nature of negative selection is an underexploited area. In a negative selection system, newly generated antibodies go through a maturing phase, and the survivors of that phase then wait to be activated by incoming antigens after a certain number of matches. Those without enough matches will age and die, while those with enough matches (i.e., being activated) will become active detectors. A currently active detector may also age and die if it cannot find any match within a pre-defined (lengthy) period of time. Therefore, what matters in a negative selection system is the dynamics of the involved parties in the current time window, not the whole time duration, which may be up to eternity. This property has the potential to define the uniqueness of negative selection in comparison with other approaches. On the other hand, a negative selection system is only trained with “normal” data samples; it has to learn and discover unknown “abnormal” data patterns on the fly by itself. Consequently, it is more appropriate to utilize negative selection as a system for pattern discovery and recognition rather than just pattern recognition. In this paper, we study the potential of using negative selection in discovering unknown temporal patterns.
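The core generate-and-censor step of negative selection can be sketched as follows. This is a minimal real-valued variant of our own, with hypothetical radius and detector-count parameters; the temporal dynamics emphasized above (maturation windows, activation counts, aging) are deliberately omitted.

```python
import random

random.seed(4)

def train_detectors(self_samples, n_detectors=50, radius=0.1):
    """Generate random candidate detectors in [0, 1] and keep only those
    that do NOT match any 'normal' (self) training sample."""
    detectors = []
    while len(detectors) < n_detectors:
        d = random.random()
        if all(abs(d - s) > radius for s in self_samples):
            detectors.append(d)
    return detectors

def is_anomalous(x, detectors, radius=0.1):
    """A sample is flagged when any surviving detector matches it."""
    return any(abs(x - d) <= radius for d in detectors)
```

Because the detectors are censored against self samples only, anything they match must lie outside the normal region, which is how the system discovers unseen abnormal patterns without ever training on them.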
Abstract: Sleep spindles are the most interesting hallmark of stage 2 sleep EEG. Their accurate identification in a polysomnographic signal is essential for sleep professionals, helping them mark stage 2 sleep. Sleep spindles are also promising objective indicators of neurodegenerative disorders. Visual spindle scoring, however, is a tedious workload. In this paper, three different approaches are used for the automatic detection of sleep spindles: the Short-Time Fourier Transform, the Wavelet Transform and Wave Morphology for Spindle Detection. To improve the results, a combination of the three detectors is presented and compared with human expert scorers. The best performance is obtained with the combination of the three algorithms, which resulted in a sensitivity and specificity of 94% when compared to human expert scorers.
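The detector-combination step can be illustrated with a simple majority-vote sketch; this is our own generic illustration, with invented per-epoch decisions, and the paper's actual combination rule may differ.

```python
def majority_vote(*detector_outputs):
    """Combine per-epoch binary decisions of several detectors:
    an epoch is marked a spindle when most detectors agree."""
    return [int(sum(votes) > len(votes) / 2) for votes in zip(*detector_outputs)]

def sensitivity_specificity(pred, truth):
    """Sensitivity and specificity of binary predictions vs. expert labels."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-epoch decisions of the three detectors and expert truth.
stft, wavelet, morph = [1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 0]
fused = majority_vote(stft, wavelet, morph)
sens, spec = sensitivity_specificity(fused, [1, 1, 0, 0])
```

In this toy example the vote corrects one miss and one false alarm of the individual detectors, which is the mechanism behind the improved sensitivity and specificity the abstract reports.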