Abstract: In this paper a class of analog algorithms based on the
concept of the Cellular Neural Network (CNN) is applied to the
processing of an important class of medical images, namely
retina images, for detecting various symptoms connected with
diabetic retinopathy. Specific processing tasks such as
morphological operations, linear filtering and thresholding are
proposed, the corresponding template values are given, and
simulations on real retina images are provided.
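As an illustration of the thresholding task mentioned above, here is a minimal single-cell simulation of a CNN with the standard THRESHOLD template (center feedback 2, bias -t) from the common CNN template library; the template values are illustrative and not necessarily those proposed in the paper.

```python
def cnn_threshold(u, t, dt=0.05, steps=400):
    """Simulate a single CNN cell with the standard THRESHOLD template:
    feedback A = 2 (center only), control B = 0, bias z = -t.
    State dynamics: x' = -x + 2*f(x) - t, with x(0) = u, where
    f(x) = 0.5*(|x + 1| - |x - 1|) is the piecewise-linear output."""
    f = lambda x: 0.5 * (abs(x + 1) - abs(x - 1))
    x = u
    for _ in range(steps):
        x += dt * (-x + 2 * f(x) - t)  # forward-Euler integration
    return f(x)  # settles near +1 if u > t, near -1 if u < t

# Apply pixel-wise to a grayscale image scaled to [-1, 1]:
image = [0.8, 0.1, -0.4, 0.35]
binary = [cnn_threshold(u, t=0.3) for u in image]
```

Because the template is space-invariant, the same single-cell dynamics run independently at every pixel, which is what makes CNN thresholding an analog-parallel operation.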
Abstract: In this paper, we have developed an explicit analytical
drain current model, comprising surface channel potential and
threshold voltage, to explain the advantages of the proposed
Gate Stack Double Diffusion (GSDD) MOSFET design over a
conventional MOSFET with the same geometric specifications.
The incorporation of a high-k layer between the oxide layer and
the gate metal improves the immunity of the proposed design
against self-heating effects. To show the efficiency of the
proposed structure, we simulate a power chopper circuit. Using
the proposed structure to design a power chopper circuit shows
that the GSDD MOSFET can improve the operation of the circuit in
terms of power dissipation and immunity to self-heating effects.
The results obtained agree closely with the 2D simulation
results, confirming the validity of the proposed model.
Abstract: The rapid growth of e-Commerce services has been
clearly observed over the past decade. However, the methods used to
verify authenticated users still rely widely on numeric
approaches, so the search for other verification methods suitable for
online e-Commerce is an interesting issue. In this paper, a new online
signature-verification method using angular transformation is
presented. Delay shifts present in online signatures are estimated by
an estimation method relying on angle representation. In the
proposed signature-verification algorithm, all components of the input
signature are extracted by considering the discontinuous break points
in the stream of angular values. The estimated delay shift is then
captured by comparison with the selected reference signature, and the
matching error is computed as the main feature for the verification
process. The threshold offsets are calculated from the two error
characteristics of the signature verification problem, the False
Rejection Rate (FRR) and the False Acceptance Rate (FAR). The level
of these two error rates depends on the chosen decision threshold,
whose value is set so as to realize the Equal Error Rate (EER;
FAR = FRR). Experimental results from a simple program, deployed
on the Internet to demonstrate e-Commerce services, show that the
proposed method provides 95.39% correct verification, 7% better
than a DP-matching-based signature-verification method. In addition,
signature verification with extracted components provides more
reliable results than making a decision on the whole signature.
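The EER decision threshold described above can be sketched as follows; the matching-error scores are hypothetical, not the paper's data.

```python
def eer_threshold(genuine, forgery):
    """Sweep decision thresholds over dissimilarity scores and return
    the threshold where FAR and FRR are closest (the EER point).
    genuine: matching errors of authentic signatures (small is good);
    forgery: matching errors of forgeries (should be large)."""
    candidates = sorted(set(genuine) | set(forgery))
    best = None
    for t in candidates:
        frr = sum(g > t for g in genuine) / len(genuine)   # authentic rejected
        far = sum(f <= t for f in forgery) / len(forgery)  # forgery accepted
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, t, far, frr)
    return best[1], best[2], best[3]

# Hypothetical matching-error scores (not the paper's data):
gen = [0.10, 0.15, 0.22, 0.25, 0.30]
forg = [0.28, 0.40, 0.55, 0.60, 0.70]
t, far, frr = eer_threshold(gen, forg)
```

At the returned threshold the two error rates coincide, which is exactly the EER operating point the abstract uses to set the decision threshold.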
Abstract: Emerging bio-engineering fields such as brain-computer
interfaces, neuroprosthesis devices, and the modeling and
simulation of neural networks have led to increased research activity
in algorithms for the detection, isolation and classification of Action
Potentials (AP) from noisy data trains. Current techniques in the field
of 'unsupervised, no-prior-knowledge' biosignal processing include
energy operators, wavelet detection and adaptive thresholding. These
tend to bias towards larger AP waveforms, APs may be missed due to
deviations in spike shape and frequency, and correlated noise
spectra can cause false detections. Such algorithms also tend to
suffer from large computational expense.
A new signal detection technique based upon the ideas of phase-space
diagrams and trajectories is proposed, based upon the use of a
delayed copy of the AP to highlight discontinuities relative to
background noise. This idea has been used to create algorithms that
are computationally inexpensive and address the above problems.
Distinct APs have been picked out and manually classified from
real physiological data recorded from a cockroach. To facilitate
testing of the new technique, an Auto-Regressive Moving Average
(ARMA) noise model has been constructed based upon the background
noise of the recordings. Along with the AP classifications, this
model enables the generation of realistic neuronal data sets at
arbitrary signal-to-noise ratios (SNR).
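A minimal sketch of the delayed-copy idea, assuming a simple distance-from-diagonal test in the (x[n], x[n-tau]) plane with a robust noise-scale estimate; this illustrates the concept, not the paper's exact algorithm.

```python
def detect_spikes(signal, tau=3, k=4.0):
    """Phase-space style detector: compare the signal with a delayed
    copy of itself. Background noise keeps the trajectory near the
    diagonal of the (x[n], x[n - tau]) plane, while an action potential
    produces a large off-diagonal excursion. Samples whose difference
    from the delayed copy exceeds k robust-noise-units are flagged."""
    diffs = [signal[n] - signal[n - tau] for n in range(tau, len(signal))]
    # Robust noise scale: median absolute difference.
    med = sorted(abs(d) for d in diffs)[len(diffs) // 2]
    thresh = k * med
    return [n for n, d in zip(range(tau, len(signal)), diffs)
            if abs(d) > thresh]

# Synthetic trace: small alternating noise with one large spike at sample 20.
trace = [0.05 * ((-1) ** n) for n in range(40)]
trace[20] += 3.0
hits = detect_spikes(trace)
```

The detector costs one subtraction and one comparison per sample, which matches the abstract's emphasis on low computational expense; both the rising and falling edges of the excursion may be flagged, so hits cluster within tau samples of the spike.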
Abstract: In this paper, we introduce a skin detection
method using histogram approximation based on the mean shift
algorithm. The proposed method applies the mean shift procedure to a
histogram of the skin map of the input image, generated by comparison
with standard skin colors in the CbCr color space, and separates the
background from the skin region by selecting the maximum value
according to brightness level. The proposed method detects the skin
region by using the mean shift procedure to determine a maximum value
that becomes the dividing point, rather than using a manually selected
threshold value as in existing techniques. Even when the skin color is
contaminated by illumination, the procedure can accurately segment
the skin region from the background region. The proposed method may
be useful for detecting facial regions as a preprocessing step for face
recognition under various types of illumination.
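The mode-seeking step on the histogram can be sketched as follows; the histogram values and bandwidth are illustrative, not taken from the paper.

```python
def mean_shift_mode(hist, start, bandwidth=5, iters=50):
    """1-D mean shift on a histogram: repeatedly move to the weighted
    mean of the bins inside a window until the shift stops, landing on
    a local maximum (mode). This is the mode-seeking step the abstract
    applies to the skin-map histogram; bandwidth is an assumption."""
    x = float(start)
    for _ in range(iters):
        lo = int(max(0, x - bandwidth))
        hi = int(min(len(hist) - 1, x + bandwidth))
        num = sum(i * hist[i] for i in range(lo, hi + 1))
        den = sum(hist[i] for i in range(lo, hi + 1))
        if den == 0:
            break
        new_x = num / den                 # weighted mean of the window
        if abs(new_x - x) < 1e-3:         # converged to a mode
            break
        x = new_x
    return round(x)

# Toy brightness histogram with a background peak near bin 4
# and a skin peak near bin 13:
hist = [1, 2, 5, 9, 12, 9, 4, 2, 1, 2, 5, 10, 16, 20, 17, 9, 3, 1]
mode = mean_shift_mode(hist, start=10)
```

Starting points on either side of the valley converge to different modes, and the valley between the two recovered modes is the data-driven dividing point that replaces a manually selected threshold.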
Abstract: Segmentation is an important step in medical image
analysis and classification for radiological evaluation or
Computer-Aided Diagnosis (CAD). CAD of lung CT generally first
segments the area of interest (the lung) and then analyzes the
separately obtained area for nodule detection in order to
diagnose the disease. For a normal lung, segmentation can be
performed by exploiting the excellent contrast between air and the
surrounding tissues. However, this approach fails when the lung is
affected by high-density pathology. Dense pathologies are present in
approximately a fifth of clinical scans, and for computer analysis
such as the detection and quantification of abnormal areas it is vital
that the entire lung is provided intact and that no part present in
the original image is eradicated. In this paper we propose a lung
segmentation technique which accurately segments the lung parenchyma
from lung CT scan images. The algorithm was tested against 25
datasets of different patients received from Akron University, USA,
and Aga Khan Medical University, Karachi, Pakistan.
Abstract: Dengue virus is transmitted from person to person
through the bite of infected Aedes aegypti mosquitoes. DEN-1,
DEN-2, DEN-3 and DEN-4 are the four serotypes of this virus. Infection
with one of these four serotypes apparently produces permanent
immunity to it, but only temporary cross-immunity to the others. The
incubation periods of dengue virus in humans and mosquitoes are
considered in this study. Dengue patients are classified into
infected and infectious classes: infectious humans can transmit
dengue virus to susceptible mosquitoes, but infected humans cannot.
A transmission model of this disease is formulated. The human
population is divided into susceptible, infected, infectious
and recovered classes, and the mosquito population into
susceptible, infected and infectious classes. Only infectious
mosquitoes can transmit dengue virus to susceptible humans. We
analyze this model using dynamical analysis methods, and the
threshold condition for reducing outbreaks of the disease is
discussed.
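A forward-Euler sketch of a host-vector model with the compartments described above; all rate constants, initial conditions and the per-capita scaling are illustrative assumptions, not the paper's fitted parameters.

```python
def simulate(beta_h, beta_v, sigma=0.1, gamma=0.14, days=200, dt=0.1):
    """Toy host-vector model in the spirit of the abstract: humans move
    susceptible -> infected (latent) -> infectious -> recovered, and
    mosquitoes susceptible -> infected -> infectious. Only infectious
    mosquitoes infect humans and only infectious humans infect
    mosquitoes. Populations are normalized fractions; returns the peak
    infectious-human fraction."""
    Sh, Eh, Ih, Rh = 0.999, 0.0, 0.001, 0.0   # human compartments
    Sv, Ev, Iv = 1.0, 0.0, 0.0                # mosquito compartments
    mu_v, kappa = 1 / 14, 1 / 10              # vector death, vector incubation
    peak = 0.0
    for _ in range(int(days / dt)):
        new_h = beta_h * Sh * Iv              # humans infected by vectors
        new_v = beta_v * Sv * Ih              # vectors infected by humans
        Sh += dt * (-new_h)
        Eh += dt * (new_h - sigma * Eh)
        Ih += dt * (sigma * Eh - gamma * Ih)
        Rh += dt * (gamma * Ih)
        Sv += dt * (mu_v - mu_v * Sv - new_v) # births balance deaths
        Ev += dt * (new_v - (kappa + mu_v) * Ev)
        Iv += dt * (kappa * Ev - mu_v * Iv)
        peak = max(peak, Ih)
    return peak

# Above the threshold condition the infection takes off; below it, it dies out:
big = simulate(beta_h=0.6, beta_v=0.6)
small = simulate(beta_h=0.02, beta_v=0.02)
```

The contrast between the two runs illustrates the threshold behavior the abstract analyzes: the outbreak grows only when the combined human-to-vector and vector-to-human transmission is strong enough.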
Abstract: Advancement in artificial intelligence has led to the
development of various "smart" devices. A character recognition
device is one such smart device, acquiring partial human
intelligence with the ability to capture and recognize various
characters in different languages. Firstly, multiscale neural training
with modifications to the input training vectors is adopted in this
paper to exploit its advantage in training higher-resolution character
images. Secondly, selective thresholding using a minimum distance
technique is proposed to increase the accuracy of
character recognition. A simulator program (a GUI) is designed in
such a way that the characters can be located at any spot on the
blank paper on which they are written. The results show that
these methods, with a moderate number of training epochs, can produce
accuracies of at least 85% for handwritten upper-case English
characters and numerals.
Abstract: Feature selection is gaining importance due to its contribution to saving classification cost in terms of time and computational load. One of the methods for searching for essential features is via the decision tree, which acts as an intermediate feature-space inducer for choosing essential features. In decision-tree-based feature selection, some studies use the decision tree as a feature ranker with a direct threshold measure, while others retain the decision tree but use a pruning condition that acts as a threshold mechanism for choosing features. This paper proposes a threshold measure using Manhattan hierarchical cluster distance to be used in feature ranking in order to choose relevant features as part of the feature selection process. The results are promising, and this method can be improved in the future by including test cases with a higher number of attributes.
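A simplified stand-in for a distance-based threshold on a ranking: cut the ranked list of feature scores at the largest absolute (Manhattan) gap between consecutive scores. The scores are hypothetical, and the rule only illustrates the idea, not the paper's exact Manhattan hierarchical-cluster measure.

```python
def select_by_gap(scores):
    """Rank features by score (e.g., tree-derived importance) and cut
    the ranking at the widest absolute gap between consecutive scores:
    features above the gap are kept as relevant."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    gaps = [ranked[i][1] - ranked[i + 1][1] for i in range(len(ranked) - 1)]
    cut = gaps.index(max(gaps))           # position of the widest gap
    return [name for name, _ in ranked[:cut + 1]]

# Hypothetical importance scores from a decision-tree inducer:
scores = {"f1": 0.41, "f2": 0.38, "f3": 0.35, "f4": 0.08, "f5": 0.05}
kept = select_by_gap(scores)
```

The appeal of a data-driven cut like this is that no manually chosen importance threshold is required, which is the role the cluster-distance measure plays in the abstract.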
Abstract: In this study an extensive experimental investigation is
carried out to develop a better understanding of the effects of
Piano Key (PK) weir geometry on weir flow threshold submergence.
Experiments were conducted in a 12 m long, 0.4 m wide and 0.7 m deep
rectangular glass-wall flume. The main objectives were to
investigate the effect of PK weir geometries, including the weir
length, weir height, inlet-outlet key widths, upstream and
downstream apex overhangs, and sloped floors, on threshold
submergence and to study the hydraulic flow characteristics. From the
experimental results, a practical formula is proposed to evaluate the
flow threshold submergence over PK weirs.
Abstract: In the present work, we have developed a symmetric electrochemical capacitor based on nanostructured iron oxide (Fe3O4)-activated carbon (AC) nanocomposite materials. The physical properties of the nanocomposites were characterized by Scanning Electron Microscopy (SEM) and Brunauer-Emmett-Teller (BET) analysis. The electrochemical performance of the composite electrode in 1.0 M Na2SO3 and 1.0 M Na2SO4 aqueous solutions was evaluated using cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The composite electrode with 4 wt% of iron oxide nanomaterials exhibits the highest capacitance of 86 F/g. The experimental results clearly indicate that the incorporation of iron oxide nanomaterials at low concentration into the composite can improve the capacitive performance, mainly attributed to the contribution of the pseudocapacitance charge storage mechanism and the enhancement of the effective surface area of the electrode. Nevertheless, there is an optimum threshold on the amount of iron oxide that can be incorporated into the composite system. When this optimum threshold is exceeded, the capacitive performance of the electrode starts to deteriorate as a result of undesired particle aggregation, which is clearly seen in the SEM analysis. The electrochemical performance of the composite electrode is found to be superior when Na2SO3 is used as the electrolyte, compared to the Na2SO4 solution. It is believed that Fe3O4 nanoparticles provide favourable surface adsorption sites for sulphite (SO3^2-) anions, which act as catalysts for subsequent redox and intercalation reactions.
Abstract: The aim of this study was to determine the noise levels of
six different types of machines in printing companies in Novi Sad.
The A-weighted Leq, Lmax and Lmin Sound Pressure Levels
(SPL) in dBA were measured. It was found that the folders, offset
printing presses and binding machines are the predominant noise
sources. The noise levels produced by 12 of the 38 machines exceed
the threshold limit of 85 dBA tolerated by law. Since the average
noise level for folders (87.7 dB) was determined to exceed
the permitted value, an octave-band analysis of the noise was performed.
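The Leq values reported above come from the standard energy average of sampled sound pressure levels, which can be computed as follows; the sample readings here are hypothetical, not the study's data.

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level from equal-duration SPL
    samples: Leq = 10*log10(mean(10^(L/10))). This is the standard
    energy-average formula for Leq."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Hypothetical equal-duration readings near a folder machine:
samples = [84.0, 86.0, 90.0, 88.0]
level = leq(samples)
```

Because the average is taken over acoustic energy rather than decibel values, Leq is pulled toward the loudest samples, so it exceeds the arithmetic mean of the readings (87.0 dBA here).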
Abstract: We developed a vision interface framework for an immersive projection system, CAVE, in the virtual reality research field using hand gesture recognition with computer vision techniques. The background image was subtracted from the current image frame of a webcam and the image was converted into HSV color space. Skin regions were then masked using a skin color range threshold and a noise reduction operation was applied. Blobs were made from the image and gestures were recognized using these blobs. Using our hand gesture recognition, we could implement an effective interface for CAVE without cumbersome devices.
Abstract: The motivation for adaptive modulation and coding is
to adjust the method of transmission so that maximum
efficiency is achieved over the link at all times. The receiver
estimates the channel quality and reports it back to the transmitter.
The transmitter then maps the reported quality into a link mode. This
mapping, however, is not one-to-one. In this paper we
investigate a method for selecting the proper modulation scheme.
This method can dynamically adapt the mapping of the Signal-to-
Noise Ratio (SNR) into a link mode. It enables the use of the right
modulation scheme irrespective of changes in the channel conditions
by incorporating errors in the received data. We propose a Markov
model for this method and use it to derive the average switching
thresholds and the average throughput. We show that the average
throughput of this method outperforms the conventional threshold
method.
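A toy sketch of threshold-based mode selection with error feedback, illustrating the kind of adaptive SNR-to-mode mapping discussed above; the thresholds, step size and target error rate are illustrative assumptions, and the adaptation rule is not the paper's Markov-model derivation.

```python
def pick_mode(snr, thresholds):
    """Map a reported SNR (dB) to a link-mode index: the highest mode
    whose switching threshold the SNR meets (mode 0 = most robust)."""
    mode = 0
    for i, t in enumerate(thresholds, start=1):
        if snr >= t:
            mode = i
    return mode

def adapt(thresholds, mode, error_rate, target=0.01, step=0.5):
    """Nudge the switching threshold of the current mode using error
    feedback: too many errors -> raise the threshold (fall back
    sooner); clean reception -> lower it, allowing faster modes at
    the same reported SNR."""
    if mode > 0:
        if error_rate > target:
            thresholds[mode - 1] += step
        else:
            thresholds[mode - 1] = max(0.0, thresholds[mode - 1] - step)
    return thresholds

th = [5.0, 10.0, 15.0]            # dB thresholds for modes 1..3 (illustrative)
m = pick_mode(12.0, th)            # SNR of 12 dB selects mode 2
th = adapt(th, m, error_rate=0.05) # errors too high: raise mode 2's threshold
```

Shifting the thresholds with observed errors is what breaks the fixed one-to-one SNR-to-mode mapping of the conventional threshold method.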
Abstract: The fuzzy technique is an operator introduced in order
to simulate, at a mathematical level, the compensatory behavior in
processes of decision making or subjective evaluation. This
paper introduces such operators by way of a computer vision
application.
A novel method based on a fuzzy logic reasoning
strategy is proposed for edge detection in digital images without
determining a threshold value. The proposed approach begins by
segmenting the images into regions using a floating 3x3 binary matrix.
The edge pixels are mapped to a range of values distinct from each
other. To demonstrate the robustness of the proposed method, its
results for different captured images are compared to those obtained
with the linear Sobel operator. The method consistently improves the
smoothness and straightness of straight lines and the roundness of
curved lines, while corners become sharper and can be defined more
easily.
Abstract: A vision-based tracking problem is solved through a
combination of optical flow, a MACH filter and log r-θ mapping.
Optical flow is used for detecting regions of movement in video
frames acquired under variable lighting conditions. The region of
movement is segmented and then searched for the target. A template
is used for target recognition in the segmented regions to detect
the region of interest. The template is trained offline on a sequence
of target images created using the MACH filter and log r-θ
mapping. The template is applied to areas of movement in
successive frames, and strong correlation is observed for in-class
targets. Correlation peaks above a certain threshold indicate the
presence of the target, which is then tracked over successive frames.
Abstract: Lung cancer accounts for the most cancer-related deaths for both men and women. The identification of cancer-associated genes and the related pathways is essential to provide an important possibility for the prevention of many types of cancer. In this work two filter approaches, namely information gain and the biomarker identifier (BMI), are used for the identification of different types of small-cell and non-small-cell lung cancer. A new method to determine the BMI thresholds for prioritizing genes (i.e., primary, secondary and tertiary) using a k-means clustering approach is proposed. Sets of key genes that can be found in several pathways were identified. It turned out that the modified BMI is well suited to microarray data, and BMI is therefore proposed as a powerful tool for the search for new and so far undiscovered genes related to cancer.
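The k-means-based threshold determination can be sketched in one dimension as follows: cluster the BMI scores into three groups and take the midpoints between adjacent centroids as the priority thresholds. The scores below are hypothetical, and this plain k-means is only an illustration of the approach named in the abstract.

```python
def kmeans_thresholds(values, k=3, iters=100):
    """Plain 1-D k-means: cluster scores into k groups and return the
    sorted cluster boundaries (midpoints between adjacent centroids),
    which can serve as priority thresholds separating primary,
    secondary and tertiary genes."""
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            groups[i].append(v)                 # assign to nearest centroid
        new = [sum(g) / len(g) if g else centroids[i]
               for i, g in enumerate(groups)]   # recompute means
        if new == centroids:
            break
        centroids = new
    cs = sorted(centroids)
    return [(cs[i] + cs[i + 1]) / 2 for i in range(k - 1)]

# Hypothetical BMI scores forming three loose groups:
scores = [0.1, 0.2, 0.15, 1.0, 1.1, 0.9, 2.8, 3.0, 3.1]
cuts = kmeans_thresholds(scores)
```

Deriving the cut points from the score distribution itself, rather than fixing them by hand, is the point of using clustering for threshold determination.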
Abstract: Many studies have been conducted worldwide for the
derivation of attenuation relationships; however, few relationships
have been developed for the seismic region of the Iranian plateau,
and only a few of these studies have derived attenuation
relationships for parameters such as uniform duration.
Uniform duration is the total time during which the acceleration is
larger than a given threshold value (by default, 5% of PGA). In this
study, the database was the same as that used previously by Ghodrati
Amiri et al. (2007), with the same correction methods for earthquake
records in Iran. However, in this study records from earthquakes with
MS < 4.0 were excluded from the database, each record was
individually filtered afterward, and the dataset was thereby
expanded. This new set of attenuation relationships for Iran is
derived based on tectonic conditions, with sites classified into rock
and soil. Hypocentral distance and magnitude were chosen as the
earthquake parameters in order to make the relationships easier to
use in seismic hazard analysis. Tehran, the capital city of Iran, has
a large number of important structures. In this study, a
probabilistic approach has been utilized for the seismic hazard
assessment of this city. The resulting uniform duration versus return
period diagrams are suggested for use in any projects in the area.
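Uniform duration, as defined above, can be computed from a sampled acceleration record as follows; the record here is a toy example, not real strong-motion data.

```python
def uniform_duration(accel, dt, fraction=0.05):
    """Uniform duration: total time the absolute acceleration exceeds
    a threshold, by default 5% of PGA as stated in the abstract.
    accel: sampled acceleration record; dt: sample interval (s)."""
    pga = max(abs(a) for a in accel)       # peak ground acceleration
    thresh = fraction * pga
    return dt * sum(abs(a) > thresh for a in accel)

# Toy record sampled at 100 Hz (illustrative values, in g):
rec = [0.0, 0.002, 0.05, -0.12, 0.30, -0.25, 0.08, -0.01, 0.004, 0.0]
d = uniform_duration(rec, dt=0.01)
```

Unlike bracketed duration, the exceedance intervals need not be contiguous: every sample above the threshold contributes, wherever it falls in the record.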
Abstract: In this paper, we propose a texture feature-based
language identification method using wavelet-domain BDIP (block
difference of inverse probabilities) and BVLC (block variance of
local correlation coefficients) features and an FFT (fast Fourier
transform) feature. In the proposed method, wavelet subbands are
first obtained by wavelet transform from a test image and denoised by
Donoho's soft-thresholding. BDIP and BVLC operators are next applied
to the wavelet subbands. FFT blocks are also obtained by 2D
(two-dimensional) FFT from the blocks into which the test image is
partitioned. Some significant FFT coefficients in each block are
selected and a magnitude operator is applied to them. Moments for each
subband of BDIP and BVLC and for each magnitude of the significant
FFT coefficients are then computed and fused into a feature vector. In
classification, a stabilized Bayesian classifier, which adopts variance
thresholding, searches for the training feature vector most similar to
the test feature vector. Experimental results show that the proposed
method with the three operations yields excellent language
identification even with a rather low feature dimension.
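Donoho's soft-thresholding used above for subband denoising has a simple closed form, sketched here on a list of coefficients; the threshold value is illustrative.

```python
def soft_threshold(coeffs, t):
    """Donoho soft-thresholding for denoising wavelet coefficients:
    shrink every coefficient toward zero by t and zero out those with
    magnitude below t, i.e. sign(x) * max(|x| - t, 0)."""
    return [max(abs(x) - t, 0.0) * (1 if x >= 0 else -1) for x in coeffs]

# Small coefficients (mostly noise) vanish; large ones shrink by t:
out = soft_threshold([3.0, -0.2, 0.5, -2.5], t=0.6)
```

Unlike hard thresholding, the surviving coefficients are also shrunk by t, which avoids the discontinuity at the threshold and tends to produce smoother reconstructions.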
Abstract: On-line handwritten scripts are usually handled as pen-tip traces from pen-down to pen-up positions. The time evolution of the pen coordinates is also considered along with the trajectory information. However, the data obtained need a lot of preprocessing, including filtering, smoothing, slant removal and size normalization, before the recognition process. Instead of such lengthy preprocessing, this paper presents a simple approach to extracting the useful character information. This work evaluates the use of the counter-propagation neural network (CPN) and presents the feature extraction mechanism in full detail for on-line handwriting recognition. The obtained recognition rates were 60% to 94% using the CPN for different sets of character samples. This paper also describes a performance study in which a recognition mechanism with multiple thresholds is evaluated for the counter-propagation architecture. The results indicate that the application of multiple thresholds has a significant effect on the recognition mechanism. The method is applicable to off-line character recognition as well. The technique was tested on upper-case English alphabets in a number of different styles from different people.
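The multiple-threshold decision can be illustrated with a nearest-prototype lookup in the spirit of the CPN's competitive (Kohonen) layer; the prototype vectors and threshold values here are illustrative, not the paper's trained network.

```python
def classify(x, prototypes, thresholds=(0.5, 1.0)):
    """Winner-take-all lookup: find the stored prototype nearest to the
    feature vector x, then apply tiered distance thresholds -- a close
    match is accepted, a borderline one is flagged as uncertain, and
    anything farther is rejected as unrecognized."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    label, d = min(((lbl, dist(x, p)) for lbl, p in prototypes.items()),
                   key=lambda pair: pair[1])
    if d <= thresholds[0]:
        return label, "accept"
    if d <= thresholds[1]:
        return label, "uncertain"
    return None, "reject"

# Hypothetical two-character prototype set with 3-D feature vectors:
protos = {"A": [1.0, 0.0, 1.0], "B": [0.0, 1.0, 0.0]}
result = classify([0.9, 0.1, 1.0], protos)
```

Having more than one threshold lets the recognizer distinguish confident matches from borderline ones instead of forcing a single accept/reject decision, which is the effect the performance study above examines.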