Abstract: It is well recognized that greenhouse gases such
as chlorofluorocarbons (CFCs), CH4, and CO2 are directly or
indirectly responsible for the increase in the average global
temperature of the Earth. CFCs deplete the ozone concentration in the
atmosphere, so that less of the heat carried by solar radiation is
absorbed there, raising the atmospheric temperature of the Earth.
Gases such as CH4 and CO2 likewise contribute to this temperature
rise, which in turn directly or indirectly affects the dynamics of
interacting-species systems. In this paper, therefore, a mathematical
model is proposed and analysed using stability theory to assess the
effects of the temperature increase caused by greenhouse gases on the
survival or extinction of populations in a prey-predator system. A
threshold value in terms of a stress parameter is obtained which
determines the extinction or persistence of populations in the
underlying system.
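The threshold behaviour described in this abstract can be illustrated with a toy model; the system below is a hypothetical logistic prey-predator model in which a stress parameter s reduces the prey growth rate, not the authors' actual equations. Here the prey (and hence the predator) goes extinct once s exceeds the intrinsic growth rate a.

```python
import numpy as np
from scipy.integrate import solve_ivp

def prey_predator(t, z, a, b, c, d, e, s):
    """Logistic prey with stress-reduced growth rate (a - s),
    linear predation, and predator conversion (illustrative form)."""
    x, y = z
    dx = x * (a - s - b * x) - c * x * y
    dy = -d * y + e * x * y
    return [dx, dy]

def final_state(s, t_end=500.0):
    """Integrate from a fixed initial state; return (prey, predator)."""
    sol = solve_ivp(prey_predator, (0.0, t_end), [1.0, 0.5],
                    args=(1.0, 0.1, 0.5, 0.4, 0.2, s),
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1], sol.y[1, -1]
```

With these (assumed) parameters, s = 0 settles to a positive coexistence equilibrium, while s = 1.2 > a = 1 drives both populations to extinction.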
Abstract: Ultra-low-power (ULP) circuits have received
widespread attention due to the rapid growth of biomedical
applications and battery-less electronics. ULP circuits operate
transistors in the subthreshold region, where a major research
challenge is to extract the ULP benefits with minimal degradation in
speed and robustness. Process, voltage, and temperature (PVT)
variations significantly affect the performance of subthreshold
circuits, and the designed performance parameters of ULP circuits may
vary widely with temperature. Hence, this paper investigates the
effect of temperature variation on device and circuit performance
parameters at different biasing voltages in the subthreshold region.
Simulation results clearly demonstrate that performance parameters are
significantly affected in the deep-subthreshold and near-threshold
voltage regions, whereas circuits in the moderate subthreshold region
are more immune to temperature variations. This establishes the
moderate subthreshold region as the preferred choice for
temperature-immune circuits.
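The temperature sensitivity this abstract reports can be reproduced with the textbook subthreshold current model; the sketch below uses illustrative parameter values (I0, Vth0, the slope factor n, and an assumed Vth temperature coefficient of about -1 mV/K) that are not fitted to any real process.

```python
import numpy as np

Q = 1.602e-19   # electron charge [C]
K = 1.381e-23   # Boltzmann constant [J/K]

def subthreshold_current(vgs, temp_k, i0=1e-7, vth0=0.45, n=1.5,
                         kvt=-1e-3, t0=300.0):
    """Textbook model I = I0 * exp((Vgs - Vth) / (n * kT/q)),
    with an assumed linear Vth(T); all parameters are illustrative."""
    vt = K * temp_k / Q                   # thermal voltage kT/q
    vth = vth0 + kvt * (temp_k - t0)      # temperature-shifted threshold
    return i0 * np.exp((vgs - vth) / (n * vt))
```

Comparing currents at 300 K and 400 K shows the qualitative trend from the abstract: the deep-subthreshold bias (Vgs far below Vth) sees a much larger current swing over temperature than a bias closer to threshold.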
Abstract: This paper describes a novel monitoring scheme that
minimizes total active power in digital circuits for a demanded clock
frequency by automatically adjusting both the supply voltage and the
threshold voltages according to circuit operating conditions such as
temperature, process variations, and the desired frequency. The
delay-monitoring results are used as control signals so that the
supply is maintained at the minimum value at which the chip can
operate for a given clock frequency. Design details of the power
monitor are examined in a simulation framework using a 32nm
BTPM-model CMOS process. Experimental results show that the power
overhead of the proposed circuit is about 40 μW for the 32nm
technology; moreover, the proposed design is largely insensitive to
temperature and process variations. It uses simple blocks that offer
good sensitivity, high speed, and a continuous feedback loop. The
design provides up to a 40% reduction in power consumption in active
mode.
Abstract: In this paper, gate leakage current is mitigated for
the first time by a novel nanoscale MOSFET with a
source/drain-to-gate non-overlapped and high-k spacer structure. A
compact analytical model has been developed to study the gate leakage
behaviour of the proposed MOSFET structure, and its results are in
good agreement with Sentaurus simulations. The fringing gate electric
field through the dielectric spacer induces an inversion layer in the
non-overlap region, which acts as an extended S/D region. It is found
that an optimal source/drain-to-gate non-overlapped and high-k spacer
structure reduces the gate leakage current to a great extent compared
with an overlapped structure. Further, the proposed structure improves
the off current, subthreshold slope, and DIBL characteristics. It is
concluded that this structure solves the problem of high leakage
current without introducing extra series resistance.
Abstract: In recent years, copulas have become very popular in
financial research and actuarial science, as they are more flexible in
modelling the co-movements and relationships of risk factors than
Pearson's conventional linear correlation coefficient.
However, a precise estimation of the copula parameters is vital in
order to correctly capture the (possibly nonlinear) dependence structure
and joint tail events. In this study, we employ two optimization
heuristics, namely Differential Evolution and Threshold Accepting to
tackle the parameter estimation of multivariate t distribution models
in the EML approach. Since the evolutionary optimizer does not rely
on gradient search, the EML approach can be applied to estimation of
more complicated copula models such as high-dimensional copulas.
Our experimental study shows that the proposed method provides
more robust and more accurate estimates as compared to the IFM
approach.
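The idea of pairing an evolutionary optimizer with maximum-likelihood copula estimation can be sketched as follows. This is a simplified, hypothetical stand-in: a one-parameter bivariate Gaussian copula rather than the paper's multivariate t models, and SciPy's `differential_evolution` rather than the authors' Differential Evolution / Threshold Accepting implementations.

```python
import numpy as np
from scipy import stats
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Simulate pseudo-observations from a Gaussian copula with rho = 0.6
rho_true = 0.6
z = rng.multivariate_normal([0, 0], [[1.0, rho_true], [rho_true, 1.0]],
                            size=2000)
u = stats.norm.cdf(z)          # data mapped to the unit square
x = stats.norm.ppf(u)          # normal scores of the pseudo-observations

def neg_loglik(params):
    """Negative log-likelihood of the bivariate Gaussian copula."""
    rho = params[0]
    det = 1.0 - rho ** 2
    ll = (-0.5 * np.log(det)
          - (rho ** 2 * (x[:, 0] ** 2 + x[:, 1] ** 2)
             - 2.0 * rho * x[:, 0] * x[:, 1]) / (2.0 * det))
    return -float(ll.sum())

# Gradient-free global search over the admissible parameter range
res = differential_evolution(neg_loglik, bounds=[(-0.99, 0.99)], seed=1)
rho_hat = float(res.x[0])
```

Because the optimizer only evaluates the likelihood, the same loop extends to copulas whose likelihood is awkward for gradient search, which is the point the abstract makes.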
Abstract: In this paper we investigate the electrical
characteristics of a new structure of gate all around strained silicon
nanowire field effect transistors (FETs) with dual dielectrics by
changing the radius (RSiGe) of the silicon-germanium (SiGe) wire and
the gate dielectric. In particular, the effect of high-κ dielectrics
on field-induced barrier lowering (FIBL) has been studied. Owing to
the higher electron mobility in tensile strained silicon, n-type FETs
with a strained-silicon channel have a higher drain current than their
pure-Si counterparts. In this structure the gate dielectric is divided
into two parts: a high-κ dielectric near the source and a low-κ
dielectric near the drain, which reduces short-channel effects. With
this structure, short-channel effects such as FIBL are reduced, and
increasing RSiGe improves the ID-VD characteristics. The leakage
current and transfer characteristics, the threshold voltage (Vt), and
the drain-induced barrier lowering (DIBL) are estimated with respect
to the gate bias (VG), RSiGe, and different gate dielectrics. For
short-channel effects such as DIBL, the gate-all-around
strained-silicon nanowire FET has characteristics similar to those of
the pure-Si one, while the dual dielectric can improve short-channel
effects in this structure.
Abstract: A complex-valued neural network is a neural network
whose inputs, weights, thresholds, and/or activation functions are
complex valued. Complex-valued neural networks have been widening the
scope of applications not only in electronics and informatics but also
in social systems, and one of their most important applications is
signal processing. Within neural networks, the generalized mean neuron
(GMN) model is often discussed and studied; it includes a new
aggregation function based on the generalized mean of all the inputs
to the neuron. This paper presents exhaustive results of using the GMN
model in a complex-valued neural network trained with the
back-propagation algorithm (called Complex-BP). Our experimental
results demonstrate the effectiveness of the GMN model in the complex
plane for signal processing over a real-valued neural network. We have
studied various factors on a generalized mean neural network model,
such as the effect of the learning rate, the range from which the
initial weights are randomly selected, the error function used, and
the number of iterations required for the error to converge. Some
inherent properties of this complex back-propagation algorithm are
also studied and discussed.
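A forward pass through one generalized-mean neuron on complex inputs can be sketched as below. The aggregation form (a weighted power mean), the bias handling, and the split sigmoid activation are illustrative assumptions; the paper's exact normalisation and activation may differ.

```python
import numpy as np

def generalized_mean_neuron(x, w, b, p=2.0):
    """Generalized-mean aggregation g = (sum_i w_i * x_i**p + b)**(1/p),
    followed by a split sigmoid applied separately to the real and
    imaginary parts (a common choice in Complex-BP networks)."""
    g = np.power(np.sum(w * np.power(x, p)) + b, 1.0 / p)
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sig(g.real) + 1j * sig(g.imag)
```

For p = 1 the neuron reduces to an ordinary weighted-sum neuron, which is a convenient sanity check; other values of p interpolate between min-like and max-like aggregation of the inputs.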
Abstract: We present a new method for the fully automatic 3D
reconstruction of the coronary artery centerlines, using two X-ray
angiogram projection images from a single rotating monoplane
acquisition system. During the first stage, the input images are
smoothed using curve evolution techniques. Next, a simple yet
efficient multiscale method based on the Hessian matrix is introduced
to enhance the vascular structures.
Hysteresis thresholding using different image quantiles is used to
segment the arteries. This stage is followed by a thinning procedure
to extract the centerlines. The resulting skeleton image is then
pruned using morphological and pattern-recognition techniques to
remove non-vessel-like structures. Finally, edge-based stereo
correspondence is solved using a parallel evolutionary optimization
method based on symbiosis. The detected 2D centerlines, combined with
disparity-map information, allow the reconstruction of the 3D vessel
centerlines. The proposed method has been evaluated on patient data
sets.
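The hysteresis-thresholding stage of such a pipeline can be sketched compactly; this is a minimal stand-in operating on an assumed vesselness response map, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(response, low, high):
    """Keep weak responses (> low) only if their connected component
    also contains at least one strong response (> high)."""
    weak = response > low
    strong = response > high
    labels, _ = ndimage.label(weak)          # connected components of weak mask
    keep = np.unique(labels[strong])         # components touching a strong pixel
    return np.isin(labels, keep[keep > 0])
```

Choosing `low` and `high` as image quantiles of the response, as the abstract describes, makes the two thresholds adapt automatically to the contrast of each angiogram.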
Abstract: Automatic segmentation of skin lesions is the first step
towards development of a computer-aided diagnosis of melanoma.
Although numerous segmentation methods have been developed,
few studies have focused on determining the most discriminative
and effective color space for melanoma application. This paper
proposes a novel automatic segmentation algorithm using color space
analysis and clustering-based histogram thresholding, which is able to
determine the optimal color channel for segmentation of skin lesions.
To demonstrate the validity of the algorithm, it is tested on a set of 30
high resolution dermoscopy images and a comprehensive evaluation
of the results is provided, in which borders manually drawn by four
dermatologists are compared to the automated borders detected by the
proposed algorithm. The evaluation is carried out by applying three
previously used metrics of accuracy, sensitivity, and specificity and
a new metric of similarity. Through ROC analysis and ranking of the
metrics, it is shown that the best results are obtained with the X and
XoYoR color channels, which yield an accuracy of approximately 97%.
The proposed method is also compared with two state-of-the-art skin
lesion segmentation methods, which demonstrates its effectiveness and
superiority.
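A common instance of clustering-based histogram thresholding on a single colour channel is Otsu's criterion, sketched below; the paper's channel-selection procedure and clustering details are not shown here, so treat this as an illustrative stand-in.

```python
import numpy as np

def otsu_threshold(channel, nbins=256):
    """Pick the threshold that maximizes between-class variance of the
    channel histogram (two-cluster split of the intensity values)."""
    hist, edges = np.histogram(channel.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # probability of the lower class
    w1 = 1.0 - w0                        # probability of the upper class
    mu = np.cumsum(hist * centers)       # cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[np.isnan(between)] = 0.0
    return centers[np.argmax(between)]
```

Running the same criterion over each candidate colour channel and ranking the resulting segmentations against expert borders is one way to realize the channel-selection idea the abstract describes.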
Abstract: A human verification system is presented in this
paper. The system consists of several steps: background subtraction,
thresholding, line connection, region growing, morphology, star
skeletonization, feature extraction, feature matching, and decision
making. The proposed system combines the advantages of star
skeletonization with simple statistical features. Correlation matching
and probability voting are used for verification, followed by a
logical operation in the decision-making stage. The proposed system
uses a small number of features, and its reliability is convincing.
Abstract: Moisture is an important consideration in many
areas, ranging from irrigation, soil chemistry, golf-course
maintenance, corrosion and erosion, road conditions, weather
prediction, and livestock feed moisture levels to water seepage.
Vegetation and crops always depend more on the moisture available at
the root level than on the occurrence of precipitation. In this paper,
the design of an instrument is discussed that reports the variation in
the moisture content of soil. This is done by measuring the amount of
water in the soil through the variation in its capacitance, using a
capacitive sensor. The greatest advantage of a soil moisture sensor is
reduced water consumption. The sensor can also be used to set lower
and upper thresholds that maintain optimum soil moisture saturation
and minimize wilting; this contributes to deeper plant root growth,
reduced soil run-off and leaching, and less favourable conditions for
insects and fungal diseases. The capacitance method is preferred
because it provides the absolute amount of water content and can
measure it at any depth.
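The conversion from a capacitance reading to water content can be sketched as follows. The linear interpolation between air and water calibration capacitances is a simplifying assumption (real probes need per-soil calibration); the permittivity-to-moisture step uses Topp's well-known empirical equation.

```python
def moisture_from_capacitance(c_meas, c_air, c_water, eps_water=80.0):
    """Estimate volumetric water content from a capacitive-probe reading.

    Step 1 (assumed): infer relative permittivity by linear interpolation
    between the air (eps=1) and water (eps~80) calibration points.
    Step 2: Topp's empirical equation mapping permittivity to volumetric
    water content theta (dimensionless, ~0 for dry soil, ~1 for water).
    """
    eps = 1.0 + (eps_water - 1.0) * (c_meas - c_air) / (c_water - c_air)
    theta = (-5.3e-2 + 2.92e-2 * eps
             - 5.5e-4 * eps ** 2 + 4.3e-6 * eps ** 3)
    return theta
```

Because water's permittivity (~80) dwarfs that of dry soil (~3-5), even small changes in moisture produce a large, easily measurable capacitance change, which is why the capacitance method works at any depth the probe reaches.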
Abstract: This paper proposes a new decision-making approach
based on quantitative possibilistic influence diagrams, which extend
standard influence diagrams to the possibilistic framework.
We will in particular treat the case where several expert
opinions relative to value nodes are available. An initial expert assigns
confidence degrees to other experts and fixes a similarity threshold
that the provided possibility distributions should respect. To
illustrate our approach, an evaluation algorithm for these
multi-source possibilistic influence diagrams is also proposed.
Abstract: Coal will continue to be the predominant source of
global energy for several decades to come. The huge amount of fly ash
(FA) generated by the combustion of coal in thermal power plants
(TPPs) raises concerns about its disposal and utilization. FA
application as a soil ameliorant for agriculture and forestry, based
on its typical characteristics, is a potential use and hence a global
endeavour. The inferences drawn so far suffer from variations in ash
characteristics, soil types, and agro-climatic conditions, which makes
it difficult to correlate the effects of ash across plant species and
soil types. Indian FAs have low bulk density, high water-holding
capacity and porosity, abundant silt-sized particles, an alkaline
nature, negligible solubility, and reasonable plant nutrients.
Findings from more than two decades of demonstration trials, from
lab/pot studies to long-term field-scale experiments, have been
developed into FA soil amendment technology (FASAT) by the Central
Institute of Mining and Fuel Research (CIMFR), Dhanbad. The
performance of different crops and plant species in cultivable and
problematic soils is encouraging and eco-friendly, and the technology
is being adopted by farmers. FA application includes ash alone and ash
in combination with inorganic/organic amendments; combination
treatments, including those with bio-solids, perform better than FA
alone. The optimum dose is up to 100 t/ha for cultivable land and up
to or above 200 t/ha for waste/degraded land or mine refuse, depending
on the characteristics of the ash and the soil. Elemental toxicity in
Indian FA is usually of little concern owing to the alkaline nature of
the ashes, the oxide forms of the elements, and elemental
concentrations within the threshold limits for soil application.
Combating toxicity, if any, is possible through combination treatments
with organic materials and phytoremediation. Government initiatives
through extension programmes involving farmers and ash-generating
organizations need to be accelerated.
Abstract: In this paper, new BiCMOS CCII and CCCII circuits
are proposed that are capable of operating at ±0.5 V, have a wide
dynamic range, and achieve bandwidths of 480 MHz and 430 MHz,
respectively. The structures are found to be insensitive to
threshold-voltage variations. The proposed circuits are suitable for
implementation in a 0.25 μm BiCMOS technology. PSpice simulations
confirm the performance of the proposed structures.
Abstract: In this paper we propose a method for finding the
video frames that represent one sign of the finger alphabet. The
method is based on determining the hands' location, segmentation, and
the use of standard video-quality evaluation metrics. Metric
calculation is performed only in regions of interest. A sliding
mechanism for finding local extrema and an adaptive threshold based on
local averaging are used for key-frame selection. The success rate is
evaluated by recall, precision, and the F1 measure, and the method's
effectiveness is compared with the same metrics applied to all frames.
The proposed method is fast, effective, and relatively easy to realize
through simple preprocessing of the input video and subsequent use of
tools designed for video-quality measurement.
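The sliding local-extrema search with an adaptive, locally averaged threshold can be sketched as below; the window size and the factor k are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def key_frames(metric, win=5, k=1.1):
    """Return indices of frames whose metric value is the maximum of a
    sliding window AND exceeds k times the window's local average."""
    idx = []
    half = win // 2
    for i in range(len(metric)):
        lo, hi = max(0, i - half), min(len(metric), i + half + 1)
        window = metric[lo:hi]
        if metric[i] == window.max() and metric[i] > k * window.mean():
            idx.append(i)
    return idx
```

The local-average threshold is what makes the selection adaptive: a peak only counts as a key frame when it clearly stands out from its neighbourhood, regardless of the absolute metric scale.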
Abstract: We study in this paper the effect of scene changes on
an image-sequence coding system using the Embedded Zerotree Wavelet
(EZW). The scene change considered here is a full-motion change, which
may occur at any time; a special image sequence is generated in which
such scene changes occur randomly. Two scenarios are considered. In
the first, the system must provide the best possible reconstruction
quality by managing the bit rate (BR) when a scene change occurs. In
the second, the system must keep the bit rate as constant as possible
by managing the reconstruction quality. The first scenario is
motivated by the availability of a wide-band transmission channel,
where the bit rate can be increased to keep the reconstruction quality
above a given threshold. The second scenario concerns narrow-band
transmission channels, where an increase in the bit rate is not
possible; in this case, applications for which the reconstruction
quality is not a constraint may be considered. The simulations are
performed with a five-scale wavelet decomposition using the
biorthogonal 9/7-tap filter bank. The entropy coding uses a
specifically defined binary codebook together with the EZW algorithm.
Experimental results are presented and compared with LEAD H263 EVAL.
It is shown that when the reconstruction quality is the constraint,
the system increases the bit rate to obtain the required quality. When
the bit rate must be constant, the system cannot provide the required
quality while a scene change occurs; however, it improves the quality
once the scene change has passed.
Abstract: Mining sequential patterns from large customer transaction databases has been recognized as a key research topic in database systems. However, previous works focused mainly on mining sequential patterns at a single concept level. In this study, we introduce concept hierarchies into the problem and present several algorithms for discovering multiple-level sequential patterns based on these hierarchies. An experiment was conducted to assess the performance of the proposed algorithms, measured by the relative time spent completing the mining tasks on two different datasets. The experimental results show that performance depends on the characteristics of the datasets and on the pre-defined minimal-support threshold for each level of the concept hierarchy. Based on the experimental results, some suggestions are also given on how to select the appropriate algorithm for a given dataset.
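The core idea of multiple-level sequential patterns is that support is counted after lifting items to an ancestor level of the concept hierarchy. The sketch below uses a hypothetical two-level hierarchy and counts support of single-item-per-element patterns; the paper's algorithms and data are not shown.

```python
# Toy concept hierarchy (hypothetical items): item -> level-1 category
PARENT = {"skim_milk": "milk", "2pct_milk": "milk",
          "rye_bread": "bread", "white_bread": "bread"}

def generalize(sequence, level):
    """Lift every item in every transaction to its category when mining
    at the higher concept level; level 0 keeps the raw items."""
    if level == 0:
        return [set(txn) for txn in sequence]
    return [{PARENT.get(i, i) for i in txn} for txn in sequence]

def support(db, pattern, level):
    """Fraction of customer sequences containing `pattern` (a list of
    single items) as a subsequence at the given concept level."""
    def contains(seq, pat):
        j = 0
        for txn in seq:
            if j < len(pat) and pat[j] in txn:
                j += 1
        return j == len(pat)
    return sum(contains(generalize(s, level), pattern) for s in db) / len(db)
```

A pattern like ("milk" then "bread") can reach the minimal-support threshold at the category level even when no specific-item pattern does, which is exactly why per-level thresholds matter for performance.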
Abstract: In this work, we develop a speech-denoising tool based on the discrete wavelet packet transform (DWPT), intended for recognition, coding, and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the nonstationary noise level, we employ the spectral entropy. The proposed technique is compared with classical denoising methods based on thresholding and on spectral subtraction. The experimental implementation uses speech signals corrupted by two sorts of noise: white noise and Volvo noise. Listening tests show that our technique outperforms spectral subtraction, and SNR computations show its superiority over the classical thresholding method using the modified hard-thresholding function based on the u-law algorithm.
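The "zero some subbands, threshold the rest" idea can be illustrated with a one-level Haar transform; this is a deliberately minimal analogue of the paper's DWPT scheme (which selects packet nodes and estimates the noise level via spectral entropy, neither of which is shown here).

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar analysis: approximation and detail bands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """One-level Haar synthesis (inverse of haar_dwt)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, thr):
    """Hard-threshold the detail band (assumed noise-dominated) and
    keep the approximation band untouched."""
    a, d = haar_dwt(x)
    d = np.where(np.abs(d) < thr, 0.0, d)
    return haar_idwt(a, d)
```

In the full wavelet-packet version, entire nodes judged noise-only are zeroed outright and the remaining nodes are thresholded, with the threshold driven by the entropy-based noise estimate.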
Abstract: Segmentation and quantification of stenosis is an
important task in assessing coronary artery disease. One of the main
challenges is measuring the real diameter of curved vessels.
Moreover, uncertainty in segmentation of different tissues in the
narrow vessel is an important issue that affects accuracy. This paper
proposes an algorithm to extract coronary arteries and measure the
degree of stenosis. A Markovian fuzzy clustering method is applied to
model the uncertainty arising from the partial-volume-effect problem.
The algorithm comprises segmentation, centreline extraction,
estimation of the plane orthogonal to the centreline, and measurement
of the degree of stenosis. To evaluate accuracy and reproducibility,
the approach has been applied to a vascular phantom, and the results
are compared with the real diameter; the results on 10 patient
datasets have been visually judged by a qualified radiologist. The
results reveal the superiority of the proposed method over the
conventional thresholding method (CTM) on both datasets.
Abstract: This study introduces a new method for detecting,
sorting, and localizing spikes from multiunit EEG recordings. The
method combines the wavelet transform, which localizes distinctive
spike features, with the Super-Paramagnetic Clustering (SPC)
algorithm, which allows automatic classification of the data without
assumptions such as low variance or Gaussian distributions. Moreover,
the method is capable of setting amplitude thresholds for spike
detection. The method is applied to several real EEG data sets, in
which the spikes are detected, clustered, and their occurrence times
determined.
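The automatic amplitude-threshold step can be sketched with the widely used median-based noise estimate; the wavelet feature extraction and SPC sorting stages are not shown, and the parameter values below (k = 4, a 30-sample refractory window) are illustrative.

```python
import numpy as np

def detect_spikes(x, k=4.0, refractory=30):
    """Detect spike times by thresholding at k robust standard
    deviations of the background noise, estimated from the median
    absolute deviation (robust to the spikes themselves)."""
    sigma = np.median(np.abs(x)) / 0.6745   # robust noise std estimate
    thr = k * sigma
    idx = []
    last = -refractory
    for i in np.flatnonzero(np.abs(x) > thr):
        if i - last >= refractory:          # skip samples of the same spike
            idx.append(i)
            last = i
    return idx
```

The median-based estimate is preferred over the plain standard deviation because large spikes inflate the latter, which would push the threshold up and miss small units.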