Abstract: Using 1 km grid datasets of monthly mean precipitation, monthly mean temperature, and dry matter production (DMP), we assessed regional plant production capacity in Southeast and South Asia, and employed pixel-by-pixel correlation analysis to measure the strength of the relation between climate factors and plant production. While annual DMP in South Asia was mostly below about 2,000 kg, in most of Southeast Asia it exceeded 2,500-3,000 kg, suggesting that plant production in Southeast Asia is superior to that in South Asia. However, Rain-Use Efficiency (RUE), the dry matter production per 1 mm of precipitation, was higher in the inland Indochina Peninsula and India than in the Southeast Asian islands. The correlation analysis between climate factors and DMP showed that most of the Indochina Peninsula had negative correlation coefficients between DMP and both precipitation and temperature; the Malay Peninsula and the islands showed negative correlation with precipitation and positive correlation with temperature; and most of India, which dominates South Asia, showed positive correlation with precipitation and negative correlation with temperature. In addition, areas where the correlation coefficient exceeded |0.8| were regarded as "susceptible" to climate factors, and areas below |0.2| as "insusceptible". Following this discrimination, a map indicating the expected impacts of climate change was produced.
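The pixel-by-pixel analysis described above can be sketched as follows (illustrative only: the 0.8/0.2 cutoffs and the RUE definition follow the abstract, while the function and variable names are assumptions):

```python
import numpy as np

def pixelwise_corr(x, y):
    """Pearson correlation between two (months, rows, cols) stacks, per pixel."""
    xm = x - x.mean(axis=0)
    ym = y - y.mean(axis=0)
    num = (xm * ym).sum(axis=0)
    den = np.sqrt((xm ** 2).sum(axis=0) * (ym ** 2).sum(axis=0))
    return num / den

def rain_use_efficiency(annual_dmp, annual_precip):
    """RUE: dry matter production per 1 mm of annual precipitation."""
    return annual_dmp / annual_precip

def classify_susceptibility(r):
    """Label pixels: |r| > 0.8 -> 'susceptible', |r| < 0.2 -> 'insusceptible'."""
    out = np.full(r.shape, "intermediate", dtype=object)
    out[np.abs(r) > 0.8] = "susceptible"
    out[np.abs(r) < 0.2] = "insusceptible"
    return out
```

Applied to co-registered monthly stacks of precipitation (or temperature) and DMP, this yields one correlation coefficient, and hence one susceptibility label, per 1 km pixel.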
Abstract: In this paper, we present a robust and secure watermarking algorithm. The watermark is first transformed into the frequency domain using the discrete wavelet transform (DWT). All DWT coefficients except those of the LL band are then discarded; the retained LL coefficients are permuted and encrypted by a specific mixing. The encrypted coefficients are inserted into the most significant spectral components of the stego-image using a chaotic system. This technique makes our watermark robust against attacks by an active intruder (such as compression and geometric distortion) as well as against noise in the transmission link.
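A minimal sketch of an embedding pipeline in this spirit (assumed details, not the authors' exact scheme: a one-level Haar transform stands in for the DWT, a keyed permutation for the "specific mixing", a logistic map for the chaotic system, and `alpha` is a hypothetical embedding strength):

```python
import numpy as np

def haar_ll(img):
    """One-level Haar DWT approximation (LL) band; dimensions must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    return (a[:, 0::2] + a[:, 1::2]) / 2.0

def logistic_indices(n, total, x0=0.7, r=3.99):
    """Chaotic (logistic-map) selection of n distinct embedding positions."""
    x, picked, seen = x0, [], set()
    while len(picked) < n:
        x = r * x * (1 - x)
        i = int(x * total) % total
        if i not in seen:
            seen.add(i)
            picked.append(i)
    return np.array(picked)

def embed(cover, watermark, key=12345, alpha=0.05):
    """Embed keyed-permuted LL coefficients of the watermark into the cover."""
    ll = haar_ll(watermark).ravel()
    perm = np.random.default_rng(key).permutation(ll.size)  # keyed "mixing"
    coeffs = ll[perm]
    marked = cover.astype(float).ravel().copy()
    pos = logistic_indices(coeffs.size, marked.size)
    marked[pos] += alpha * coeffs
    return marked.reshape(cover.shape)
```

Extraction would reverse the steps with the same key and logistic-map seed, which is what makes the scheme key-dependent.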
Abstract: Hazardous Material transportation by road is coupled
with inherent risk of accidents causing loss of lives, grievous injuries,
property losses and environmental damage. The most common type of hazmat road accident is the release of hazardous substances (78%), followed by fires (28%), explosions (14%) and vapour/gas clouds (6%).
The paper first discusses the probable 'Impact Zones' likely to be caused by one flammable (LPG) and one toxic (ethylene oxide) chemical transported through a sizable segment of a State Highway connecting three notified industrial zones in Surat district in Western India, housing 26 MAH industrial units. Three 'hotspots' were identified along the highway segment based on the chemical traffic and the population distribution within 500 meters on either side. Thermal radiation and explosion overpressure have been calculated for LPG/ethylene oxide BLEVE scenarios, along with a toxic release scenario for ethylene oxide. Besides, dispersion calculations for the ethylene oxide toxic release have been made for each 'hotspot' location, and the impact zones have been mapped for the LOC concentrations. Subsequently, the maximum initial isolation and protective zones were calculated based on the ERPG-3 and ERPG-2 values of ethylene oxide respectively, estimated for the worst-case scenario under worst weather conditions. The data analysis will help the local administration in capacity building with respect to rescue/evacuation and medical preparedness, and will provide quantitative inputs to augment the District Offsite Emergency Plan document.
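The abstract's consequence figures come from standard consequence models. Purely as an illustration (not the authors' tool), a point-source estimate of BLEVE fireball thermal radiation, using widely cited empirical correlations and assumed property values, might look like:

```python
import math

def bleve_fireball(mass_kg, distance_m, heat_of_combustion=46.0e6,
                   radiative_fraction=0.3, transmissivity=1.0):
    """Point-source estimate of thermal flux (W/m^2) from a BLEVE fireball.
    Correlations and defaults are illustrative: heat of combustion ~46 MJ/kg
    (roughly LPG), radiative fraction 0.3, unit atmospheric transmissivity."""
    d = 5.8 * mass_kg ** (1 / 3)      # fireball diameter, m (empirical)
    t = 0.45 * mass_kg ** (1 / 3)     # fireball duration, s (mass < 3e4 kg)
    q = (radiative_fraction * mass_kg * heat_of_combustion * transmissivity
         / (4 * math.pi * distance_m ** 2 * t))
    return d, t, q
```

Because the model is a point source, the received flux falls off with the inverse square of distance; mapping the radius at which q crosses a damage threshold gives an impact zone of the kind discussed above.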
Abstract: In this paper, a framework for the simplification and standardization of metaheuristic parameter-tuning, applying a four-phase methodology that utilizes Design of Experiments and Artificial Neural Networks, is presented. Metaheuristics are multipurpose problem solvers used on computational optimization problems for which no efficient problem-specific algorithm exists. Their successful application to concrete problems requires finding a good initial parameter setting, which is a tedious and time-consuming task. Recent research reveals the lack of a systematic approach to this so-called parameter-tuning process: in the majority of publications, researchers provide weak motivation for their respective choices, if any. Because initial parameter settings have a significant impact on solution quality, this course of action can lead to suboptimal experimental results, and thereby an unsound basis for drawing conclusions.
Abstract: This paper presents a distributed mutual exclusion algorithm for mobile ad-hoc networks. The network is clustered hierarchically. The proposed algorithm treats the clustered network as a logical tree and develops a token-passing scheme to achieve mutual exclusion. The performance analysis and simulation results show that its message requirement is optimal, and thus the algorithm is energy efficient.
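A toy model of token passing on a logical tree (a Raymond-style scheme is assumed here for illustration; the `TreeMutex` class and its message accounting are not from the paper): a request is forwarded hop by hop toward the token holder and the token travels back, so the message count per critical-section entry is bounded by twice the tree distance.

```python
class TreeMutex:
    """Sketch of token passing on a logical tree: the message cost of a
    request is twice the tree distance between requester and token holder."""
    def __init__(self, parent):
        self.parent = parent                        # node -> parent (root: None)
        self.holder = next(n for n, p in parent.items() if p is None)

    def _path_to_root(self, node):
        path = [node]
        while self.parent[path[-1]] is not None:
            path.append(self.parent[path[-1]])
        return path

    def request_cs(self, node):
        """Grant the critical section to `node`; return the message count."""
        up, down = self._path_to_root(node), self._path_to_root(self.holder)
        shared = len(set(up) & set(down))           # nodes on the common suffix
        dist = (len(up) - shared) + (len(down) - shared)
        self.holder = node
        return 2 * dist                             # request hops + token hops
```

In a balanced clustered hierarchy the tree distance is O(log n), which is the intuition behind the optimal message requirement claimed above.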
Abstract: The amount of dissolved oxygen in a river has a great direct effect on aquatic macroinvertebrates, and this in turn indirectly influences the regional ecosystem. In this paper we try to predict dissolved oxygen in rivers by employing a simple fuzzy logic model, the Wang-Mendel method. This model uses only previous records to estimate upcoming values. For this purpose, daily and hourly records of eight stations in the Au Sable watershed in Michigan, United States, are employed for 12-year and 50-day periods respectively. Calculations indicate that for long-period prediction it is better to increase the input interval, but for filling missing data it is advisable to decrease the interval. Increasing the partitioning of the input and output features has little influence on accuracy but makes the model very time consuming. Increasing the number of input data acts similarly to increasing the number of partitions. A large amount of training data does not essentially improve accuracy, so an optimum training length should be selected.
Abstract: The use of amine mixtures employing methyldiethanolamine (MDEA), monoethanolamine (MEA), and diethanolamine (DEA) has been investigated for a variety of cases using a process simulation program called HYSYS. The results show that, at high pressures, amine mixtures have little or no advantage in the cases studied. As the pressure is lowered, it becomes more difficult for MDEA to meet residual gas requirements, and mixtures can usually improve plant performance. Since the CO2 reaction rate with primary and secondary amines is much faster than with MDEA, the addition of small amounts of primary or secondary amines to an MDEA-based solution should greatly improve the overall reaction rate of CO2 with the amine solution. The addition of MEA caused the CO2 to be absorbed more strongly in the upper portion of the column than with MDEA alone. On the other hand, when the MEA concentration is raised to 11 wt%, CO2 is almost completely absorbed in the lower portion of the column, where the addition of MEA would be most advantageous.
Thus, in areas where MDEA cannot meet the residual gas requirements, the use of amine mixtures can usually improve plant performance.
Abstract: The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. Contrary to IS-95 or cdmaOne systems (spread-spectrum systems of the preceding generation), the new standard, called Universal Mobile Telecommunications System (UMTS), uses long codes in the downlink. The system is designed for both voice communication and data transmission. The downlink is particularly important because of the asymmetrical data demand, i.e., more data is downloaded towards the mobiles than uploaded towards the base station. Moreover, in the downlink the UMTS uses orthogonal spreading with a variable spreading factor (OVSF, for Orthogonal Variable Spreading Factor). This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factor of the other users. In the current UMTS standard, two techniques to increase downlink performance have been proposed: transmit antenna diversity and space-time codes. These two techniques combat only fading. The receiver proposed for the mobile station is the RAKE, but one can imagine a more sophisticated receiver, able to reduce multi-user interference and the impact of coloured noise and narrowband interference. In this context, where the users have synchronized long codes with variable spreading factors and the mobile is unaware of the other active codes/users, the use of pseudo-noise code sequences of different lengths is presented as one of the most appropriate solutions.
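The OVSF codes mentioned above are generated recursively: each code c of length n spawns two mutually orthogonal children [c, c] and [c, -c] of length 2n, which is what lets users with different spreading factors coexist. A small sketch:

```python
def ovsf_codes(sf):
    """All OVSF codes with spreading factor sf (a power of 2), generated by
    the standard recursion: parent c -> children [c, c] and [c, -c]."""
    codes = [[1]]
    while len(codes[0]) < sf:
        nxt = []
        for c in codes:
            nxt.append(c + c)                    # C(2k)   = [c,  c]
            nxt.append(c + [-v for v in c])      # C(2k+1) = [c, -c]
        codes = nxt
    return codes
```

All codes at one level are pairwise orthogonal (their inner product is zero), so reducing one user's spreading factor, i.e. moving it up the code tree, only blocks that code's own subtree.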
Abstract: The structure of the retinal vessels is a prominent feature that reveals information on the state of disease, reflected in the form of measurable abnormalities in thickness and colour. In this paper, the retinal vascular structure is analyzed for the implementation of a clinical diabetic retinopathy decision-making system. The retinal vasculature consists of thin blood vessels, so measurement accuracy is highly dependent upon vessel segmentation. In this paper the blood vessel thickness is automatically detected using preprocessing techniques and a vessel segmentation algorithm. First the captured image is binarized to bring out the blood vessel structure clearly; it is then skeletonised to obtain the overall structure of all the terminal and branching nodes of the blood vessels. By identifying the terminal nodes and the branching points automatically, the thickness of the main and branching blood vessels is estimated. Results are presented and compared with those provided by clinical classification on 50 vessels collected from Bejan Singh Eye Hospital.
Abstract: The present article considers the Blended Learning model, its advantages in foreign language teaching, and some problems that can arise during its use. Blended Learning is a special organization of learning which combines classroom work with modern technologies in an electronic distance-teaching environment. Nowadays many European educational institutions and companies use this technology. Through this method, the student gets the opportunity to learn in a group (classroom) with a teacher and additionally at home at a convenient time; the student sets the optimal speed and intensity of the learning process himself; and the method helps the student to discipline himself and learn to work independently.
Abstract: In this paper we investigate the electrical characteristics of a new gate-all-around strained-silicon nanowire field effect transistor (FET) structure with dual dielectrics, by changing the radius (RSiGe) of the silicon-germanium (SiGe) wire and the gate dielectric. In particular, the effect of the high-κ dielectric on Field-Induced Barrier Lowering (FIBL) has been studied. Due to the higher electron mobility in tensile strained silicon, n-type FETs with a strained silicon channel have better drain current compared with the pure-Si device. In this structure the gate dielectric is divided into two parts: we have used a high-κ dielectric near the source and a low-κ dielectric near the drain to reduce short channel effects. With this structure, short channel effects such as FIBL are reduced; moreover, increasing RSiGe improves the ID-VD characteristics. The leakage current and transfer characteristics, the threshold voltage (Vt), and the drain-induced barrier lowering (DIBL) are estimated with respect to gate bias (VG), RSiGe and different gate dielectrics. For short channel effects such as DIBL, the gate-all-around strained-silicon nanowire FET has characteristics similar to those of the pure-Si device, while the dual dielectrics can improve short channel effects in this structure.
Abstract: This paper presents a Particle Swarm Optimization (PSO) method for determining the optimal parameters of a first-order controller for a TCP/AQM system. The TCP/AQM model is described by a second-order system with time delay. First, an analytical approach, based on the D-decomposition method and Kharitonov's lemma, is used to determine the stabilizing regions of a first-order controller. Second, the optimal parameters of the controller are obtained by the PSO algorithm. Finally, the proposed method is implemented in the Network Simulator NS-2 and compared with the PI controller.
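A minimal PSO sketch (illustrative: in the paper the cost would come from the TCP/AQM closed loop and the search space from the stabilizing regions, whereas here `f` is any user-supplied cost and all hyperparameters are assumed values):

```python
import numpy as np

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimizing f over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))   # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                       # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

For controller tuning, `bounds` would cover the controller's two gains and `f` would be a closed-loop performance index evaluated by simulation.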
Abstract: In this paper an algorithm is used to detect the color defects of ceramic tiles. First, the image of a normal tile is clustered using GCMA, the Genetic C-means Clustering Algorithm, which yields the best cluster centers. C-means is a common clustering algorithm which optimizes an objective function based on a distance measure between data points and the cluster centers in the data space; here the objective function is the mean square error. After finding the best centers, each pixel of the image is assigned to the cluster with the closest center. Then, the maximum error of each cluster is computed: for each cluster, the maximum error is the largest distance between its center and the pixels which belong to it. After computing the errors, all the pixels of the defective tile image are clustered based on the centers obtained from the normal tile image in the previous stage. Pixels whose distance from their cluster center exceeds the maximum error of that cluster are considered defective.
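The assignment and defect test described above can be sketched as follows for 1-D intensity values (the genetic search for the centers is omitted, and the 1-D simplification and names are assumptions):

```python
import numpy as np

def max_cluster_errors(pixels, centers):
    """Assign each normal-tile pixel to its nearest center and return, per
    cluster, the maximum center-to-member distance (and the labels)."""
    labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
    errs = np.array([np.abs(pixels[labels == k] - centers[k]).max(initial=0.0)
                     for k in range(len(centers))])
    return errs, labels

def defective_pixels(test_pixels, centers, max_err):
    """Flag test-tile pixels farther from their nearest (normal-tile) center
    than that cluster's maximum error."""
    labels = np.abs(test_pixels[:, None] - centers[None, :]).argmin(axis=1)
    dist = np.abs(test_pixels - centers[labels])
    return dist > max_err[labels]
```

For color images the same logic applies with per-channel vectors and Euclidean distances in color space.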
Abstract: An important step in studying the statistics of fingerprint minutiae features is to reliably extract them from fingerprint images. A new, reliable computational method for minutiae feature extraction from fingerprint images is presented. A
fingerprint image is treated as a textured image. An orientation flow
field of the ridges is computed for the fingerprint image. To
accurately locate ridges, a new ridge orientation based computation
method is proposed. After ridge segmentation a new method of
computation is proposed for smoothing the ridges. The ridge skeleton
image is obtained and then smoothed using morphological operators
to detect the features. A post processing stage eliminates a large
number of false features from the detected set of minutiae features.
The detected features are observed to be reliable and accurate.
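Minutiae (ridge endings and bifurcations) are commonly detected on the ridge skeleton with the crossing-number test; the abstract does not give its exact rule, so the standard formulation is sketched here as an illustration:

```python
import numpy as np

def crossing_number(skel):
    """Classify skeleton pixels by the crossing number CN (half the number of
    0/1 transitions around the 8-neighbourhood): CN == 1 is a ridge ending,
    CN == 3 a bifurcation. skel is a 0/1 array (1 = ridge)."""
    endings, bifurcations = [], []
    # 8-neighbour offsets in circular order around the pixel
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, skel.shape[0] - 1):
        for j in range(1, skel.shape[1] - 1):
            if not skel[i, j]:
                continue
            p = [int(skel[i + di, j + dj]) for di, dj in nbrs]
            cn = sum(abs(p[k] - p[(k + 1) % 8]) for k in range(8)) // 2
            if cn == 1:
                endings.append((i, j))
            elif cn == 3:
                bifurcations.append((i, j))
    return endings, bifurcations
```

A post-processing stage, as in the abstract, would then prune spurious features (short spurs, breaks, border artifacts) from these candidate lists.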
Abstract: Quality control charts are very effective in detecting out-of-control signals. However, when a control chart signals an out-of-control condition of the process mean, searching for a special cause in the vicinity of the signal time does not always lead to prompt identification of the source(s) of the out-of-control condition, because the change point in the process parameter(s) usually differs from the signal time. It is very important for the manufacturer to determine at what point in the past, and in which parameters, the change that caused the signal occurred. Early warning of process change would expedite the search for the special causes and enhance quality at lower cost. In this paper the quality variables under investigation are assumed to follow a multivariate normal distribution with known mean vector and variance-covariance matrix; the process means after a one-step change remain at the new level until the special cause is identified and removed; and it is assumed that only one variable can change at a time. This research applies an artificial neural network (ANN) to identify the time the change occurred and the parameter which caused the change or shift. The performance of the approach was assessed through a computer simulation experiment. The results show that the neural network performs effectively and equally well over the whole range of shift magnitudes considered.
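The paper trains an ANN for this task; as a simpler classical stand-in (not the authors' method), the maximum-likelihood change-point estimator for a one-step mean shift with known in-control mean illustrates the problem being solved:

```python
import numpy as np

def estimate_change_point(x, mu0):
    """MLE of the step-change time for a univariate process with known
    in-control mean mu0: choose the split maximizing the squared deviation
    of the post-change sample mean, weighted by the post-change length."""
    best_t, best_stat = 0, -np.inf
    for t in range(len(x) - 1):
        tail = x[t + 1:]
        stat = len(tail) * (tail.mean() - mu0) ** 2
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t + 1    # index of the first observation at the new level
```

The ANN approach generalizes this idea to the multivariate case, simultaneously estimating which of the correlated variables shifted.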
Abstract: In this paper, based on the estimation of the Cauchy matrix of linear impulsive differential equations, by using the Banach fixed point theorem and the Gronwall-Bellman inequality, some sufficient conditions are obtained for the existence and exponential stability of almost periodic solutions of Cohen-Grossberg shunting inhibitory cellular neural networks (SICNNs) with continuously distributed delays and impulses. An example is given to illustrate the main results.
Abstract: The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended to the more general concept of structural risk minimization (SRM). SVM is playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Abstract: Image convolution similar to the receptive fields
found in mammalian visual pathways has long been used in
conventional image processing in the form of Gabor masks.
However, no VLSI implementation of parallel, multi-layered pulsed
processing has been brought forward which would emulate this
property. We present a technical realization of such a pulsed image
processing scheme. The discussed IC also serves as a general testbed
for VLSI-based pulsed information processing, which is of interest
especially with regard to the robustness of representing an analog
signal in the phase or duration of a pulsed, quasi-digital signal, as
well as the possibility of direct digital manipulation of such an
analog signal. The network connectivity and processing properties
are reconfigurable so as to allow adaptation to various processing
tasks.
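The Gabor masks referred to above have a standard closed form: a sinusoidal carrier under a Gaussian envelope, modelling an orientation-selective receptive field. A sketch (parameter names and defaults are assumed):

```python
import numpy as np

def gabor_mask(size, wavelength, theta, sigma):
    """Real-valued Gabor mask of shape (size, size), size odd: a cosine
    carrier of the given wavelength along orientation theta, under an
    isotropic Gaussian envelope of scale sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier
```

Convolving an image with a bank of such masks at several orientations gives the kind of receptive-field filtering that the pulsed VLSI network above emulates.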
Abstract: The importance of software quality is increasing, leading to the development of new sophisticated techniques which can be used in constructing models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation here includes estimating the maintainability of software. The dependent variable in our study was maintenance effort; the independent variables were principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. Thus we found the ANN method useful in constructing software quality models.
Abstract: We present a new method for the fully automatic 3D
reconstruction of the coronary artery centerlines, using two X-ray
angiogram projection images from a single rotating monoplane
acquisition system. During the first stage, the input images are
smoothed using curve evolution techniques. Next, a simple yet
efficient multiscale method, based on the information of the Hessian
matrix, for the enhancement of the vascular structure is introduced.
Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
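Hysteresis thresholding, used above to threshold the arteries, keeps all pixels above a high threshold plus any weaker pixels connected to them. A plain-NumPy sketch (the quantile choice and 8-connectivity are assumptions; the growth is done by repeatedly dilating the strong mask inside the weak mask):

```python
import numpy as np

def hysteresis_threshold(img, low, high):
    """Binary mask of pixels above `high`, plus pixels above `low` that are
    8-connected to them (iterative dilation of the strong seed mask)."""
    weak = img > low
    strong = img > high
    prev = np.zeros_like(strong)
    while not np.array_equal(prev, strong):
        prev = strong.copy()
        grown = strong.copy()
        for di in (-1, 0, 1):                     # 8-neighbour dilation via
            for dj in (-1, 0, 1):                 # shifted copies of the mask
                shifted = np.zeros_like(strong)
                src = strong[max(0, -di):strong.shape[0] - max(0, di),
                             max(0, -dj):strong.shape[1] - max(0, dj)]
                shifted[max(0, di):strong.shape[0] - max(0, -di),
                        max(0, dj):strong.shape[1] - max(0, -dj)] = src
                grown |= shifted
        strong = grown & weak
    return strong
```

With the vesselness response as `img`, the high threshold picks confident vessel cores and the low threshold recovers the fainter vessel continuations attached to them, while isolated noise below `high` is rejected.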