Abstract: This research work is aimed at speech recognition
using scaly neural networks. A small vocabulary of 11 words was
established first; these words are “word, file, open, print, exit, edit,
cut, copy, paste, doc1, doc2". The chosen words are associated with
executing computer functions such as opening a file, printing a
text document, cutting, copying, pasting, editing, and exiting.
Each word is introduced to the computer and then subjected to a feature
extraction process using LPC (linear prediction coefficients). These features are
used as input to an artificial neural network in speaker-dependent
mode. Half of the word samples are used for training the artificial neural
network and the other half are used for testing the system, which is
then used for information retrieval.
The system consists of three parts: speech
processing and feature extraction, training and testing using neural
networks, and information retrieval.
The retrieval process proved to be 79.5-88% successful, which is
quite acceptable considering variations in the surroundings, the state of
the speaker, and the microphone type.
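The LPC front end described above can be sketched with the standard autocorrelation method and Levinson-Durbin recursion; the frame length, model order, and test signal below are illustrative assumptions, not values from the paper.

```python
import math

def lpc(frame, order):
    """Estimate linear prediction coefficients for one speech frame
    via the autocorrelation method and Levinson-Durbin recursion."""
    n = len(frame)
    # Autocorrelation lags r[0..order]
    r = [sum(frame[i] * frame[i + k] for i in range(n - k)) for k in range(order + 1)]
    a = [0.0] * (order + 1)   # a[0] is implicitly 1
    e = r[0]                  # prediction-error energy
    for m in range(1, order + 1):
        acc = r[m] - sum(a[j] * r[m - j] for j in range(1, m))
        k = acc / e           # reflection coefficient
        new_a = a[:]
        new_a[m] = k
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a = new_a
        e *= (1.0 - k * k)
    return a[1:], e           # predictor coefficients and residual energy

# A pure sinusoid is almost perfectly predictable by an order-2 model:
# x[t] ~ 2*cos(w)*x[t-1] - x[t-2].
frame = [math.sin(0.2 * t) for t in range(160)]
coeffs, err = lpc(frame, 2)
```

In a real front end these coefficients would be computed per windowed frame and fed to the network as the feature vector.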
Abstract: In ad hoc networks, the main issue in designing protocols is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. In fact, protocols that minimize the power consumption of sensors receive the most attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, a technique that combines related data and prevents the transmission of redundant packets, can be effective in reducing the number of transmitted packets. Because information processing consumes less power than information transmission, data aggregation is of great importance, and for this reason the technique is used in many protocols [5]. One data aggregation technique is to use a data aggregation tree, but finding an optimum data aggregation tree for collecting data in a network with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined in intermediate nodes to form one packet, so the number of packets transmitted in the network is reduced; therefore, less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used to solve this NP-hard problem, and one such optimization method is Simulated Annealing. In this article, we propose a new method for building the data collection tree in wireless sensor networks using the Simulated Annealing algorithm, and we evaluate its efficiency against a Genetic Algorithm.
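As a rough illustration of the idea (not the authors' algorithm), a simulated-annealing search over aggregation trees rooted at the sink can be sketched as follows; the field size, the energy model (sum of squared link distances), and the cooling schedule are all invented for this example.

```python
import math
import random

random.seed(1)

# Hypothetical sensor field: node 0 is the sink, the rest are sensors.
nodes = [(0.0, 0.0)] + [(random.uniform(0, 100), random.uniform(0, 100))
                        for _ in range(19)]

def cost(parent):
    """Total transmission energy, modeled as the sum of squared link distances."""
    return sum((nodes[i][0] - nodes[parent[i]][0]) ** 2 +
               (nodes[i][1] - nodes[parent[i]][1]) ** 2
               for i in range(1, len(nodes)))

def creates_cycle(parent, child, new_parent):
    """Would attaching `child` under `new_parent` create a cycle?"""
    node = new_parent
    while node != 0:
        if node == child:
            return True
        node = parent[node]
    return False

def simulated_annealing(iters=20000, t0=5000.0, alpha=0.9995):
    parent = [0] * len(nodes)        # start: every sensor sends straight to the sink
    cur_cost = best_cost = cost(parent)
    best, t = parent[:], t0
    for _ in range(iters):
        child = random.randrange(1, len(nodes))
        new_parent = random.randrange(len(nodes))
        if new_parent == child or creates_cycle(parent, child, new_parent):
            continue
        old = parent[child]
        parent[child] = new_parent
        new_cost = cost(parent)
        # Metropolis acceptance: always take improvements, sometimes take worse moves.
        if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            if cur_cost < best_cost:
                best, best_cost = parent[:], cur_cost
        else:
            parent[child] = old      # reject the move
        t *= alpha                   # cool down
    return best, best_cost

tree, energy = simulated_annealing()
```

The neighbor move (re-parenting one node, with a cycle check) keeps every candidate a valid tree rooted at the sink, so the annealer only ever explores feasible aggregation trees.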
Abstract: Researchers have been applying artificial and computational intelligence (AI/CI) methods to computer games. In this research field, further research is required to compare AI/CI
methods with respect to each game application. In this paper, we present
our experimental result on the comparison of three evolutionary algorithms (evolution strategy, genetic algorithm, and their hybrid)
applied to evolving controller agents for the CIG 2007 Simulated Car Racing competition. Our experimental result shows that premature
convergence of solutions was observed in the case of ES, and GA outperformed ES in the last half of the generations. Moreover, a hybrid
which uses GA first and ES next evolved the best solution among all the solutions generated. This result shows the ability of GA to
globally search promising areas in the early stage and the ability of ES to locally search the focused area (fine-tuning solutions).
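The GA-first, ES-next hybrid can be illustrated on a toy minimization problem; the objective function, population size, and operator settings below are assumptions made for the sketch, not the competition controller setup.

```python
import random

random.seed(0)

def fitness(x):
    """Toy objective to minimize; optimum at 1.234 in every dimension."""
    return sum((xi - 1.234) ** 2 for xi in x)

DIM, POP = 5, 30

def ga_phase(gens=60):
    """Global search: tournament selection, uniform crossover, Gaussian mutation."""
    pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
    for _ in range(gens):
        nxt = []
        for _ in range(POP):
            a = min(random.sample(pop, 3), key=fitness)   # tournament of 3
            b = min(random.sample(pop, 3), key=fitness)
            child = [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]
            if random.random() < 0.3:                     # mutate one gene
                i = random.randrange(DIM)
                child[i] += random.gauss(0, 0.5)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

def es_phase(x, gens=200, sigma=0.1):
    """Local fine-tuning: (1+1)-ES with 1/5th-success-rule step-size control."""
    successes = 0
    for g in range(1, gens + 1):
        y = [xi + random.gauss(0, sigma) for xi in x]
        if fitness(y) < fitness(x):
            x, successes = y, successes + 1
        if g % 10 == 0:                  # adapt sigma every 10 trials
            sigma *= 1.5 if successes > 2 else 0.7
            successes = 0
    return x

best = es_phase(ga_phase())   # GA locates the promising region, ES fine-tunes
```

The division of labor mirrors the abstract's finding: the GA's recombination explores broadly early on, while the self-adaptive ES step size refines the best candidate locally.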
Abstract: Despite the widespread application of finite
mixture models in segmentation, finite mixture model selection is
still an important issue. In fact, the selection of an adequate number
of segments is a key issue in deriving latent segment structures, and
it is desirable that the selection criteria used for this end are effective.
In order to select among several information criteria which may
support the selection of the correct number of segments, we conduct a
simulation study. In particular, this study is intended to determine
which information criteria are more appropriate for mixture model
selection when considering data sets with only categorical
segmentation base variables. The generation of mixtures of
multinomial data supports the proposed analysis. As a result, we
establish a relationship between the level of measurement of the
segmentation variables and the performance of eleven information
criteria. The criterion AIC3 shows the best performance (it
indicates the correct number of segments in the simulated structure
more often) when referring to mixtures of multinomial segmentation
base variables.
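For reference, AIC3 differs from the classical AIC only in charging 3 (rather than 2) per free parameter. A minimal sketch of the selection step, with invented fit results standing in for the simulation study's EM fits:

```python
def aic3(log_lik, n_params):
    """AIC3 penalizes each free parameter by 3 (vs. 2 for classical AIC)."""
    return -2.0 * log_lik + 3.0 * n_params

# Hypothetical fit results: (log-likelihood, free-parameter count)
# for mixtures with 1, 2, and 3 segments.
fits = {1: (-500.0, 5), 2: (-430.0, 11), 3: (-428.0, 17)}
scores = {k: aic3(ll, p) for k, (ll, p) in fits.items()}
best_k = min(scores, key=scores.get)   # picks 2: the small fit gain from a
                                       # 3rd segment is not worth 6 more parameters
```

With these numbers the 2-segment model wins (score 893 vs. 1015 and 907), showing how the stiffer penalty guards against overestimating the number of segments.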
Abstract: Image compression using artificial neural networks
is a topic where research is being carried out in various directions
towards achieving a generalized and economical network.
Feedforward networks using the back-propagation algorithm, adopting
the method of steepest descent for error minimization, are popular and
widely adopted, and are directly applied to image compression.
Various research works are directed towards achieving quick
convergence of the network without loss of quality of the restored
image. In general, the images used for compression are of different
types, such as dark images, high-intensity images, etc. When these images
are compressed using a back-propagation network, it takes a long
time to converge. The reason is that the given image may
contain a number of distinct gray levels with narrow differences from
their neighborhood pixels. If the gray levels of the pixels in an image
and their neighbors are mapped in such a way that the difference in
the gray levels of the neighbors with the pixel is minimal, then
both the compression ratio and the convergence of the network can be
improved. To achieve this, a cumulative distribution function is
estimated for the image and used to map the image pixels. When
the mapped image pixels are used, the back-propagation neural
network yields a high compression ratio and converges quickly.
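The CDF-based mapping amounts to passing each gray level through the image's empirical cumulative distribution, much as in histogram equalization. A minimal sketch (the "dark image" below is synthetic):

```python
def cdf_map(pixels, levels=256):
    """Map gray levels through the image's empirical CDF so that mapped
    levels are spread evenly across the range, which is the kind of
    preprocessing the abstract describes before compression."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    cdf, total = [0.0] * levels, 0
    for g in range(levels):
        total += hist[g]
        cdf[g] = total / n          # empirical cumulative distribution
    return [round(cdf[p] * (levels - 1)) for p in pixels]

# A 'dark image': all gray levels squeezed into [10, 29].
dark = [10 + (i % 20) for i in range(400)]
mapped = cdf_map(dark)
```

After mapping, the 20 crowded dark levels are spread over nearly the full 0-255 range, which is the effect credited with improving both compression ratio and convergence.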
Abstract: This paper presents a region-based segmentation method for ultrasound images using local statistics. In this segmentation approach, the homogeneous regions depend on the image granularity features, where the structures of interest, with dimensions comparable to the speckle size, are to be extracted. The method uses a look-up table comprising the local statistics of every pixel, which consist of the homogeneity and similarity bounds according to the kernel size. The shape and size of the growing regions depend on these look-up table entries. The algorithm is implemented using a connected seeded region growing procedure where each pixel is taken as a seed point. Region merging after region growing also suppresses high-frequency artifacts. The updated merged regions produce the output in the form of a segmented image. This algorithm produces results that are less sensitive to pixel location and also allows accurate segmentation of homogeneous regions.
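A connected seeded region-growing step of the kind described can be sketched as follows; here the homogeneity test is a simple tolerance around the running region mean rather than the paper's look-up-table statistics, and the toy image is invented.

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a connected region from `seed`, admitting 4-neighbors whose
    gray level stays within `tol` of the running region mean."""
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                mean = total / len(region)
                if abs(img[nr][nc] - mean) <= tol:   # homogeneity bound
                    region.add((nr, nc))
                    total += img[nr][nc]
                    q.append((nr, nc))
    return region

# Toy image: a bright 'structure' (values ~100) on a dark background (~10).
img = [[100 if 1 <= r <= 3 and 1 <= c <= 3 else 10 for c in range(6)]
       for r in range(5)]
grown = region_grow(img, (2, 2), tol=20)
```

In the full method every pixel serves as a seed in turn and the resulting regions are merged afterwards to suppress high-frequency artifacts.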
Abstract: Solid fuel transient burning behavior under oxidizer
gas flow is numerically investigated. This is done by analyzing the
regression rate responses to sudden and oscillatory
variations imposed on the inflow properties. The conjugate problem is considered
by simultaneously solving the flow and solid-phase governing
equations to compute the fuel regression rate. The advection
upstream splitting method is used as the flow computational scheme in
a finite volume method. The ignition phase is completely simulated to
obtain the exact initial condition for the response analysis. The results
show that the transient burning effects which lead to combustion
instabilities and intermittent extinctions can be observed in solid
fuels, as in solid propellants.
Abstract: Despite the recent surge of research in control of
worm propagation, currently, there is no effective defense system
against such cyber attacks. We first design a distributed detection
architecture called Detection via Distributed Blackholes (DDBH).
Our novel detection mechanism could be implemented via virtual
honeypots or honeynets. Simulation results show that a worm can be
detected with virtual honeypots on only 3% of the nodes. Moreover,
the worm is detected when less than 1.5% of the nodes are infected.
We then develop two control strategies: (1) optimal dynamic traffic-blocking,
for which we determine the condition that guarantees a
minimum number of removed nodes when the worm is contained, and
(2) predictive dynamic traffic-blocking, a realistic deployment of
the optimal strategy on scale-free graphs. The predictive dynamic
traffic-blocking, coupled with the DDBH, ensures that more than
40% of the network is unaffected by the propagation at the time
when the worm is contained.
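The intuition behind detecting a worm with only a small fraction of honeypot nodes can be sketched with a toy random-scanning model; the address-space size, probe model, and 3% honeypot fraction are illustrative assumptions, and this is not the DDBH simulation itself.

```python
import random

random.seed(42)

N = 10000                                                  # toy address space
honeypots = set(random.sample(range(N), int(0.03 * N)))    # 3% virtual honeypots
infected = {random.randrange(N)}                           # patient zero

detected_at = None
while detected_at is None:
    # Each infected node probes one uniformly random address per step.
    for node in list(infected):
        target = random.randrange(N)
        if target in honeypots:
            # First probe that lands on a honeypot raises the alarm.
            detected_at = len(infected) / N
            break
        infected.add(target)       # non-honeypot targets become infected

infected_fraction = detected_at
```

Because each scan hits a honeypot with probability 3%, only a few dozen probes are expected before detection, so the infected fraction at alarm time stays very small, in the spirit of the abstract's "less than 1.5% of the nodes" result.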
Abstract: Let X̃ be a connected space, let X be a space, let p : X̃ → X be a continuous map, and let (X̃, p) be a covering space of X. In the first section we give some preliminaries on covering spaces and their automorphism groups. In the second section we derive some algebraic properties of both universal and regular covering spaces (X̃, p) of X and of their automorphism groups A(X̃, p).
Abstract: The purpose of this study was to analyze the correlation
between permitted building areas and housing distribution ratios and
their fluctuation, and to test a distribution model during 3 successive governments in 5 cities, including Bucheon, with reference to the time-series
administrative data, and thereby to interpret the results of the analysis in association with the policies pursued by the successive
governments, in order to examine the structural fluctuation of permitted building areas and housing distribution ratios.
To analyze the fluctuation of permitted building areas and
housing distribution ratios during the 3 successive governments and
examine the cycles of the time series data, a spectral analysis was performed; to analyze the correlation between permitted
building areas and housing distribution ratios, a tabulation was performed to describe the correlations statistically; and to
explain the differences in the fluctuation distribution of permitted building areas and housing distribution ratios among the 3 governments,
a goodness-of-fit test was conducted.
Abstract: This paper analytically investigates the 3D flow
pattern at the confluence of two rectangular channels meeting at a 90°
angle, using the Navier-Stokes equations with the Reynolds Stress
Turbulence Model (RSM). The equations are solved by the Finite-
Volume Method (FVM), and the flow is analyzed under steady-state
(single-phase) conditions. The Shumate experimental findings
were used to test the validity of the data. Comparison of the simulation
model with the experimental results indicated close agreement
between the flow patterns of the two sets. The effect of the discharge
ratio on the dimensions of the separation zone created in the main
channel downstream of the confluence indicated an inverse relation, where a
decrease in discharge ratio entails an increase in the length and
width of the separation zone. The study also found the model to be a
powerful analytical tool for the feasibility study of hydraulic
engineering projects.
Abstract: One major issue that is regularly cited as a block to
the widespread use of online assessments in eLearning is that of the
authentication of the student and the level of confidence that an
assessor can have that the assessment was actually completed by that
student. Currently, this issue is either ignored, in which case
confidence in the assessment and any ensuing qualification is
damaged, or else assessments are conducted at central, controlled
locations at specified times, losing the benefits of the distributed
nature of the learning programme. Particularly as we move towards
constructivist models of learning, with intentions towards achieving
heutagogic learning environments, the benefits of a properly
managed online assessment system are clear. Here we discuss some
of the approaches that could be adopted to address these issues,
looking at the use of existing security and biometric techniques,
combined with some novel behavioural elements. These approaches
offer the opportunity to validate the student on accessing an
assessment, on submission, and also during the actual production of
the assessment. These techniques are currently under development in
the DECADE project, and future work will evaluate and report their
use.
Abstract: In this study, a synthetic pathway was created by
assembling genes from Clostridium butyricum and Escherichia coli
in different combinations. Among the genes were dhaB1 and dhaB2
from C. butyricum VPI1718, coding for glycerol dehydratase (GDHt)
and its activator (GDHtAc), respectively, which are involved in the conversion
of glycerol to 3-hydroxypropionaldehyde (3-HPA). The yqhD gene
from E. coli BL21 was also included, which codes for an NADPH-dependent
1,3-propanediol oxidoreductase isoenzyme (PDORI)
reducing 3-HPA to 1,3-propanediol (1,3-PD). Molecular modeling
analysis indicated that the conformation of the fusion protein of YQHD
and DHAB1 was favorable for direct molecular channeling of the
intermediate 3-HPA. According to the simulation results, the yqhD
and dhaB1 genes were assembled upstream of dhaB2 to express
a fusion protein, yielding the recombinant strain E. coli BL21
(DE3)//pET22b+::yqhD-dhaB1_dhaB2 (strain BP41Y3). Strain
BP41Y3 gave a 10-fold higher 1,3-PD concentration than E. coli BL21
(DE3)//pET22b+::yqhD-dhaB1_dhaB2 (strain BP31Y2), which expresses
the recombinant enzymes simultaneously but in a non-fusion mode.
This is the first report using a gene fusion approach to enhance the
biological conversion of glycerol to the value-added compound 1,3-PD.
Abstract: Voltage flicker problems have long existed in several
of the distribution areas served by the Taiwan Power Company.
Past research results indicated that the ΔV10
value estimated by the conventional method is significantly smaller than
the surveyed value. This paper studies the relationship between
voltage flicker problems and harmonic power variation in
power systems with electric arc furnaces, and discusses
the effect of harmonic power fluctuation on the flicker
estimate value. Methods of field measurement, statistics, and
simulation are used. The survey results demonstrate that the ΔV10
estimate must account for the effect of harmonic power variation.
Abstract: This questionnaire-based study aimed to measure and
compare the awareness of English reading strategies among EFL
learners at Bangkok University (BU), classified by their gender, field
of study, and English learning experience. Proportional stratified
random sampling was employed to formulate a sample of 380 BU
students. The data were statistically analyzed in terms of the mean
and standard deviation. A t-test analysis was used to find differences in
awareness of reading strategies between two groups (male and
female / science and social-science students). In addition, one-way
analysis of variance (ANOVA) was used to compare reading strategy
awareness among BU students with different lengths of English
learning experience. The results of this study indicated that the
overall awareness of reading strategies of EFL learners at BU was at
a high level (mean = 3.60) and that there was no statistically significant
difference between males and females, or among students with
different lengths of English learning experience, at the significance
level of 0.05. However, significant differences among students
coming from different fields of study were found at the same level of
significance.
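A two-group comparison of this kind can be computed directly with Welch's t statistic (which does not assume equal variances); the scores below are hypothetical Likert-scale values for illustration only, chosen so the two groups do not differ.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical 5-point awareness scores for two groups with equal means.
males = [3.4, 3.8, 3.1, 3.6, 3.5, 3.7, 3.3, 3.9]
females = [3.5, 3.6, 3.2, 3.8, 3.4, 3.6, 3.5, 3.7]
t, df = welch_t(males, females)
```

A |t| near zero (as here, since the invented group means coincide) corresponds to the "no statistically significant difference" finding; in practice t would be compared against the critical value for df at the 0.05 level.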
Abstract: In general, image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours, so it is important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that homogeneous regions in an image are usually close together on a smooth region, and therefore the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method using a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method can label regions of the input image into coarse categories: “ground", “sky", and “vertical" for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
Abstract: The performance and complexity of QoS routing depend on the complex interaction between a large set of parameters. This paper investigates the scaling properties of source-directed link-state routing in large core networks. The simulation results show that the routing algorithm, network topology, and link cost function each have a significant impact on the probability of successfully routing new connections. The experiments confirm and extend the findings of other studies, and also lend new insight into designing efficient quality-of-service routing policies in large networks.
Abstract: Iterative learning control (ILC) aims to achieve zero tracking
error of a specific command. This is accomplished by iteratively
adjusting the command given to a feedback control system, based on
the tracking error observed in the previous iteration. One would like
the iterations to converge to zero tracking error in spite of any error
present in the model used to design the learning law. First, this need
for stability robustness is discussed, and then the need for robustness
of the property that the transients are well behaved. Methods of
producing the needed robustness to parameter variations and to
singular perturbations are presented. Then a method involving
reverse-time runs is given that lets the real-world behavior produce the
ILC gains in such a way as to eliminate the need for a mathematical
model. Since the real world is producing the gains, there is no issue
of model error. Provided the world behaves linearly, the approach
gives an ILC law with both stability robustness and good transient
robustness, without the need to generate a model.
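The basic iteration-to-iteration update the abstract refers to can be sketched on a toy first-order plant; the plant dynamics, learning gain, and horizon below are assumptions chosen so that the iteration provably contracts, not values from the paper.

```python
def run_ilc(iterations=50, gamma=0.5):
    """P-type ILC on a toy first-order plant y[t+1] = 0.2*y[t] + u[t].
    Each iteration replays the whole trial, then updates the command with
    the previous trial's error: u_{k+1}[t] = u_k[t] + gamma * e_k[t+1]."""
    T = 20
    ref = [1.0] * (T + 1)                # step reference to track
    u = [0.0] * T                        # initial command: all zeros
    err_norm = 0.0
    for _ in range(iterations):
        # Run one trial of the plant from rest.
        y = [0.0] * (T + 1)
        for t in range(T):
            y[t + 1] = 0.2 * y[t] + u[t]
        e = [ref[t] - y[t] for t in range(T + 1)]
        # Learning update: feed back the shifted error from this trial.
        u = [u[t] + gamma * e[t + 1] for t in range(T)]
        err_norm = max(abs(x) for x in e[1:])
    return err_norm

final_error = run_ilc()
```

With these numbers the iteration map on the lifted error has infinity-norm at most 0.625, so the trial-to-trial tracking error shrinks geometrically toward zero, which is the convergence behavior a well-designed learning law is meant to guarantee.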
Abstract: This paper examines the relationships between and
among the various drivers of climate change that have both climatic
and ecological consequences for vegetation and land cover change in
arctic areas, particularly in arctic Alaska. It discusses the various
processes that have created spatial and climatic structures that have
facilitated observable vegetation and land cover changes in the
Arctic. Also, it indicates that the drivers of both climatic and
ecological changes in the Arctic are multi-faceted and operate in a
system with both positive and negative feedbacks that largely results
in further increases or decreases of the initial drivers of climatic and
vegetation change mainly at the local and regional scales. It
demonstrates that the impact of arctic warming on land cover change
and the Arctic ecosystems is not unidirectional and one-dimensional
in nature but represents multi-directional and multi-dimensional
forces operating in a feedback system.
Abstract: This paper presents a new true RMS-to-DC converter
circuit based on a square-root-domain squarer/divider. The circuit is
designed by employing an up-down translinear loop and using
MOSFET transistors that operate in the strong-inversion saturation
region. The converter offers the advantages of two-quadrant input current,
low circuit complexity, low supply voltage (1.2 V), and immunity
from the body effect. The circuit has been simulated in HSPICE.
The simulation results conform to the theoretical analysis
and show the benefits of the proposed circuit.
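As a numeric reminder of what such a converter computes, the DC output of an ideal true-RMS stage is the square root of the mean square of the input; for a unit-amplitude sinusoid this is 1/√2. A minimal check (the sample count is an arbitrary choice):

```python
import math

def true_rms(samples):
    """DC output of an ideal RMS-to-DC converter: sqrt of the mean square."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full cycle of a unit-amplitude sinusoid, sampled 1000 times:
# its RMS value should be 1/sqrt(2) ~ 0.7071.
cycle = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
rms = true_rms(cycle)
```

The squarer/divider and the subsequent square-root extraction in the circuit realize exactly this square/average/root chain in the analog current domain.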