Abstract: The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that influence the network parameters. This work attempts to design a neural network that includes an additional mechanism for adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first produces the required output for trained patterns with predefined settings, while the second generates output dynamically, with the capability of tuning the threshold values for any newly applied input. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net of traditional architecture that dynamically adjusts the threshold values of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs were quite satisfactory. The supportive-layer approach achieved a recognition rate of over 90%, while the multiple-network technique showed a more effective and acceptable level of recognition, though at the price of network complexity and computation time. Recognition generalization may be further improved by combining these innate structures with additional intelligence capabilities, at the cost of further advanced learning phases.
Abstract: The Genetic Algorithm (GA) is one of the most important methods used to solve many combinatorial optimization problems. Many researchers have therefore tried to improve the GA with different methods and operations in order to find the optimal solution within a reasonable time. This paper proposes an improved GA (IGA) in which a new crossover operation, a population reformulation operation, a multi-mutation operation, a partial local optimal mutation operation, and a rearrangement operation are used to solve the Traveling Salesman Problem. The proposed IGA was then compared with three GAs that use different crossover and mutation operations. The results of this comparison show that the IGA achieves better solutions in less time.
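The abstract does not specify the IGA's new operators in detail. As a point of reference, a minimal baseline GA for the TSP, using order crossover (OX) and swap mutation of the kind the IGA's operators would replace, can be sketched as follows (function names, population size and rates are illustrative, not the paper's):

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    """Order crossover (OX): copy a random slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(tour, rate=0.2):
    """With probability `rate`, exchange two randomly chosen cities."""
    tour = tour[:]
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def ga_tsp(dist, pop_size=30, generations=200):
    """Elitist GA: keep the better half, breed children from it."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        elite = pop[: pop_size // 2]  # selection: keep the better half
        children = [swap_mutation(order_crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, dist))
```

Any of the IGA's operators (multi-mutation, rearrangement, etc.) would slot into the breeding step of `ga_tsp` in place of the two operators shown.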
Abstract: Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using vector quantization, based on the LBG, KPE and FCG algorithms. The results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, the Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance than the others in terms of mean absolute error, AFCSS and complexity.
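The paper's LBG, KPE and FCG variants are not detailed in the abstract; the shared idea behind LBG-style codebook training is the generalized Lloyd iteration, which can be sketched for scalar samples as follows (the scalar case and all parameter values are illustrative simplifications):

```python
import random

def lbg_codebook(samples, codebook_size, epochs=20):
    """Generalized Lloyd (LBG-style) training of a scalar codebook:
    alternate nearest-codeword assignment and centroid update."""
    codebook = random.sample(samples, codebook_size)
    for _ in range(epochs):
        cells = [[] for _ in codebook]
        for x in samples:
            nearest = min(range(len(codebook)), key=lambda i: abs(x - codebook[i]))
            cells[nearest].append(x)
        # move each codeword to the centroid of its cell; keep it if the cell is empty
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

def quantize(x, codebook):
    """Compression step: replace a sample by its nearest codeword."""
    return min(codebook, key=lambda c: abs(x - c))
```

Real speech VQ operates on vectors of consecutive samples rather than scalars, but the assignment/update alternation is the same.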
Abstract: The most important subtype of non-Hodgkin's lymphoma is the Diffuse Large B-Cell Lymphoma. Approximately 40% of the patients suffering from it respond well to therapy, whereas the remainder need more aggressive treatment in order to improve their chances of survival. Data Mining techniques have helped to identify the class of the lymphoma in an efficient manner. Despite that, thousands of genes must be processed to obtain the results. This paper presents a comparison of several attribute selection methods aimed at reducing the number of genes to be searched, looking for a more effective procedure as a whole.
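The specific attribute selection methods compared are not named in the abstract. A minimal example of the filter family of such methods, scoring each gene by how well it separates the two classes (a t-statistic-like score; all names are illustrative), might look like:

```python
def t_like_score(values, labels):
    """Filter score for one gene: separation of class means relative to spread."""
    a = [v for v, y in zip(values, labels) if y == 0]
    b = [v for v, y in zip(values, labels) if y == 1]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / max(len(xs) - 1, 1)
    ma, mb = mean(a), mean(b)
    pooled = (var(a, ma) + var(b, mb)) ** 0.5 or 1e-12  # guard zero spread
    return abs(ma - mb) / pooled

def select_top_genes(expr, labels, k):
    """expr: genes x samples matrix; keep indices of the k best-scoring genes."""
    scores = [(t_like_score(row, labels), i) for i, row in enumerate(expr)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

Reducing thousands of genes to the top-k by such a score is what makes the downstream classifier tractable.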
Abstract: In Content-Based Image Retrieval systems it is important to use an efficient indexing technique in order to perform and accelerate searches in huge databases. The indexing technique should also support the high dimensionality of image features. In this paper we present the hierarchical index NOHIS-tree (Non-Overlapping Hierarchical Index Structure) and evaluate it when scaling up to very large databases. We also present a study of the influence of clustering on search time. The performance test results show that NOHIS-tree performs better than the SR-tree, and that NOHIS-tree maintains its performance in high-dimensional spaces. We include a performance test that tries to determine the number of clusters in NOHIS-tree that gives the best search time.
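The NOHIS-tree structure itself is not described in the abstract. The general principle it shares with other cluster-based indexes, partitioning the database so that a query only searches the nearest cluster(s), can be illustrated with a toy flat-clustered index (this is a generic k-means sketch, not NOHIS-tree, and the pruned search is approximate):

```python
import math, random

def build_index(points, n_clusters=4, iters=10):
    """Toy cluster index: k-means partition; each cluster keeps its points."""
    centroids = random.sample(points, n_clusters)
    for _ in range(iters):
        cells = [[] for _ in centroids]
        for p in points:
            cells[min(range(n_clusters),
                      key=lambda i: math.dist(p, centroids[i]))].append(p)
        # recompute centroids; keep the old one if a cell emptied
        centroids = [[sum(d) / len(c) for d in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(cells)]
    return centroids, cells

def nearest(query, centroids, cells):
    """Search only the closest non-empty cluster: fast, but may miss the
    true nearest neighbour if it sits in an adjacent cluster."""
    order = sorted(range(len(centroids)),
                   key=lambda j: math.dist(query, centroids[j]))
    i = next(j for j in order if cells[j])
    return min(cells[i], key=lambda p: math.dist(query, p))
```

The abstract's clustering-versus-search-time study corresponds to varying `n_clusters` here: more clusters mean smaller cells to scan but a higher risk of pruning away the true neighbour.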
Abstract: This paper presents the development of a recurrent neural network-based fuzzy inference system for the identification and control of a dynamic nonlinear plant. The structure and algorithms of the fuzzy system based on a recurrent neural network are described. A supervised learning algorithm is used to train the unknown parameters of the system; as a result of learning, the rules of the neuro-fuzzy system are formed. The neuro-fuzzy system is then used for the identification and control of the nonlinear dynamic plant. The simulation results of identification and control systems based on the recurrent neuro-fuzzy network are compared with the simulation results of other neural systems, and it is found that the recurrent neuro-fuzzy system performs better than the others.
Abstract: In a deregulated operating regime, power system security is an issue that requires due attention from researchers, given the unbundling of generation and transmission. Electric power systems are exposed to various contingencies. Network contingencies often contribute to overloading of branches and violation of voltages, leading to security and stability problems. To maintain the security of the system, it is desirable to estimate the effect of contingencies so that pertinent control measures can be taken to improve system security. This paper presents the application of a particle swarm optimization algorithm to find the optimal locations of multiple types of FACTS devices in a power system in order to eliminate or alleviate line overloads. The optimization is performed over the locations of the devices, their types, their settings and the installation cost of the FACTS devices, for single and multiple contingencies. TCSC, SVC and UPFC devices are considered and modeled for steady-state analysis, and suitable locations for the UPFC and TCSC are selected on the basis of improved system security. The effectiveness of the proposed method is tested on the IEEE 6-bus and IEEE 30-bus test systems.
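The paper's objective function (overload relief plus installation cost) is not given in the abstract; the particle swarm optimizer it relies on is standard, and its core update can be sketched as follows, here minimizing a placeholder objective over continuous decision variables (all parameter values are conventional defaults, not the paper's):

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm: velocities pulled toward personal and global bests."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the particle inside the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the FACTS-placement setting, a particle would encode device locations, types and settings, and `objective` would run a power-flow evaluation of overloads and cost.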
Abstract: With a growing number of digital libraries and other
open education repositories being made available throughout the
world, effective search and retrieval tools are necessary to access the
desired materials that surpass the effectiveness of traditional, all-inclusive
search engines. This paper discusses the design and use of
Folksemantic, a platform that integrates OpenCourseWare search,
Open Educational Resource recommendations, and social network
functionality into a single open source project. The paper describes
how the system was originally envisioned, its goals for users, and
data that provides insight into how it is actually being used. Data
sources include website click-through data, query logs, web server
log files and user account data. Based on a descriptive analysis of its
current use, modifications to the platform's design are recommended
to better address goals of the system, along with recommendations
for additional phases of research.
Abstract: DNA microarray technology concurrently monitors the expression levels of thousands of genes during significant biological processes and across related samples. A better understanding of functional genomics is obtained by extracting the patterns hidden in gene expression data. This is handled by clustering, which reveals natural structures and identifies interesting patterns in the underlying data. In the proposed work, clustering of gene expression data is done through an Advanced Nelder-Mead (ANM) algorithm. The Nelder-Mead (NM) method is a method designed for optimization, in which the vertices of a triangle are considered as the candidate solutions and several operations are performed on this triangle to obtain a better result. In the proposed work, the reflection and expansion operations are eliminated and a new operation called spread-out is introduced. The spread-out operation increases the global search area and thus provides a better optimization result: it produces three points, and the best among these three points is used to replace the worst point. The experimental results are analyzed with optimization benchmark test functions and gene expression benchmark datasets, and show that ANM outperforms NM on both benchmarks.
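The spread-out operation is not specified closely enough in the abstract to reproduce; the classic Nelder-Mead triangle operations it replaces can be sketched for two dimensions as follows (a simplified teaching variant with reflection, expansion and shrink only; names and iteration count are illustrative):

```python
def nelder_mead_2d(f, simplex, iters=200):
    """Classic Nelder-Mead on a triangle: reflect the worst vertex through the
    centroid of the other two; expand if that helps, shrink otherwise."""
    for _ in range(iters):
        simplex.sort(key=f)                       # order: best, good, worst
        best, good, worst = simplex
        cx = [(best[0] + good[0]) / 2, (best[1] + good[1]) / 2]
        refl = [2 * cx[0] - worst[0], 2 * cx[1] - worst[1]]
        if f(refl) < f(best):
            exp = [3 * cx[0] - 2 * worst[0], 3 * cx[1] - 2 * worst[1]]
            simplex[2] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(worst):
            simplex[2] = refl
        else:                                     # shrink toward the best vertex
            simplex[1] = [(good[0] + best[0]) / 2, (good[1] + best[1]) / 2]
            simplex[2] = [(worst[0] + best[0]) / 2, (worst[1] + best[1]) / 2]
    return min(simplex, key=f)
```

ANM, as described, would replace the reflection/expansion branch with its spread-out step, generating three candidate points and keeping the best as the worst vertex's replacement.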
Abstract: In this paper, we present some new upper bounds for the spectral radius of iterative matrices based on the concept of a doubly α-diagonally dominant matrix. Subsequently, we give two examples to show that our results are better than earlier ones.
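The paper's doubly α-diagonally dominant bounds are not reproduced in the abstract. The general idea of comparing an easily computed upper bound against the true spectral radius can be illustrated with the classical row-sum bound ρ(A) ≤ ||A||∞ (the example matrix and the power-iteration estimator below are illustrative, not the paper's):

```python
def inf_norm(A):
    """Row-sum (infinity) norm: a classical upper bound on the spectral radius."""
    return max(sum(abs(x) for x in row) for row in A)

def spectral_radius(A, iters=500):
    """Estimate the spectral radius by power iteration (adequate for the
    nonnegative matrices typical of iteration schemes, where the dominant
    eigenvalue is real and nonnegative)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0:
            return 0.0
        v = [x / lam for x in w]   # renormalize so the largest entry is 1
    return lam
```

A sharper bound (such as those derived from diagonal dominance) is one that sits closer to `spectral_radius(A)` than `inf_norm(A)` does.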
Abstract: The creation and maintenance of knowledge management systems has been recognized as an important research area. However, the lack of accurate results from knowledge management systems limits organizations in applying their knowledge management processes. This leads to a failure to get the right information to the right people at the right time, followed by deficiencies in decision-making processes. An intranet offers a powerful tool for communication and collaboration, for presenting data and information, and for creating and sharing knowledge, all in one easily accessible place. This paper proposes an archetype describing how a knowledge management system, with the support of intranet capabilities, could greatly increase the accuracy of capturing, storing and retrieving knowledge-based processes, thereby increasing the efficiency of the system. The system requires a critical mass of usage by its users for the intranet to function as a knowledge management system. This prototype would lead to the design of an application that supports the creation and maintenance of an effective knowledge management system through an intranet. The aim of this paper is to introduce an effective system for capturing, storing and distributing knowledge in a form that avoids the failures found in most existing systems. The methodology used in the system requires all the employees in the organization to contribute fully in order to make the system a success. The system is still at an initial stage, and the authors are in the process of implementing the ideas described here in practice to produce satisfactory results.
Abstract: Electrical discharge machining (EDM) is a well-established machining technique mainly used to machine complex geometries on difficult-to-machine materials and high-strength temperature-resistant alloys. The objective of the present research is to study the shape of the electrode and establish the application of liquid nitrogen in reducing distortion of the electrode during electrical discharge machining of M2-grade high-speed steel using copper electrodes. A roundness study was performed on the electrode to observe its shape for both conventional EDM and EDM with a cryogenically cooled electrode, and a Scanning Electron Microscope (SEM) was used to study the shape of the electrode tip. The effect of parameters such as discharge current and pulse-on time was studied to understand the distortion behavior of the electrode. It is concluded that shape retention is better in the case of the liquid-nitrogen-cooled electrode.
Abstract: The identification of cancer genes that might anticipate the clinical behavior of different types of cancer is challenging due to the huge number of genes and the small number of patient samples. A new method is proposed based on supervised classification learning with support vector machines (SVMs). The solution introduces the Maximized Margin (MM) into the subset criterion, which permits approaching the minimum generalization error rate. In class prediction problems, gene selection is essential to improve accuracy and to identify the genes relevant to the cancer. The performance of the new method was evaluated in experiments with real-world data and gives better classification accuracy.
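The paper's Maximized Margin subset criterion is not spelled out in the abstract. The common margin-based idea of ranking genes by the magnitude of a linear SVM's weights can be sketched as follows (a subgradient-descent hinge-loss trainer and a single-fit ranking shortcut; all names and hyperparameters are illustrative):

```python
def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Linear SVM via subgradient descent on the hinge loss; y in {-1, +1}."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point inside the margin: hinge gradient active
                w = [wj - lr * (wj / C - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only the L2 regularizer acts
                w = [wj - lr * wj / C for wj in w]
    return w, b

def rank_genes_by_weight(X, y, k):
    """Rank features by |w| from one SVM fit; genes with large weights
    contribute most to the separating margin."""
    w, _ = train_linear_svm(X, y)
    return sorted(range(len(w)), key=lambda j: -abs(w[j]))[:k]
```

Recursive variants refit the SVM after discarding the lowest-|w| genes at each round, which tends to produce a more reliable subset than a single fit.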
Abstract: Mammalian genomes contain a large number of retroelements (SINEs, LINEs and LTRs) which can affect the expression of protein-coding genes through associated transcription factor binding sites (TFBS). The activity of retroelement-associated TFBS in many genes has been confirmed experimentally, but their global functional impact remains unclear. Human SINEs (Alu repeats) and mouse SINEs (B1 and B2 repeats) are known to be clustered in GC-rich, gene-rich genome segments, consistent with the view that they can contribute to the regulation of gene expression. We have shown earlier that Alu are involved in the formation of cis-regulatory modules (clusters of TFBS) in human promoters, and other authors have reported that Alu located near promoter CpG islands have an increased frequency of CpG dinucleotides, suggesting that these Alu are undermethylated. Human Alu and mouse B1/B2 elements have an internal bipartite promoter for RNA polymerase III containing a conserved sequence motif called the B-box, which can bind the basal transcription complex TFIIIC. It has recently been shown that TFIIIC binding to the B-box leads to the formation of a boundary which limits the spread of repressive chromatin modifications in S. pombe. SINE-associated B-boxes may have a similar function, but the conservation of TFIIIC binding sites in SINEs located near mammalian promoters has not been studied before. Here we analysed the abundance and distribution of retroelements (SINEs, LINEs and LTRs) in annotated sequences of the Database of mammalian transcription start sites (DBTSS). The fractions of SINEs in human and mouse promoters are slightly lower than in the genome as a whole, but >40% of human and mouse promoters contain Alu or B1/B2 elements within the -1000 to +200 bp interval relative to the transcription start site (TSS). Most of these SINEs are associated with the distal segments of promoters (-1000 to -200 bp relative to the TSS), indicating that their insertion at distances >200 bp upstream of the TSS is tolerated during evolution. The distribution of SINEs in promoters correlates negatively with the distribution of CpG sequences. Using an analysis of the abundance of 12-mer motifs from the B1 and Alu consensus sequences in the genome and in DBTSS, we confirmed that some subsegments of Alu and B1 elements are poorly conserved, which depends in part on the presence of CpG dinucleotides. One of these CpG-containing subsegments in B1 elements overlaps with the SINE-associated B-box, and it shows better conservation in DBTSS than in genomic sequences. We also studied the conservation, in DBTSS and in the genome, of the B-box-containing segments of old (AluJ, AluS) and young (AluY) Alu repeats, and found that the CpG sequence of the B-box of old Alu is better conserved in DBTSS than in the genome. This indicates that B-box-associated CpGs in promoters are better protected from methylation and mutation than B-box-associated CpGs in genomic SINEs. These results are consistent with the view that potential TFIIIC binding motifs in SINEs associated with human and mouse promoters may be functionally important. These motifs may protect promoters from repressive histone modifications spreading from adjacent sequences. This can potentially explain the well-known clustering of SINEs in GC-rich, gene-rich genome compartments and the existence of unmethylated CpG islands.
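The abstract's 12-mer abundance comparison between DBTSS and the genome can be sketched at its computational core, sliding-window k-mer counting and a frequency ratio (the functions and the two-sequence setup are illustrative; the actual pipeline, consensus motifs and normalization are not reproduced here):

```python
from collections import Counter

def kmer_abundance(seq, k=12):
    """Count occurrences of every k-mer in a DNA sequence (sliding window)."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def motif_enrichment(motif, target_seq, background_seq):
    """Ratio of per-position motif frequency in a target sequence (e.g. promoter
    set) versus a background sequence (e.g. whole genome)."""
    k = len(motif)
    t = kmer_abundance(target_seq, k)[motif] / max(len(target_seq) - k + 1, 1)
    b = kmer_abundance(background_seq, k)[motif] / max(len(background_seq) - k + 1, 1)
    return t / b if b else float("inf")
```

An enrichment above 1 for a B-box-derived 12-mer in promoter sequences relative to the genome is the kind of signal the abstract interprets as better conservation near TSS.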
Abstract: The radius of curvature (ROC) defines the degree of curvature along the centerline of a roadway that a travelling vehicle must follow. Roadway designs must encompass ROC to mitigate the cost of earthwork associated with construction while also allowing vehicles to travel at the maximum allowable design speeds. Thus, a road will tend to follow natural topography where possible, but curvature must also be optimized to permit fast but safe vehicle speeds: the more severe the curvature of the road, the slower the permissible vehicle speed. For route planning, whether for urban settings, emergency operations, or parcel delivery, ROC is a necessary attribute of road arcs for computing travel time. It is extremely rare for a geo-spatial database to contain ROC. This paper presents a procedure and mathematical algorithm to calculate and assign ROC to a segment pair and/or polyline.
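The paper's exact algorithm is not given in the abstract, but the standard geometric core of such a computation is the circumscribed-circle radius through three consecutive centerline points, R = abc / (4·Area) (the function below is a generic sketch of that formula, not necessarily the paper's procedure):

```python
import math

def radius_of_curvature(p1, p2, p3):
    """Circumscribed-circle radius through three consecutive centerline points:
    R = (a*b*c) / (4*Area); returns infinity for collinear points (a straight
    segment has no curvature)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # twice the signed triangle area via the 2-D cross product
    cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
             - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    area = abs(cross) / 2
    return float("inf") if area == 0 else (a * b * c) / (4 * area)
```

Sliding this three-point window along a polyline yields a per-vertex ROC estimate that can then be assigned to the corresponding road arc.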
Abstract: In this paper we present a study of the impact of connection schemes on the performance of iterative decoding of Generalized Parallel Concatenated Block (GPCB) codes constructed from one-step majority logic decodable (OSMLD) codes, and we propose a new connection scheme for decoding them. All iterative decoding connection schemes use a soft-input soft-output threshold decoding algorithm as the component decoder. Numerical results for GPCB codes transmitted over an Additive White Gaussian Noise (AWGN) channel are provided. They show that the proposed scheme is better than Hagenauer's scheme and Lucas's scheme [1], and slightly better than Pyndiah's scheme.
Abstract: Realistic systems generally have multiple inputs and multiple outputs (MIMO). Such systems usually prove complex and difficult for modeling and control purposes. Therefore, decomposition was used to separate individual input-output pairs, and a PID controller is assigned to each pair to achieve the desired settling time. Suitable PID parameters were obtained with a Genetic Algorithm (GA) using a Mean Squared Error (MSE) objective function.
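The MSE objective the GA minimizes can be made concrete with a discrete PID loop on a simple plant (the first-order plant, time constant and gains below are illustrative placeholders, not the paper's system):

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=1000):
    """Discrete PID driving a first-order plant dy/dt = (-y + u) / tau;
    returns the mean squared tracking error, i.e. the GA's fitness value."""
    tau = 0.5
    y = integral = prev_error = 0.0
    sse = 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        prev_error = error
        y += dt * (-y + u) / tau          # Euler step of the plant
        sse += error ** 2
    return sse / steps
```

A GA would then search over (kp, ki, kd) triples, using `simulate_pid` as the objective, with one such search per decomposed input-output pair.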
Abstract: This paper presents a comparison among methods for determining the coefficients of the characteristic polynomial. First, the resulting systems are compared on frequency-domain criteria such as the closed-loop bandwidth and the gain and phase margins. Then the step responses of the resulting systems are compared on the basis of transient-behavior criteria including overshoot, rise time, settling time and error (via the IAE, ITAE, ISE and ITSE integral indices). The relative stability of the systems is also compared. Finally, the best choices with regard to these diverse criteria are presented.
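The four integral error indices named above have standard definitions (IAE = ∫|e|dt, ITAE = ∫t|e|dt, ISE = ∫e²dt, ITSE = ∫t·e²dt) and can be approximated from a sampled error signal as follows (a simple rectangle-rule sketch; the sampling setup is illustrative):

```python
def error_indices(errors, dt):
    """Discrete rectangle-rule approximations of the IAE, ITAE, ISE and ITSE
    integral performance indices for a sampled error signal e[i] = e(i*dt)."""
    iae = sum(abs(e) for e in errors) * dt
    itae = sum(i * dt * abs(e) for i, e in enumerate(errors)) * dt
    ise = sum(e * e for e in errors) * dt
    itse = sum(i * dt * e * e for i, e in enumerate(errors)) * dt
    return iae, itae, ise, itse
```

The time-weighted indices (ITAE, ITSE) penalize errors that persist late in the response, which is why they favor well-damped designs over fast but oscillatory ones.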
Abstract: This paper summarizes and compares approaches to
solving the knapsack problem and its known application in capital
budgeting. The first approach uses deterministic methods and can be
applied to small-size tasks with a single constraint. We can also
apply commercial software systems such as the GAMS modelling
system. However, because of NP-completeness of the problem, more
complex problem instances must be solved by means of heuristic
techniques to achieve an approximation of the exact solution in a
reasonable amount of time. We show the problem representation and
parameter settings for a genetic algorithm framework.
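The deterministic approach mentioned for small single-constraint instances is classically the dynamic-programming solution of the 0/1 knapsack, which can be sketched as follows (in capital budgeting, values are project returns, weights are capital requirements, and the capacity is the budget):

```python
def knapsack(values, weights, capacity):
    """Exact 0/1 knapsack by dynamic programming over capacities:
    best[c] = best achievable value within capacity c."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse scan: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

This runs in O(n·capacity) time, which is pseudo-polynomial; it is exactly this blow-up on large or multi-constraint instances that motivates the heuristic (e.g. genetic-algorithm) techniques discussed above.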
Abstract: Motivated by the recent work of Herbert, Hayen, Macaskill and Walter [Interval estimation for the difference of two independent variances. Communications in Statistics, Simulation and Computation, 40: 744-758, 2011], we investigate new confidence intervals for the difference between two normal population variances based on the generalized confidence interval of Weerahandi [Generalized Confidence Intervals. Journal of the American Statistical Association, 88(423): 899-905, 1993] and the closed-form method of variance estimation of Zou, Huo and Taleban [Simple confidence intervals for lognormal means and their differences with environmental applications. Environmetrics, 20: 172-180, 2009]. Monte Carlo simulation results indicate that our proposed confidence intervals give better coverage probability than the existing confidence interval. The two new confidence intervals perform similarly in terms of coverage probability and average interval width.
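The Monte Carlo coverage methodology used in such comparisons can be illustrated on the simpler single-variance case (this is the textbook chi-square interval for one normal variance, not the paper's generalized interval for a difference of variances; quantiles are themselves estimated by simulation to keep the sketch dependency-free):

```python
import random, statistics

def chi2_quantiles(df, probs, n_sim=20000):
    """Monte Carlo estimate of chi-square quantiles (sum of df squared normals)."""
    draws = sorted(sum(random.gauss(0, 1) ** 2 for _ in range(df))
                   for _ in range(n_sim))
    return [draws[int(p * n_sim)] for p in probs]

def variance_ci(sample, lo_q, hi_q):
    """Equal-tailed CI for a normal variance: ((n-1)s^2/hi_q, (n-1)s^2/lo_q)."""
    n = len(sample)
    s2 = statistics.variance(sample)
    return ((n - 1) * s2 / hi_q, (n - 1) * s2 / lo_q)

def coverage(true_var=4.0, n=15, level=0.95, trials=2000):
    """Fraction of simulated samples whose interval contains the true variance;
    a well-calibrated 95% interval should score close to 0.95."""
    lo_q, hi_q = chi2_quantiles(n - 1, [(1 - level) / 2, (1 + level) / 2])
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(0, true_var ** 0.5) for _ in range(n)]
        lo, hi = variance_ci(sample, lo_q, hi_q)
        hits += lo <= true_var <= hi
    return hits / trials
```

Comparing candidate intervals then amounts to tabulating each method's estimated coverage against the nominal level together with its average interval width, as the abstract describes.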