Abstract: Many artificial intelligence (AI) techniques are inspired
by problem-solving strategies found in nature. Robustness is a key
feature in many natural systems. This paper studies robustness in
artificial neural networks (ANNs) and proposes several novel, nature-inspired ANN architectures. The paper includes encouraging results
from experimental studies on these networks showing increased
robustness.
Abstract: The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the length of the center string and the positions of its mutated copies are not known in advance for any particular biological process. GCS can be solved by frequent-pattern-mining techniques and is known to be fixed parameter tractable in the input sequence length and the symbol-set size. The Bpriori family of algorithms solves GCS with reasonable time/space complexity; in particular, the Bpriori 2 and Bpriori 3-2 algorithms find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint-Based Frequent Pattern mining (CBFP) technique that integrates the ideas of constraint-based mining and FP-tree mining. CBFP not only solves the GCS problem for center strings of any length, but also reports the positions of all their mutated copies in the input sequences. It constructs a TRIE-like FP-tree to represent the mutated copies of center strings of any length, with constraints to restrain the growth of the consensus tree. The complexity of both the CBFP technique and the Bpriori algorithm is analyzed in the worst case and the average case, and the correctness of the algorithm is demonstrated against Bpriori on artificial data.
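The "explosion of potential candidates" the abstract mentions is visible in the naive formulation of the problem. The sketch below is a hypothetical brute-force baseline, not Bpriori or CBFP: it enumerates every string of length L over the alphabet (already |Σ|^L candidates) and keeps those that occur, within Hamming distance d, in every input sequence.

```python
from itertools import product

def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def center_strings(seqs, L, d, alphabet="ACGT"):
    """Brute force: all length-L center strings that occur within
    Hamming distance d as a substring of every input sequence.
    Illustrative only; cost grows as |alphabet|**L."""
    hits = []
    for cand in product(alphabet, repeat=L):
        cand = "".join(cand)
        if all(any(hamming(cand, s[i:i + L]) <= d
                   for i in range(len(s) - L + 1))
               for s in seqs):
            hits.append(cand)
    return hits
```

Pruning this candidate space, rather than enumerating it, is exactly what the FP-tree construction with constraints is meant to achieve.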
Abstract: In a virtual organization, a Knowledge Discovery (KD) service spans distributed data resources and computing grid nodes. A computational grid is integrated with a data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules over the grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit using the Message Passing Interface extended with Grid Services (MPICH-G2). The Knowledge Grid is built on top of the data and computational grids to support decision making in real-time applications. A case study describes the design and implementation of local and global mining of frequent itemsets. Experiments were conducted on different configurations of the grid network and the computation time was recorded for each operation. Analysis of the results across these configurations shows an almost superlinear speedup in computation time.
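The local mining step each grid node would perform can be sketched with standard level-wise Apriori on its own data partition; this is a minimal single-node sketch, with the MPICH-G2 distribution and global merging omitted.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining (local step on one partition)."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    # L1: frequent single items
    current = [frozenset([i]) for i in sorted(items)
               if support(frozenset([i])) >= min_support]
    frequent = list(current)
    k = 2
    while current:
        # join step, then prune by the Apriori property:
        # every (k-1)-subset of a candidate must itself be frequent
        cands = {a | b for a in current for b in current if len(a | b) == k}
        cands = {c for c in cands
                 if all(frozenset(s) in set(current)
                        for s in combinations(c, k - 1))}
        current = [c for c in sorted(cands, key=sorted)
                   if support(c) >= min_support]
        frequent += current
        k += 1
    return frequent
```

In the distributed setting, each node mines its partition this way and the per-node counts are then combined to decide global frequency.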
Abstract: This paper aims to improve web server performance by establishing a middleware layer between the web and database servers that minimizes the load on the database server. The middleware has been developed as a service whose main purpose is performance: it manages connection accesses so as to reduce the load on the database server. Beyond connection management, the system acts as an object-oriented model for making the best use of operating system resources. A web developer can use this Service Broker to improve web server performance.
Abstract: The application of synchronous dynamic random access memory (SDRAM) has long gone beyond the scope of personal computers. It comes in handy whenever a large amount of low-priced yet high-speed memory is needed, and most newly developed stand-alone embedded devices in the field of image, video and sound processing make increasing use of it. Large, inexpensive memory has its trade-off: speed. To exploit the full potential of the memory, an efficient controller is needed, where efficient means maximal random-access throughput for both reading and writing and minimal area after implementation. This paper proposes a target-device-independent DDR SDRAM pipelined controller and provides a performance comparison with available solutions.
Abstract: Selecting, from a set of target language words, the translation that conveys the correct sense of the source word and yields more fluent target language output is one of the core problems in machine translation. In this paper we compare three methods of estimating word translation probabilities for selecting the translation word in Thai – English machine translation: (1) a method based on the frequency of word translations, (2) a method based on the collocations of word translations, and (3) a method based on the Expectation Maximization (EM) algorithm. For evaluation we used Thai – English parallel sentences produced by NECTEC. The EM-based method outperforms the other two methods and gives satisfactory results.
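The abstract does not specify the EM formulation used; as a hedged illustration, an IBM-Model-1-style EM estimator of word translation probabilities t(e|f) from sentence pairs can be sketched as follows (the paper's exact model may differ).

```python
from collections import defaultdict

def em_translation_probs(pairs, iters=10):
    """Estimate t(e|f) from (source_words, target_words) sentence pairs
    by EM, IBM-Model-1 style: E-step distributes each target word's
    count over the source words it may align to; M-step renormalizes."""
    f_vocab = {f for fs, _ in pairs for f in fs}
    e_vocab = {e for _, es in pairs for e in es}
    t = {(e, f): 1.0 / len(e_vocab) for f in f_vocab for e in e_vocab}
    for _ in range(iters):
        count = defaultdict(float)
        total = defaultdict(float)
        for fs, es in pairs:
            for e in es:
                z = sum(t[(e, f)] for f in fs)   # E-step normalizer
                for f in fs:
                    c = t[(e, f)] / z
                    count[(e, f)] += c
                    total[f] += c
        for (e, f) in t:                          # M-step
            t[(e, f)] = count[(e, f)] / total[f] if total[f] else 0.0
    return t
```

On the classic two-pair toy corpus, a few iterations already concentrate probability on the co-occurring translations.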
Abstract: The conjugate gradient optimization algorithm is combined with a modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction in three steps: (1) the standard back propagation algorithm is modified by introducing a gain variation term in the activation function, (2) the gradient of the error is calculated with respect to the weights and the gain values, and (3) a new search direction is determined from the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm in the MATLAB neural network toolbox. The results show that the proposed method is computationally more efficient than the standard conjugate gradient algorithm.
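Steps (1) and (2) can be illustrated for a single sigmoid unit: a gain term is introduced into the activation, and the squared error is differentiated with respect to both the weight and the gain. This is a minimal sketch under that assumption, not the paper's CGFR/AG implementation.

```python
import math

def sigmoid(x, gain):
    """Sigmoid activation with a gain variation term."""
    return 1.0 / (1.0 + math.exp(-gain * x))

def grads(w, gain, x, target):
    """Analytic dE/dw and dE/dgain for one unit with
    E = 0.5 * (y - target)**2 and y = sigmoid(w * x, gain)."""
    net = w * x
    y = sigmoid(net, gain)
    delta = (y - target) * y * (1 - y)   # dE/dy * dy/d(gain*net)
    dE_dw = delta * gain * x             # chain rule through gain*net
    dE_dgain = delta * net
    return dE_dw, dE_dgain
```

Both gradients then feed the search-direction update of step (3); the weight gradient alone would reproduce plain back propagation.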
Abstract: An alternative to the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the parametric modeling technique. This method is suitable for problems in which the image can be modeled by explicit known source functions with a few adjustable parameters. Despite the reported success of modeling as an alternative MRI reconstruction technique, two important problems challenge its applicability: model order estimation and model coefficient determination. In this paper, five suggested criteria for evaluating the model order are assessed: the Final Prediction Error (FPE), the Akaike Information Criterion (AIC), the Residual Variance (RV), the Minimum Description Length (MDL) and the Hannan and Quinn (HNQ) criterion. The criteria were evaluated on MRI data sets using the Transient Error Reconstruction Algorithm (TERA). The result for each criterion is compared with that obtained by a fixed-order technique under three measures of similarity. The results show that MDL gives the highest similarity to the fixed-order technique.
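Three of the criteria named above (FPE, AIC, MDL) can be sketched in their common forms, where the order-k fit leaves residual variance v on n data points; the exact variants used in the paper may differ, and RV and HNQ follow the same select-the-minimum pattern.

```python
import math

def model_order(res_vars, n):
    """Pick the model order under each criterion, given the residual
    variance res_vars[k-1] of the order-k fit on n data points."""
    def best(score):
        return min(range(1, len(res_vars) + 1),
                   key=lambda k: score(res_vars[k - 1], k))
    # classic forms: penalize extra parameters against fit improvement
    aic = best(lambda v, k: n * math.log(v) + 2 * k)
    mdl = best(lambda v, k: n * math.log(v) + k * math.log(n))
    fpe = best(lambda v, k: v * (n + k) / (n - k))
    return {"AIC": aic, "MDL": mdl, "FPE": fpe}
```

MDL's stronger log(n) penalty makes it the most parsimonious, which is consistent with it agreeing best with a small fixed-order reconstruction.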
Abstract: Extracting thematic (semantic) roles is one of the
major steps in representing text meaning. It refers to finding the
semantic relations between a predicate and syntactic constituents in a
sentence. In this paper we present a rule-based approach to extract
semantic roles from Persian sentences. The system exploits a two-phase architecture to (1) identify the arguments and (2) label them
for each predicate.
For the first phase we developed a rule-based shallow parser to chunk Persian sentences, and for the second phase we developed a
knowledge-based system to assign 16 selected thematic roles to the
chunks. The experimental results of testing each phase are shown at
the end of the paper.
Abstract: A logic model for analyzing the stability of complex systems is useful in many areas of science. In the real world we are inspired by natural phenomena such as the "biosphere", the "food chain" and "ecological balance". Through research and practice, and taking advantage of the orthogonality and symmetry defined by the theory of multilateral matrices, we put forward a logic analysis model of the stability of complex systems with three relations, and prove it mathematically. This logic model is usually successful in analyzing the stability of a complex system. Its structure is clear and simple, and it can easily be used to study and solve many stability problems of complex systems. Some examples are given as applications.
Abstract: QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while using network resources efficiently; the underlying routing algorithm must be able to find low-cost paths that satisfy given QoS constraints. The problem of finding a least-cost route subject to such constraints is known to be NP-hard, and several algorithms have been proposed to find near-optimal solutions. These heuristics, however, either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in execution time to be applicable to large networks. In this paper we present an algorithm, optimized Delay Constrained Routing (ODCR), that finds a near-optimal solution fast. ODCR uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution much more quickly.
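ODCR itself is not specified in the abstract; as a baseline sketch, the underlying delay-constrained least-cost problem can be solved exactly by an exhaustive search with simple cost pruning. This is exponential in general, which is precisely why heuristics like ODCR that restrict the search space are needed.

```python
def least_cost_delay_constrained(graph, src, dst, max_delay):
    """Exact least-cost src->dst path with total delay <= max_delay.
    graph[u] = [(v, cost, delay), ...]. Returns (cost, path), or
    (inf, None) if no feasible path exists. Exponential worst case."""
    best = [float("inf"), None]

    def dfs(u, cost, delay, path):
        if cost >= best[0] or delay > max_delay:
            return                       # prune: not cheaper, or over budget
        if u == dst:
            best[0], best[1] = cost, path
            return
        for v, c, d in graph.get(u, []):
            if v not in path:            # simple paths only
                dfs(v, cost + c, delay + d, path + [v])

    dfs(src, 0, 0, [src])
    return best[0], best[1]
```

A heuristic like ODCR would replace the exhaustive branch step with an adaptive weight function combining cost and delay, trading exactness for speed.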
Abstract: This paper presents and evaluates a new classification method that aims to improve classifier performance and speed up training. The proposed approach, called labeled classification, seeks to improve the convergence of the BP (back propagation) algorithm by adding an extra feature (a label) to every training example; to classify a new example, tests are carried out for each label. The main advantage of this approach is its simplicity of implementation, since no modifications to the training algorithms are required, so it can be combined with other acceleration and stabilization techniques. In this work, two models of labeled classification are proposed: the LMLP (Labeled Multi Layered Perceptron) and the LNFC (Labeled Neuro Fuzzy Classifier). These models are evaluated on the Iris, wine, texture and human thigh databases.
Abstract: The Unified Modeling Language (UML) is one of the important modeling languages used for the visual representation of a research problem. In this paper, a UML model is designed for an instruction pipeline, which is used to evaluate the instructions of software programs. Class and sequence diagrams are designed, and the performance is evaluated for the instructions of a sample program through a case study.
Abstract: Text processing systems allow their users to search a given text for a pattern string, and string matching is fundamental to database and text processing applications. Every text editor must contain a mechanism to search the current document for arbitrary strings, and spelling checkers scan an input text for words in the dictionary, rejecting any strings that do not match. We store information in databases so that it can be retrieved later, and this retrieval relies on string matching algorithms. This paper describes a new string matching algorithm for such applications, designed with the help of the Rabin-Karp matcher to improve the string matching process.
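The Rabin-Karp matcher the new algorithm builds on can be sketched as follows: a rolling hash lets each window be compared in constant expected time, with an explicit character comparison only on a hash hit.

```python
def rabin_karp(text, pattern, base=256, mod=1_000_003):
    """Return the start indices of every occurrence of pattern in text,
    using a rolling hash to filter candidate windows."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    h = pow(base, m - 1, mod)            # weight of the window's leading char
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        # verify on hash match to rule out spurious collisions
        if p_hash == t_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:                    # roll the window one char right
            t_hash = ((t_hash - ord(text[i]) * h) * base
                      + ord(text[i + m])) % mod
    return hits
```

Expected running time is O(n + m) with a good modulus, degrading only when many hash collisions force verification.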
Abstract: The computer game industry has experienced exponential growth in recent years. A game is a recreational activity involving one or more players. Game input is information, such as data and commands, passed to the game system at run time from an external source; conversely, game output is information generated by the game system and passed to an external target but not used internally by the game. This paper identifies a new classification scheme for game input and output based on the player's input and output, and uses it to develop a relationship table for the game input and output classifiers.
Abstract: This research uses computational linguistics, an area of study that employs computers to process natural language, and aims to discern the patterns in declarative sentences used in technical texts. The approach is mathematical, and the focus is on instructional texts found on web pages. The technique developed by the author, named the MAYA Semantic Technique, is organized into four stages. In the first stage, the parts of speech in each sentence are identified. In the second stage, the subject of the sentence is determined. In the third stage, MAYA performs a frequency analysis on the remaining words to determine the verb and its object. In the fourth stage, MAYA performs a statistical analysis to determine the content of the web page. The advantage of the MAYA Semantic Technique lies in its use of mathematical principles to represent grammatical operations, which improves processing and accuracy when applied to unambiguous text. The MAYA Semantic Technique is part of a proposed architecture for an entire web-based intelligent tutoring system. On a sample set of sentences, partial semantics derived using the MAYA Semantic Technique were approximately 80% accurate. The system currently processes technical text in one domain, namely Cµ programming, in which all the keywords and programming concepts are known and understood.
Abstract: Nowadays, organizations and businesses have several motivating factors to protect an individual's privacy. Confidentiality concerns how information is shared with third parties; it always refers to private information, especially personal information that usually needs to be kept private. Because of the importance of privacy concerns today, we need to design database systems that support privacy. Agrawal et al. introduced the Hippocratic Database, which we refer to here as a privacy-aware database. This paper explains how the Hippocratic Database can be a future trend for web-based applications to enhance their level of trustworthiness regarding privacy among internet users.
Abstract: Reverse engineering is an important process in software engineering. It works backwards through the system development life cycle (SDLC) to recover the source data or representations of a system through analysis of its structure, function and operation. We use reverse engineering to introduce an automated tool that generates system requirements from program source code. The tool accepts Cµ programming source code, scans it line by line and passes it to a parser. The engine of the tool then generates the system requirements for that specific program to facilitate its reuse and enhancement. The purpose of the tool is to help recover the system requirements of a system when its system requirements document (SRD) does not exist because the system was left undocumented.
Abstract: Information sharing and gathering are important in this era of rapid technological advancement. The WWW has caused an explosion of information, and readers overloaded with too many lengthy text documents are more interested in shorter versions. The oil and gas industry cannot escape this predicament. In this paper, we develop an automated text summarization system, AutoTextSumm, that extracts the salient points of oil and gas drilling articles by combining a statistical approach, keyword identification, synonym words and sentence position. In this study we conducted interviews with petroleum engineering experts and English language experts to identify the most commonly used keywords in the oil and gas drilling domain. The performance of AutoTextSumm is evaluated using the formulae for precision, recall and F-score. Based on the experimental results, AutoTextSumm produced satisfactory performance with an F-score of 0.81.
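The evaluation formulae mentioned above can be sketched for an extractive summarizer, scoring the set of extracted sentence ids against a reference set.

```python
def summary_f_score(extracted, reference):
    """Precision, recall and F-score of an extractive summary, given
    the extracted and reference sets of sentence ids."""
    extracted, reference = set(extracted), set(reference)
    tp = len(extracted & reference)               # correctly extracted
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

An F-score of 0.81, as reported, would mean the harmonic mean of precision and recall against the expert reference summaries is 0.81.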
Abstract: Hand gesture is one of the typical methods used in
sign language for non-verbal communication. It is most commonly
used by people who have hearing or speech problems to
communicate among themselves or with normal people. Various sign
language systems have been developed by manufacturers around the
globe but they are neither flexible nor cost-effective for the end
users. This paper presents a system prototype that is able to
automatically recognize sign language to help normal people to
communicate more effectively with the hearing or speech impaired
people. The Sign to Voice system prototype, S2V, was developed using a feed-forward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured from a video camera and used to train the neural network for classification. The experimental results show that the neural network achieves satisfactory results for sign-to-voice translation.