Abstract: There exists an injective, information-preserving function
that maps a semantic network (i.e. a directed labeled network)
to a directed network (i.e. a directed unlabeled network). The edge
label in the semantic network is represented as a topological feature
of the directed network. Also, there exists an injective function that
maps a directed network to an undirected network (i.e. an undirected
unlabeled network). The edge directionality in the directed network
is represented as a topological feature of the undirected network.
Through function composition, there exists an injective function that
maps a semantic network to an undirected network. Thus, aside from
space constraints, the semantic network construct does not have any
modeling functionality that is not possible with either a directed
or undirected network representation. Two proofs of this idea will
be presented. The first is a proof of the aforementioned function
composition concept. The second is a simpler proof involving an
undirected binary encoding of a semantic network.
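The encoding idea can be illustrated with a small gadget construction (a hypothetical sketch for intuition only, not the proof presented in the abstract): each directed labeled edge becomes a purely topological pattern in an undirected unlabeled graph, with a tail node marked by one pendant vertex and a head node marked by label+1 pendant vertices. Names such as `encode`, `decode`, and the `g…` gadget labels are assumptions of this sketch.

```python
import itertools

def encode(vertices, edges):
    """Map a semantic network (directed edges with integer labels >= 1)
    to an undirected, unlabeled graph. Each edge (u, v, k) becomes a
    gadget u - t - h - v, where t carries 1 pendant vertex (marking the
    tail, hence the direction) and h carries k + 1 pendant vertices
    (encoding the label). Assumes a simple graph; the fresh gadget names
    'g0', 'g1', ... are assumed not to collide with vertex names."""
    fresh = itertools.count()
    adj = {v: set() for v in vertices}
    def link(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for u, v, k in edges:
        t, h = f"g{next(fresh)}", f"g{next(fresh)}"
        link(u, t); link(t, h); link(h, v)
        link(t, f"g{next(fresh)}")            # 1 pendant -> tail marker
        for _ in range(k + 1):                # k + 1 pendants -> label k
            link(h, f"g{next(fresh)}")
    return adj

def decode(vertices, adj):
    """Recover the labeled directed edges from the gadget topology alone;
    only membership in the original vertex set is used, never node names."""
    V = set(vertices)
    edges = set()
    for t in adj:
        if t in V:
            continue
        pendants = [n for n in adj[t] if len(adj[n]) == 1]
        anchors = [n for n in adj[t] if n in V]
        if len(pendants) == 1 and len(anchors) == 1:   # t is a tail node
            u = anchors[0]
            h = next(n for n in adj[t] if n not in V and len(adj[n]) > 1)
            v = next(n for n in adj[h] if n in V)
            k = sum(1 for n in adj[h] if len(adj[n]) == 1) - 1
            edges.add((u, v, k))
    return edges
```

The round trip demonstrates injectivity on this toy class of inputs: distinct labeled digraphs produce distinguishable undirected graphs.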
Abstract: This research uses computational linguistics, an area of study that employs a computer to process natural language, and aims to discern the patterns that exist in the declarative sentences used in technical texts. The approach is mathematical, and the focus is on instructional texts found on web pages. The technique, developed by the author and named the MAYA Semantic Technique, is organized into four stages. In the first stage, the parts of speech in each sentence are identified. In the second stage, the subject of the sentence is determined. In the third stage, MAYA performs a frequency analysis on the remaining words to determine the verb and its object. In the fourth stage, MAYA performs statistical analysis to determine the content of the web page. The advantage of the MAYA Semantic Technique lies in its use of mathematical principles to represent grammatical operations, which improves processing speed and accuracy when applied to unambiguous text. The MAYA Semantic Technique is part of a proposed architecture for an entire web-based intelligent tutoring system. On a sample set of sentences, partial semantics derived using the MAYA Semantic Technique were approximately 80% accurate. The system currently processes technical text in one domain, namely Cµ programming. In this domain, all the keywords and programming concepts are known and understood.
Abstract: Reverse engineering is an important process in software engineering. It works backwards through the system development life cycle (SDLC) to recover the source data or representations of a system through analysis of its structure, function, and operation. We apply reverse engineering to introduce an automatic tool that generates system requirements from program source code. The tool accepts Cµ programming source code, scans it line by line, and passes it to a parser. The engine of the tool then generates system requirements for that specific program, to facilitate its reuse and enhancement. The purpose of the tool is to help recover the system requirements of a system when the system requirements document (SRD) does not exist because the system was left undocumented.
Abstract: The Fractional Fourier Transform is a powerful tool that generalizes the classical Fourier Transform. This paper derives a mathematical relation between the span in the Fractional Fourier domain and the amplitude and phase functions of the signal, which is then used to study the variation of the quality factor with different values of the transform order. It is seen that as the number of transients in the signal increases, the deviation of the average Fractional Fourier span from the frequency bandwidth increases. Also, as the transient nature of the signal increases, the optimum value of the transform order can be estimated from the quality-factor variation, and this value is found to be very close to the one for which the most compact representation is obtained. With the entire mathematical analysis and experimentation, we consolidate the fact that the Fractional Fourier Transform gives more optimal representations for a number of transform orders than the Fourier Transform does.
Abstract: In this paper, a model of self-organizing spiking neural networks is introduced and applied to mobile robot environment representation and the path planning problem. A network of spike-response-model neurons with a recurrent architecture is used to create the robot's internal representation of the surrounding environment. The overall activity of the network simulates a self-organizing system with unsupervised learning. A modified A* algorithm is used to find the best path between the start and goal points using this internal representation. This method performs well for both known and unknown environments.
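The planning step can be illustrated with a plain (unmodified) A* on a simple occupancy grid; this is a generic sketch only, since the paper's modified A* operates on the internal representation learned by the spiking network, which is not reproduced here.

```python
import heapq, itertools

def astar(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (0 = free, 1 = blocked).
    Unit step costs; the Manhattan heuristic is admissible here, so the
    returned path is shortest."""
    rows, cols = len(grid), len(grid[0])
    def h(p):                                  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                    # tie-breaker for heap ordering
    frontier = [(h(start), next(tie), 0, start, None)]
    parent = {}
    while frontier:
        _, _, g, cur, prev = heapq.heappop(frontier)
        if cur in parent:                      # already expanded
            continue
        parent[cur] = prev
        if cur == goal:                        # rebuild path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and nxt not in parent:
                heapq.heappush(frontier,
                               (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None                                # goal unreachable
```

The lazy-deletion pattern (skip nodes already in `parent`) keeps the code short at the cost of some duplicate heap entries.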
Abstract: Under a limited investment budget, a utility company is required to maximize the utilization of its existing assets over their life cycle while satisfying both engineering and financial requirements. However, utilities often lack knowledge about the status of each asset in the portfolio, in terms of either technical or financial value. This paper presents a knowledge-based model that helps utility companies make optimal decisions on power transformer utilization. CommonKADS, a structured methodology for knowledge and expertise representation, is used to design and develop the knowledge-based model. A case study of a one-MVA power transformer of the Nepal Electricity Authority is presented. The results show that reusable knowledge can be categorized, modeled, and utilized within the utility company using the proposed methodology. Moreover, the results show that the utility company can achieve both engineering and financial benefits from its use.
Abstract: Based on the component approach, three kinds of dynamic load models (a single-motor model, a two-motor model, and a composite load model) have been developed for stability studies of the Khuzestan power system. The study results are presented in this paper. Voltage instability is a dynamic phenomenon and therefore requires dynamic representation of the power system components. Industrial loads contain a large fraction of induction machines, and several models of different complexity are available for such investigations. This study evaluates the dynamic performance of several dynamic load models in combination with the dynamics of a load tap-changing transformer. The case study is a steel industry substation in the Khuzestan power system.
Abstract: Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep
understanding of each technique and be able to perform sophisticated
analysis. In this article we describe a tool built to simulate a variation
of the Kohonen network to perform unsupervised clustering and
support the entire data mining process up to results visualization. A
graphical representation helps the user find a strategy to optimize the classification by adding, moving, or deleting a neuron in order to change the number of classes. The tool can automatically suggest a strategy to optimize the number of classes, and it also supports both tree classifications and semi-lattice organizations of the classes, giving users the possibility of passing from one class to the ones with which it shares some aspects. Examples of tree and semi-lattice classifications are given to illustrate their advantages and problems. The tool is applied to classify macroeconomic data reporting the imports and exports of the most developed countries. It is possible to classify the countries based on their economic behaviour and to use the tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation. Possible interrelationships between the classes and their meaning are also discussed.
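The underlying clustering step can be sketched with a minimal one-dimensional Kohonen network. This toy version (scalar inputs, two units, a fixed two-phase neighbourhood schedule) is an assumption of this sketch and omits the tool's interactive neuron editing, visualization, and tree/semi-lattice organization.

```python
def kohonen_1d(data, n_units, epochs=20, lr=0.5):
    """Minimal 1-D Kohonen (self-organizing map) sketch: for each input,
    the nearest unit ('winner') is pulled toward it; units within the
    neighbourhood radius get half-strength updates. The radius shrinks
    from 1 to 0 halfway through training so units specialize."""
    lo, hi = min(data), max(data)
    # spread the initial weights evenly across the data range
    w = [lo + (hi - lo) * i / max(n_units - 1, 1) for i in range(n_units)]
    for epoch in range(epochs):
        radius = 1 if epoch < epochs // 2 else 0
        for x in data:
            win = min(range(n_units), key=lambda i: abs(x - w[i]))
            for j in range(n_units):
                d = abs(j - win)
                if d <= radius:
                    # full update for the winner, half for neighbours
                    w[j] += lr * (0.5 ** d) * (x - w[j])
    return w
```

With two well-separated scalar clusters, the two units end up near the respective cluster centres, which is the unsupervised classification the tool visualizes.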
Abstract: Electronic systems are at the core of everyday life. They form an integral part of financial networks, mass transit, telephone systems, power plants, and personal computers. Electronic systems are increasingly based on complex VLSI (Very Large Scale Integration) circuits. Initial electronic design automation is
concerned with the design and production of VLSI systems. The next
important step in creating a VLSI circuit is Physical Design. The
input to the physical design is a logical representation of the system
under design. The output of this step is the layout of a physical
package that optimally or near optimally realizes the logical
representation. Physical design problems are combinatorial in nature
and of large problem size. Darwin observed that, as variations are introduced into a population with each new generation, the less-fit individuals tend to die out in the competition for basic necessities. This survival-of-the-fittest principle leads to the evolution of species. The objective of Genetic Algorithms (GAs) is to find an optimal solution to a problem. Since GAs are heuristic procedures that can function as optimizers, they are not guaranteed to find the optimum, but they are able to find acceptable solutions for a wide range of problems. This survey paper studies efficient algorithms for VLSI physical design and observes the common traits of the superior contributions.
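The GA loop described above can be sketched generically. The bit-string encoding and the `sum` ("one-max") fitness used below are placeholders of this sketch; a real placement GA would encode module positions and score wirelength and area instead.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=80,
                      p_mut=0.05, seed=0):
    """Generic GA loop: tournament selection, one-point crossover,
    bit-flip mutation, and elitism (the best individual always survives,
    so the best fitness seen is non-decreasing across generations)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    history = []                                  # best fitness per generation
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        history.append(fitness(scored[0]))
        next_pop = [scored[0][:]]                 # elitism: keep the best
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    history.append(fitness(best))
    return best, history
```

As the abstract notes, nothing guarantees the optimum is found; elitism only guarantees the best solution so far is never lost.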
Abstract: The frequency content of non-stationary signals varies with time. For proper characterization of such signals, a smart time-frequency representation is necessary. Classically, the STFT (short-time Fourier transform) is employed for this purpose. Its limitation is its fixed time-frequency resolution. To overcome this drawback, an enhanced STFT version is devised. It is based on a signal-driven sampling scheme, named cross-level sampling, which can adapt the sampling frequency and the window function (length plus shape) by following the local variations of the input signal. This adaptation gives the proposed technique its appealing features: adaptive time-frequency resolution and computational efficiency.
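The classical fixed-resolution STFT that the proposed technique improves upon can be sketched directly (a naive per-frame DFT with a Hann window, for illustration only; the signal-driven cross-level sampling itself is not reproduced here).

```python
import cmath, math

def stft(x, win_len, hop):
    """Fixed-resolution STFT baseline: slide a Hann window of constant
    length over the signal and take a naive DFT of each frame. Returns
    per-frame magnitude spectra for bins 0 .. win_len // 2. O(n^2) per
    frame; a real implementation would use an FFT."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / win_len)
           for n in range(win_len)]
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = [x[start + n] * win[n] for n in range(win_len)]
        spec = [sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                    for n in range(win_len))
                for k in range(win_len // 2 + 1)]
        frames.append([abs(c) for c in spec])
    return frames
```

Because `win_len` is constant, every frame has the same time-frequency resolution, which is exactly the limitation the adaptive scheme addresses.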
Abstract: Super-quadrics can represent a set of implicit surfaces, which can furthermore be used as primitive surfaces to construct a complex object via Boolean set operations in implicit surface modeling. In fact, super-quadrics were developed to create a parametric surface by performing a spherical product on two parametric curves, and some of the resulting parametric surfaces were also represented as implicit surfaces. However, because not every parametric curve can be redefined implicitly, only implicit super-elliptic and super-hyperbolic curves can be used in the spherical product, and so only implicit super-ellipsoids and hyperboloids are developed among super-quadrics. To create implicit
surfaces with more diverse shapes than super-quadrics, this paper
proposes an implicit representation of spherical product, which
performs spherical product on two implicit curves like super-quadrics
do. By means of the implicit representation, many new implicit curves
such as polygonal, star-shaped and rose-shaped curves can be used to
develop new implicit surfaces with a greater variety of shapes than
super-quadrics, such as polyhedrons, hyper-ellipsoids, super-hyperboloids and hyper-toroids containing star-shaped and rose-shaped major and minor circles. Besides, the newly developed implicit
surfaces can also be used to define new primitive implicit surfaces for
constructing a more complex implicit surface in implicit surface
modeling.
Abstract: Recently the usefulness of Concept Abduction, a novel non-monotonic inference service for Description Logics (DLs), has been argued in the context of ontology-based applications such as semantic matchmaking and resource retrieval. Based on tableau calculus, a method has been proposed to realize this reasoning task in ALN, a description logic that supports simple cardinality restrictions as well as other basic constructors. However, in many ontology-based systems the representation of the ontology requires expressive formalisms for capturing domain-specific constraints, and this language is not sufficient. In order to increase the applicability of the abductive reasoning method in such contexts, we present in the scope of this paper an extension of the tableaux-based algorithm for dealing with concepts represented in ALCQ, the description logic that extends ALN with full concept negation and qualified number restrictions.
Abstract: Problem solving has traditionally been one of the principal research areas of artificial intelligence. Yet, although artificial intelligence reasoning techniques have been employed in several product support systems, the benefit of integrating product support, knowledge engineering, and problem solving is still unclear. This paper studies the synergy of these areas and proposes a knowledge engineering framework that integrates product support systems and artificial intelligence techniques. The framework includes four spaces: data, problem, hypothesis, and solution. The data space incorporates the knowledge needed for structured reasoning to take place, the problem space contains representations of problems, and the hypothesis space uses a multimodal reasoning approach to produce appropriate solutions in the form of virtual documents. The solution space serves as the gateway between the system and the user. The proposed framework enables the development of product support systems in smaller, more manageable steps, while the combination of different reasoning techniques provides a way to overcome the lack of documentation resources.
Abstract: Business Process Modeling (BPM) is the first and most important step in the business process management lifecycle. Graph-based formalism and rule-based formalism are the two most predominant formalisms on which process modeling languages are built. BPM technology continues to face challenges in coping with dynamic business environments where requirements and goals are constantly changing at execution time. Graph-based formalisms have difficulty reacting to dynamic changes in a Business Process (BP) in its runtime instances. In this research, an adaptive and flexible framework based on the integration of an Object-Oriented diagramming technique with the Petri Net modeling language is proposed, in order to support change-management techniques for BPM and to increase the capability of Object-Oriented modeling to represent dynamic changes in runtime instances. The proposed framework is applied in a higher-education environment to achieve flexible, updatable, and dynamic BPs.
Abstract: The Model for Knowledge Base of Computational Objects (KBCO model) has been successfully applied to represent human knowledge in fields such as Plane Geometry, Physics, and Calculus. However, the original model cannot easily be applied in the field of inorganic chemistry because of domain-specific knowledge problems. The aim of this article is to introduce how we extend the Computational Object (Com-Object) in the KBCO model, the kinds of facts, the problem models, and the inference algorithms to develop a program for solving problems in inorganic chemistry. Our purpose is to develop an application that can help students study inorganic chemistry at school. The application was built successfully using Maple, C#, and WPF technology. It can solve problems automatically and give human-readable solutions that agree with those written by students and teachers.
Abstract: A new hybrid coding method for compressing
animated polygonal meshes is presented. This paper assumes a simple representation of the geometric data: a temporal
sequence of polygonal meshes for each discrete frame of the
animated sequence. The method utilizes a delta coding and an
octree-based method. In this hybrid method, both the octree
approach and the delta coding approach are applied to each
single frame in the animation sequence in parallel. The
approach that generates the smaller encoded file size is chosen
to encode the current frame. Given the same quality
requirement, the hybrid coding method can achieve much
higher compression ratio than the octree-only method or the
delta-only method. The hybrid approach can represent 3D
animated sequences with higher compression factors while
maintaining reasonable quality. It is easy to implement, and has a low-cost encoding process and a fast decoding process, which makes it a better choice for real-time applications.
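The per-frame codec selection can be sketched as follows. This is a simplified stand-in: JSON length substitutes for the real bitstream size, a raw frame substitutes for the octree coder (which is beyond this sketch), and the frame format (a flat list of coordinates) is an assumption.

```python
import json

def encode_frame(prev, cur):
    """Per-frame hybrid choice: build both candidate representations and
    keep whichever serializes smaller. 'delta' stores per-coordinate
    displacements from the previous frame; 'raw' stores the frame as-is
    (standing in here for the octree coder)."""
    delta = [round(c - p, 6) for p, c in zip(prev, cur)]
    cand = {"delta": delta, "raw": cur}
    mode = min(cand, key=lambda m: len(json.dumps(cand[m])))
    return mode, cand[mode]

def decode_frame(prev, mode, payload):
    """Invert encode_frame given the previously decoded frame."""
    if mode == "raw":
        return list(payload)
    return [round(p + d, 6) for p, d in zip(prev, payload)]
```

Frames that change little win with the delta coder (small displacements serialize compactly), while large changes fall back to the other coder, which mirrors the paper's per-frame selection rule.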
Abstract: A protein residue contact map is a compact representation of protein structure. Due to the information held in the contact map, it has drawn attention from researchers in the field, and plenty of work has been done on it throughout the past decade. Artificial intelligence approaches have been widely adopted in related work, such as neural networks, genetic programming, and hidden Markov models, as well as support vector machines. However, prediction performance has not generalized well, probably because it depends on the data used to train and generate the prediction model. This situation shows the importance of the features, or information, used in determining prediction performance. In this research, a support vector machine was used to predict the protein residue contact map with different combinations of features, in order to show and analyze the effectiveness of those features.
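The contact map itself has a standard definition that is easy to compute from residue coordinates. This sketch shows the representation only, not the paper's SVM predictor; the 8 Å cutoff is a common but not universal choice.

```python
import math

def contact_map(coords, threshold=8.0):
    """Binary residue contact map from per-residue 3-D coordinates
    (e.g. C-alpha positions): residues i and j are in contact when their
    Euclidean distance is below the threshold. The map is symmetric with
    a zero diagonal."""
    n = len(coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) < threshold:
                cmap[i][j] = cmap[j][i] = 1
    return cmap
```

A predictor such as the SVM in the abstract is trained to output this binary matrix from sequence-derived features rather than from known coordinates.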
Abstract: In a handwriting recognition problem, characters can be represented using chain codes. The main problem in representing characters with chain codes is optimizing the length of the chain code. This paper proposes using a randomized algorithm to minimize the length of the Freeman Chain Codes (FCC) generated from isolated handwritten characters. A feedforward neural network is used in the classification stage to recognize the character images. Our test results show that, by applying the proposed model, we reach a relatively high accuracy on isolated handwritten characters when tested on the NIST database.
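The chain-code representation can be sketched as follows. This covers only the standard 8-direction Freeman encoding of a pixel path; the paper's randomized length-minimization over alternative traversals is not reproduced.

```python
# 8-connected Freeman direction codes: index = code, value = (dx, dy),
# starting east (code 0) and rotating counter-clockwise.
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def freeman_chain_code(path):
    """Freeman chain code of an 8-connected pixel path: one code (0-7)
    per step between consecutive pixels. Raises ValueError if two
    consecutive pixels are not 8-neighbours."""
    code = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code
```

Different traversal orders of the same character boundary yield chain codes of different lengths, which is the degree of freedom a randomized minimizer would exploit.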
Abstract: In this paper three different approaches for person
verification and identification, i.e. by means of fingerprints, face and
voice recognition, are studied. Face recognition uses parts-based
representation methods and a manifold learning approach. The
assessment criterion is recognition accuracy. The techniques under
investigation are: a) Local Non-negative Matrix Factorization
(LNMF); b) Independent Components Analysis (ICA); c) NMF with
sparse constraints (NMFsc); d) Locality Preserving Projections
(Laplacianfaces). Fingerprint detection was approached by classical minutiae (small graphical patterns) matching through image segmentation, using a structural approach and a neural network as the decision block. For voice/speaker recognition, mel-cepstral and delta-delta mel-cepstral analysis were used as the main methods, in order to construct a supervised, speaker-dependent voice recognition system. The final decision (e.g., "accept/reject" for a verification task) is taken using a majority voting technique applied to the three biometrics. The preliminary results, obtained for medium-sized
databases of fingerprints, faces and voice recordings, indicate the
feasibility of our study and an overall recognition precision (about
92%) permitting the utilization of our system for a future complex
biometric card.
Abstract: Representations of the human head are usually based on the morphological and structural components of a real model. Over time, it has become increasingly necessary to build full virtual models that comply rigorously with the specifications of human anatomy. Still, making and using a model perfectly fitted to the real anatomy is a difficult task, because it requires large hardware resources and significant processing time. That is why it is necessary to choose the best compromise solution, one that keeps the right balance between perfection of detail and resource consumption, in order to obtain facial animations with real-time rendering. We present here the way in which we achieved such a 3D system, which we intend to use as a starting point for creating facial animations with real-time rendering, used in medicine to find and identify different types of pathologies.