Abstract: Artificial neural networks (ANNs) can model input-output relationships by processing raw data. This characteristic makes them invaluable in industry domains where such knowledge is scarce at best. In recent decades, in order to overcome the black-box character of ANNs, researchers have attempted to extract the knowledge embedded within ANNs in the form of rules that can be used in inference systems. This paper presents a new technique that extracts a small set of rules from a two-layer ANN. The extracted rules yield high classification accuracy when implemented within a fuzzy inference system. The technique targets industry domains with less complex problems for which no expert knowledge exists and for which a simpler solution is preferred to a complex one. The proposed technique is simpler, more efficient, and more widely applicable than most previously proposed techniques.
Abstract: Program slicing is the task of finding all statements in
a program that directly or indirectly influence the value of a variable
occurrence. The set of statements that can affect the value of a
variable at some point in a program is called a program backward
slice. In several software engineering applications, such as program
debugging and measuring program cohesion and parallelism, several
slices are computed at different program points. Existing
algorithms for computing program slices are designed to compute a
slice at a single program point: the program, or the model that
represents it, is traversed completely or partially once. To compute
more than one slice, the same algorithm is applied at every point of
interest in the program, so the same program, or program
representation, is traversed several times.
In this paper, an algorithm is introduced to compute all forward
static slices of a computer program by traversing the program
representation graph once. Therefore, the introduced algorithm is
useful for software engineering applications that require computing
program slices at different points of a program. The program
representation used in this paper is the Program Dependence
Graph (PDG).
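As background, a single slice over a PDG reduces to reachability along dependence edges; this per-point traversal is the baseline the paper improves on by computing all slices in one pass. A minimal sketch, with a hypothetical adjacency-map encoding of the PDG and made-up statement labels:

```python
def backward_slice(pdg, point):
    """Return all statements the given point transitively depends on,
    by depth-first reachability over dependence edges."""
    visited = set()
    stack = [point]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        stack.extend(pdg.get(node, ()))
    return visited

# Hypothetical PDG: statement 4 depends on 2 and 3; 2 and 3 depend on 1.
pdg = {4: [2, 3], 2: [1], 3: [1], 1: []}
print(sorted(backward_slice(pdg, 4)))  # [1, 2, 3, 4]
```

Computing n slices this way re-traverses the graph n times, which is exactly the repeated work the single-traversal algorithm avoids.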
Abstract: In this work we present an efficient approach for face
recognition in the infrared spectrum. In the proposed approach,
physiological features are extracted from thermal images in order to
build a unique thermal faceprint. Then, a distance transform is used
to obtain an invariant representation for face recognition. The obtained
physiological features are related to the distribution of blood vessels
under the face skin. This blood network is unique to each individual
and can be used in infrared face recognition. The obtained results are
promising and show the effectiveness of the proposed scheme.
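The distance-transform step can be illustrated with the classic two-pass chamfer sweep over a binary map; the tiny grid and city-block metric below are illustrative assumptions (the paper operates on vascular networks extracted from thermal images):

```python
def distance_transform(grid):
    """City-block distance from each cell to the nearest feature (1) cell,
    computed with the standard two-pass chamfer sweep."""
    rows, cols = len(grid), len(grid[0])
    INF = rows + cols  # larger than any possible city-block distance
    d = [[0 if grid[r][c] else INF for c in range(cols)] for r in range(rows)]
    for r in range(rows):               # forward pass: top-left to bottom-right
        for c in range(cols):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    for r in range(rows - 1, -1, -1):   # backward pass: bottom-right to top-left
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < cols - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d

vessels = [[0, 1, 0],
           [0, 0, 0],
           [0, 0, 1]]
print(distance_transform(vessels))  # [[1, 0, 1], [2, 1, 1], [2, 1, 0]]
```

Each pixel then carries its distance to the nearest vessel, a representation that is more tolerant of small segmentation differences than the raw vessel map.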
Abstract: Writer identification is one of the areas in pattern
recognition that attracts many researchers, particularly in
forensic and biometric applications, where writing style can be
used as a biometric feature for authenticating an identity. The
challenging task in writer identification is the extraction of unique
features by which the individuality of handwriting styles
can be captured in a bio-inspired generalized global shape for
writer identification. In this paper, the feasibility of the generalized
global shape concept of complementary binding in the Artificial
Immune System (AIS) for writer identification is explored. An
experiment based on the proposed framework has been conducted
to prove the validity and feasibility of the proposed approach for
off-line writer identification.
Abstract: The approach based on the wavelet transform has
been widely used for image denoising due to its multi-resolution
nature, its ability to produce high levels of noise reduction and the
low level of distortion introduced. However, by removing noise, high
frequency components belonging to edges are also removed, which
leads to blurring the signal features. This paper proposes a new
method of image noise reduction based on local variance and edge
analysis. The analysis is performed by dividing an image into 32 x 32
pixel blocks and transforming the data into the wavelet domain. A fast
lifting wavelet spatial-frequency decomposition and reconstruction is
developed, with the advantages of computational efficiency and
minimized boundary effects. Adaptive thresholding based on local
variance estimation and edge strength measurement can effectively
reduce image noise while preserving the features of the original image
corresponding to the boundaries of objects. Experimental results
demonstrate that the method performs well for images contaminated
by natural and artificial noise, and can be adapted to
different classes of images and types of noise. The proposed algorithm
also provides a potential solution, with parallel computation, for real-time
or embedded system applications.
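Variance-adaptive thresholding of wavelet coefficients can be sketched with the well-known BayesShrink-style rule, shown here as a stand-in: the paper's exact threshold and its edge-strength term are not reproduced, and the coefficient values are made up.

```python
import math

def soft_threshold(coeffs, noise_var):
    """Soft-threshold a block of wavelet detail coefficients with a
    variance-adaptive threshold T = noise_var / signal_std
    (BayesShrink-style; an assumption, not the paper's exact rule)."""
    n = len(coeffs)
    mean_sq = sum(c * c for c in coeffs) / n
    # Local signal variance estimate: total variance minus noise variance.
    signal_var = max(mean_sq - noise_var, 1e-12)
    t = noise_var / math.sqrt(signal_var)
    # Shrink each coefficient toward zero by t, preserving its sign.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

print(soft_threshold([4.0, -0.5, 0.2, -3.0], noise_var=0.25))
```

Blocks with high local variance (likely edges) get a small threshold and are barely shrunk, while flat, noisy blocks get a large threshold, which is the mechanism behind edge preservation.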
Abstract: An application framework provides a reusable
design and implementation for a family of software systems.
Frameworks are introduced to reduce the cost of a product line
(i.e., a family of products that share common features). Software
testing is a time-consuming and costly ongoing activity during the
application software development process. Generating reusable test
cases for framework applications at the framework
development stage, and using those test cases to test
part of each framework application whenever the framework is used,
reduces application development time and cost considerably.
Framework Interface Classes (FICs) are classes introduced by
the framework hooks to be implemented at the application
development stage. They can have reusable test cases generated at
the framework development stage and provided with the
framework to test the implementations of the FICs at the
application development stage. In this paper, we conduct a case
study using thirteen applications developed with three
frameworks: one domain-oriented and two application-oriented.
The results show that, in general, the percentage of the number of
FICs in the applications developed using domain frameworks is, on
average, greater than the percentage of the number of FICs in the
applications developed using application frameworks.
Consequently, the reduction of the application unit testing time
using the reusable test cases generated for domain frameworks is,
in general, greater than the reduction of the application unit testing
time using the reusable test cases generated for application
frameworks.
Abstract: The design of distributed systems involves the
partitioning of the system into components or partitions and the
allocation of these components to physical nodes. Techniques have
been proposed for both the partitioning and allocation process.
However, these techniques suffer from a number of limitations. For
instance, object replication has the potential to greatly improve the
performance of an object-oriented distributed system, but it can be
difficult to use effectively, and few techniques support
the developer in harnessing object replication.
This paper presents a methodological technique that helps
developers decide how objects should be allocated in order to
improve performance in a distributed system that supports
replication. The performance of the proposed technique is
demonstrated and tested on an example system.
Abstract: In an era of knowledge explosion, data volumes
grow rapidly day by day. Since data storage is a limited resource,
reducing the space data occupies becomes a challenging issue.
Data compression provides a good solution that can lower the
required space. Data mining has found many useful applications in
recent years because it helps users discover interesting knowledge in
large databases. However, existing compression algorithms are not
appropriate for data mining. In [1, 2], two different approaches were
proposed to compress databases and then perform the data mining
process. However, both lack the ability to decompress the data to
its original state and to improve data mining performance. In this
research, a new approach called Mining Merged Transactions with the
Quantification Table (M2TQT) is proposed to solve these problems.
M2TQT uses the relationship of transactions to merge related
transactions and builds a quantification table to prune the candidate
itemsets which are impossible to become frequent in order to improve
the performance of mining association rules. The experiments show
that M2TQT performs better than existing approaches.
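The two ideas named in the abstract, merging related transactions and pruning candidate items with a count table, can be sketched as follows; this is a hedged simplification (it merges only identical transactions and prunes 1-itemsets), not M2TQT itself:

```python
from collections import Counter

def merge_transactions(transactions):
    """Merge identical transactions into (itemset -> count) pairs so each
    distinct transaction is scanned once during mining (a simplified
    stand-in for M2TQT's merging of related transactions)."""
    return Counter(frozenset(t) for t in transactions)

def prune_items(merged, min_support):
    """Quantification-table-style pruning: drop items whose total count
    cannot reach min_support, so they never enter candidate itemsets."""
    counts = Counter()
    for itemset, n in merged.items():
        for item in itemset:
            counts[item] += n
    return {item for item, n in counts.items() if n >= min_support}

txns = [["a", "b"], ["a", "b"], ["a", "c"], ["b"], ["d"]]
merged = merge_transactions(txns)
print(prune_items(merged, min_support=2))  # {'a', 'b'}
```

Because infrequent items are removed before candidate generation, the downstream association-rule miner enumerates far fewer candidate itemsets.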
Abstract: A multilayer self organizing neural network
(MLSONN) architecture for binary object extraction, guided by a beta
activation function and characterized by backpropagation of errors
estimated from the linear indices of fuzziness of the network output
states, is discussed. Since the MLSONN architecture is designed to
operate in a single-point fixed/uniform thresholding scenario, it does
not account for the heterogeneity of image information in
the extraction process. The performance of the MLSONN architecture
with representative values of the threshold parameters of the beta
activation function employed is also studied. A three layer bidirectional
self organizing neural network (BDSONN) architecture
comprising fully connected neurons, for the extraction of objects from
a noisy background and capable of incorporating the underlying image
context heterogeneity through variable and adaptive thresholding,
is proposed in this article. The input layer of the network architecture
represents the fuzzy membership information of the image scene to
be extracted. The second layer (the intermediate layer) and the final
layer (the output layer) of the network architecture deal with the self
supervised object extraction task by bi-directional propagation of the
network states. Each layer except the output layer is connected to the
next layer following a neighborhood based topology. The output layer
neurons are in turn, connected to the intermediate layer following
similar topology, thus forming a counter-propagating architecture
with the intermediate layer. The novelty of the proposed architecture
is that the assignment/updating of the inter-layer connection weights
is done using the relative fuzzy membership values at the constituent
neurons in the different network layers. Another interesting feature
of the network lies in the fact that the processing capabilities of
the intermediate and the output layer neurons are guided by a beta
activation function, which uses image context sensitive adaptive
thresholding arising out of the fuzzy cardinality estimates of the
different network neighborhood fuzzy subsets, rather than resorting to
fixed and single point thresholding. An application of the proposed
architecture for object extraction is demonstrated using a synthetic
and a real life image. The extraction efficiency of the proposed
network architecture is evaluated by a proposed system transfer index
characteristic of the network.
Abstract: Functionalities and control behavior are both primary
requirements in design of a complex system. Automata theory plays
an important role in modeling the behavior of a system. Z is an ideal
notation for describing the state space of a system and then
defining operations over it. Consequently, an integration of automata
and Z will be an effective tool for increasing the modeling power for a
complex system. Further, a nondeterministic finite automaton (NFA)
may have different implementations, and it is therefore necessary to
verify the transformation from diagrams to code. If we describe a
formal specification of an NFA before implementing it, then
confidence in the transformation can be increased. In this paper, we
give a procedure for integrating NFA and Z. The complement of a
special type of NFA is defined. Then the union of two NFAs is
formalized after defining their complements. Finally, the formal
construction of the intersection of NFAs is described. The specification
of this relationship is analyzed and validated using the Z/EVES tool.
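The intersection construction mentioned in the abstract is the classical product automaton; a minimal executable sketch follows, with a tuple encoding of NFAs that is our assumption (the paper formalizes NFAs in Z rather than code):

```python
def intersect_nfa(nfa1, nfa2):
    """Product construction for NFA intersection. Each NFA is a tuple
    (states, alphabet, delta, start, accepting) with
    delta: dict[(state, symbol)] -> set of successor states."""
    states1, sigma, delta1, start1, accept1 = nfa1
    states2, _, delta2, start2, accept2 = nfa2
    delta = {}
    for a in sigma:
        for p in states1:
            for q in states2:
                # A product transition exists when both components can move.
                delta[((p, q), a)] = {(p2, q2)
                                      for p2 in delta1.get((p, a), set())
                                      for q2 in delta2.get((q, a), set())}
    product_states = {(p, q) for p in states1 for q in states2}
    accepting = {(p, q) for p in accept1 for q in accept2}
    return product_states, sigma, delta, (start1, start2), accepting

def accepts(nfa, word):
    """Simulate an NFA on a word by tracking the reachable state set."""
    _, _, delta, start, accepting = nfa
    current = {start}
    for a in word:
        current = set().union(*(delta.get((s, a), set()) for s in current))
    return bool(current & accepting)

# N1 accepts words over {a, b} ending in 'a'; N2 accepts non-empty words.
n1 = ({0, 1}, {"a", "b"}, {(0, "a"): {0, 1}, (0, "b"): {0}}, 0, {1})
n2 = ({0, 1}, {"a", "b"},
      {(0, "a"): {1}, (0, "b"): {1}, (1, "a"): {1}, (1, "b"): {1}}, 0, {1})
prod = intersect_nfa(n1, n2)
print(accepts(prod, "ba"), accepts(prod, "ab"))  # True False
```

A Z specification of the same construction would state exactly these state, transition, and acceptance sets as schemas, which is what tools such as Z/EVES can then analyze.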
Abstract: An application framework provides a reusable design
and implementation for a family of software systems. Application
developers extend the framework to build their particular
applications using hooks. Hooks are the places identified to show
how to use and customize the framework. Hooks define the
Framework Interface Classes (FICs) and the specifications of their
methods. As part of the development life cycle, it is required to test
the implementations of the FICs. Building a testing model to express
the behavior of a class is an essential step for the generation of the
class-based test cases. The testing model has to be consistent with the
specifications provided for the hooks. State-based models consisting
of states and transitions are testing models well suited to
object-oriented software. Typically, hand-construction of a state-based
model of a class's behavior is expensive and error-prone, and may result in
a model inconsistent with the specifications of the class
methods, which misleads verification results. In this paper, a
technique is introduced to automatically synthesize a state-based
testing model for FICs using the specifications provided for the
hooks. A tool that supports the proposed technique is introduced.
Abstract: Most biclustering/projected clustering algorithms are based either on the Euclidean distance or on the correlation coefficient, which capture only linear relationships. However, in many applications, such as gene expression data and word-document data, nonlinear relationships may exist between the objects. Mutual information between two variables provides a more general criterion for investigating dependencies among variables. In this paper, we improve upon our previous algorithm that uses mutual information for biclustering, in terms of computation time and the types of clusters identified. The algorithm is able to find biclusters with mixed relationships and is faster than the previous one. To the best of our knowledge, none of the other existing biclustering algorithms has used mutual information as a similarity measure. We present experimental results on synthetic data as well as on yeast expression data. Biclusters on the yeast data were found to be biologically and statistically significant using GO Tool Box and FuncAssociate.
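The similarity measure at the heart of the abstract is empirical mutual information, I(X;Y) = Σ p(x,y) log[p(x,y) / (p(x)p(y))]; a minimal sketch for paired discrete observations (the paper's full biclustering procedure is not reproduced):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two paired
    sequences of discrete observations."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly dependent (anti-correlated) binary pair: I(X;Y) = log 2.
print(mutual_information([0, 1, 0, 1], [1, 0, 1, 0]))  # 0.693...
```

Unlike the correlation coefficient, this value is maximal here even though the relationship is a deterministic flip, and it stays high for nonlinear dependencies that correlation misses.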
Abstract: An application framework provides a reusable design
and implementation for a family of software systems. Application
developers extend the framework to build their particular
applications using hooks. Hooks are the places identified to show
how to use and customize the framework. Hooks define the
Framework Interface Classes (FICs) and their possible specifications,
which helps in building reusable test cases for the implementations of
these classes. This paper introduces a novel technique called all
paths-state to generate state-based test cases to test the FICs at class
level. The technique is experimentally evaluated. The empirical
evaluation shows that the all paths-state technique produces test cases
with a higher degree of coverage of the specifications of the
implemented FICs than test cases generated using the round-trip
path and all-transition techniques.
Abstract: In many countries, digital city or ubiquitous city
(u-City) projects have been initiated to provide digitalized economic
environments to cities. Recently in Korea, Kangwon Province has
started the u-Kangwon project to boost local economy with digitalized
tourism services. We analyze the limitations of the ubiquitous IT
approach through the u-Kangwon case. We have found that travelers
are more interested in the quality of information than in the speed of access. For
improved service quality, we are looking to develop an
IT-convergence service design framework (ISDF). The ISDF is based
on the service engineering technique and composed of three parts:
Service Design, Service Simulation, and the Service Platform.
Abstract: Accurate demand forecasting is one of the key
issues in the inventory management of spare parts. The problem of
modeling future consumption becomes especially difficult for lumpy
patterns, which are characterized by intervals with no
demand and periods with actual demand occurrences with large
variation in demand levels. Many forecasting
methods perform poorly when demand for an item is lumpy.
In this study, based on the characteristics of the lumpy demand
patterns of spare parts, a hybrid forecasting approach has been
developed that uses a multi-layered perceptron neural network and a
traditional recursive method for forecasting future demands. In the
described approach, the multi-layered perceptron is adapted to
forecast occurrences of non-zero demands, and a conventional
recursive method is then used to estimate the quantity of non-zero
demands. To evaluate the performance of the proposed
approach, its forecasts were compared with those obtained using the
Syntetos & Boylan approximation and the multi-layered
perceptron, generalized regression, and Elman recurrent neural
networks recently employed in this area. The models were
applied to forecast future demand for spare parts of the Arak
Petrochemical Company in Iran, using 30 real data sets. The
results indicate that the forecasts obtained using the proposed
model are superior to those obtained using the other methods.
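The Syntetos & Boylan approximation named as a benchmark is a bias-corrected Croston method: exponentially smooth the non-zero demand size z and the inter-demand interval p, then forecast (1 - α/2)·z/p. A minimal sketch (the initialization choice for the first interval is our assumption):

```python
def sba_forecast(demands, alpha=0.1):
    """Syntetos-Boylan approximation for intermittent/lumpy demand:
    Croston-style smoothing of non-zero demand size z and inter-demand
    interval p, with the bias correction factor (1 - alpha/2)."""
    z = p = None          # smoothed size and interval estimates
    periods_since = 1     # periods since the last non-zero demand
    for d in demands:
        if d > 0:
            if z is None:  # initialize on the first observed demand
                z, p = float(d), float(periods_since)
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (periods_since - p)
            periods_since = 1
        else:
            periods_since += 1
    if z is None:
        return 0.0
    return (1 - alpha / 2) * z / p

# Lumpy series: occasional demands separated by zero-demand intervals.
series = [0, 0, 5, 0, 0, 0, 3, 0, 4, 0, 0]
print(round(sba_forecast(series, alpha=0.2), 3))  # 1.362
```

The hybrid approach in the abstract replaces the smoothed occurrence part with an MLP's predicted demand occurrences while keeping a recursive estimator like this for the quantities.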
Abstract: The coloured Petri net (CPN) has been widely adopted in various areas of computer science, including protocol specification, performance evaluation, distributed systems, and coordination in multi-agent systems. It provides a graphical representation of a system and has a strong mathematical foundation for proving various properties. This paper proposes a novel representation of a coloured Petri net using an extension of logic programming called abductive logic programming (ALP), which is purely based on classical logic. Under such a representation, an implementation of a CPN can be directly obtained, in which every inference step can be treated as a kind of equivalence-preserving transformation. We describe how to implement a CPN under such a representation using common meta-programming techniques in Prolog. We call our framework CPN-LP and illustrate its application in modeling an intelligent agent.
Abstract: Due to memory leaks, valuable system memory is often
wasted and denied to other processes, thereby affecting
computational performance. If an application's memory usage
exceeds the virtual memory size, it can lead to a system crash. Current
memory leak detection techniques for clusters are reactive and
display the memory leak information only after the execution of the
process (they detect a memory leak only after it occurs).
This paper presents a Dynamic Memory Monitoring Agent
(DMMA) technique. The DMMA framework performs dynamic memory
leak detection, detecting leaks while an application is in the
execution phase. When a memory leak in any process in the cluster is
identified, DMMA informs the end users so that they
can take corrective actions, and it also submits the affected
process to a healthy node in the system, thus providing reliable service
to the user. DMMA maintains information about the memory
consumption of executing processes, and based on this information
and critical states, DMMA can improve the reliability and
effectiveness of cluster computing.
Abstract: In this paper, based on the work in [1], we give
a general model for acquiring knowledge. The model first focuses on
how and when the things involved in problems are made,
then describes the goals, the energy, and the time, to give an optimum
model for deciding how many related things should be involved.
Finally, we acquire knowledge from this model, which includes
the attributes, actions, and connections of the things involved at the
time they are born and during their lifetime. This model not
only improves AI theories, but also brings effectiveness
and accuracy to AI systems, because systems are given more
knowledge when reasoning or computing is used to produce
results.
Abstract: This paper presents the results of enhancing images from a left-right stereo pair in order to increase the resolution of a 3D representation of the scene generated from that same pair. A new neural network structure known as a Self Delaying Dynamic Network (SDN) has been used to perform the enhancement. The advantage of SDNs over existing techniques such as bicubic interpolation is their ability to cope with motion and noise effects. SDNs are used to generate two high-resolution images, one based on frames taken from the left view of the subject and one based on frames from the right. This new high-resolution stereo pair is then processed by a disparity map generator. The disparity map generated is compared to two other disparity maps generated from the same scene. The first is a map generated from an original high-resolution stereo pair, and the second is a map generated using a stereo pair that has been enhanced using bicubic interpolation. The maps generated using the SDN-enhanced pairs match the target maps more closely. The addition of extra noise to the input images is less problematic for the SDN system, which is still able to outperform bicubic interpolation.
Abstract: This paper takes the actual scene of the Aletheia
University campus – a Class 2 national monument and the first
educational institute in northern Taiwan – as an example, to present a
3D virtual navigation system which supports user positioning and a
pre-download mechanism. The proposed system was designed based
on the principle of the Voronoi diagram to divide the virtual scenes and
their multimedia information, combining outdoor GPS
positioning with indoor RFID location detection. When
users carrying mobile devices such as notebook computers, UMPCs,
or EeePCs walk around the actual indoor and outdoor
areas of the campus, the system automatically detects the users'
moving path and pre-downloads the needed data, so that users
have a smooth and seamless navigation experience without waiting.
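Deciding which scene's media to pre-download from a Voronoi partition reduces to a nearest-site query: the user's position lies in the cell of the closest scene reference point. A minimal sketch, with hypothetical site names and coordinates:

```python
import math

def nearest_scene(position, scene_sites):
    """Return the name of the scene whose reference point is closest to
    the user's position; that scene's Voronoi cell contains the user,
    so its multimedia data is the pre-download candidate."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(scene_sites, key=lambda name: dist(position, scene_sites[name]))

# Hypothetical campus scene reference points (planar coordinates).
sites = {"chapel": (0.0, 0.0), "museum": (10.0, 0.0), "garden": (5.0, 8.0)}
print(nearest_scene((6.0, 1.0), sites))  # museum
```

In practice the system would also prefetch the cells adjacent to the current one, since crossing a Voronoi boundary is exactly when the next scene's data is needed.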