Abstract: Despite advances in software testing, debugging remains a labor-intensive, time-consuming, and error-prone process. A candidate solution for enhancing the debugging process is to fuse it with the testing process. To achieve this integration, one possible approach is to categorize common software tests and errors, and then work toward fixing the errors through general solutions for each test/error pair. Our approach to this issue is based on Christopher Alexander's pattern and pattern-language concepts. The patterns in this language are grouped into three major sections and connect the three concepts of test, error, and debug. These patterns and their hierarchical relationships form a pattern language that introduces a solution for fixing software errors in a known testing context.
Finally, we introduce our framework ADE as a sample implementation supporting one pattern of the proposed language; it aims to automate the whole process of evolving software design via evolutionary methods.
Abstract: With the proliferation of the World Wide Web, the development of web-based technologies, and the growth in web content, the structure of a website becomes more complex, and web navigation becomes a critical issue for both web designers and users. In this paper we identify content and web pages as two important and influential factors in website navigation, and we frame the enhancement of website navigation as making useful changes to the link structure of the website based on these factors. We then suggest a new method that proposes such changes using a fuzzy approach to optimize the website architecture. Applying the proposed method to a real case, the website of the Iranian Civil Aviation Organization (CAO), we discuss the results of the novel approach in the final section.
Abstract: In this paper, we propose a method for detecting consistency violations between state machine diagrams and a sequence diagram defined in UML 2.0 using SMV. We extend a method that expresses these diagrams, as defined in UML 1.0, with Boolean formulas, so that it can express a sequence diagram with the combined fragments introduced in UML 2.0. This extension makes it possible to represent three types of combined fragment: alternative, option, and parallel. Experimental results confirmed that the proposed method detects consistency violations correctly with SMV.
Abstract: A proxy signature enables a proxy signer to sign messages on behalf of the original signer. It is very useful when the original signer (e.g. the president of a company) is not available to sign a specific document. If the original signer cannot forge valid proxy signatures by impersonating the proxy signer, the scheme is robust in a virtual environment; the original signer then cannot shift any illegal action she initiates onto the proxy signer. In this paper, we propose a new proxy signature scheme that prevents the original signer from impersonating the proxy signer to sign messages. The proposed scheme is based on the regular ElGamal signature. In addition, fair privacy for the proxy signer is maintained: the privacy of the proxy signer is preserved, and it can be revealed when necessary.
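As background for the scheme above, the following is a minimal sketch of the regular ElGamal signature it builds on, using toy parameters (p = 23, generator g = 5) and an integer standing in for a hashed message; it is not the paper's proxy scheme, and real deployments use large primes and a cryptographic hash.

```python
# Toy ElGamal signature over a small prime group (illustration only).
p, g = 23, 5            # small prime and a generator of Z_23^*
x = 6                   # private key
y = pow(g, x, p)        # public key y = g^x mod p

def sign(h, k):
    """Sign digest h with per-message secret k, where gcd(k, p-1) == 1."""
    r = pow(g, k, p)
    s = (pow(k, -1, p - 1) * (h - x * r)) % (p - 1)
    return r, s

def verify(h, r, s):
    """Accept iff g^h == y^r * r^s (mod p)."""
    return pow(g, h, p) == (pow(y, r, p) * pow(r, s, p)) % p

h = 7                    # stand-in for a hashed message
r, s = sign(h, k=3)
assert verify(h, r, s)
assert not verify(h + 1, r, s)   # a different digest must not verify
```

The correctness follows from s = k^-1 (h - x r) mod (p-1), which gives g^h = g^(xr) g^(ks) = y^r r^s mod p.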
Abstract: The major goal in defining and examining game
scenarios is to find good strategies as solutions to the game. A
plausible solution is a recommendation to the players on how to play
the game, which is represented as strategies guided by the various
choices available to the players. These choices invariably compel the
players (decision makers) to execute an action following some
conscious tactics. In this paper, we proposed a refinement-based
heuristic as a machine learning technique for human-like decision
making in playing Ayo game. The result showed that our machine
learning technique is more adaptable and more responsive in making
decision than human intelligence. The technique has the advantage
that a search is astutely conducted in a shallow horizon game tree.
Our simulation was tested against Awale shareware and an appealing
result was obtained.
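Searching a shallow horizon of a game tree can be sketched with a generic depth-limited minimax; this is only an illustrative stand-in on a toy take-away game, not the authors' refinement-based Ayo heuristic or evaluator.

```python
def moves(n):
    """Legal moves in a toy game: remove 1 or 2 stones from a pile of n."""
    return [m for m in (1, 2) if m <= n]

def minimax(n, depth, maximizing):
    """Depth-limited minimax: explores only a shallow horizon of the game
    tree and returns (value, best_move) for the player to move."""
    if not moves(n):                      # no stones left: player to move lost
        return (-1 if maximizing else 1), None
    if depth == 0:                        # horizon reached: neutral heuristic value
        return 0, None
    best_val = float('-inf') if maximizing else float('inf')
    best_move = None
    for m in moves(n):
        val, _ = minimax(n - m, depth - 1, not maximizing)
        if (maximizing and val > best_val) or (not maximizing and val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

# From a pile of 4, taking 1 leaves the opponent a losing pile of 3.
value, move = minimax(4, depth=6, maximizing=True)
```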
Abstract: Previously, harmonic parameters (HPs) have been selected as features extracted from EEG signals for automatic sleep scoring. In previous studies, however, only one set of HPs was used, extracted directly from the whole epoch of the EEG signal.
In this study, two different transformations were applied to extract HPs from EEG signals: the Hilbert-Huang transform (HHT) and the wavelet transform (WT). EEG signals were decomposed by the two transformations, and features were extracted from the different components. Twelve parameters (four sets of HPs) were extracted, some of which are highly diverse among the different stages. Afterward, the HPs from the two transformations were used to build a rough sleep stage scoring model using an SVM classifier. The performance of this model is about 78% using the features obtained by our proposed extractions. Our results suggest that these features may be useful for automatic sleep stage scoring.
Abstract: This paper discusses the design of a nonlinear observer by a formal linearization method using Chebyshev interpolation, in order to simplify the synthesis of a nonlinear observer and to improve the precision of linearization. A dynamic nonlinear system is linearized with respect to a linearization function, and a measurement equation is transformed into an augmented linear one by the formal linearization method based on Chebyshev interpolation. Linear estimation theory is then applied to the linearized system, and a nonlinear observer is derived. To show the effectiveness of this observer design, numerical experiments are presented; they indicate that the design achieves remarkable performance for nonlinear systems.
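The Chebyshev interpolation underlying the formal linearization can be sketched as follows: sample a nonlinear function at the Chebyshev nodes and interpolate. The function sin(x) and the node count are illustrative assumptions, not the paper's observer equations.

```python
import math

def cheb_nodes(n):
    """Chebyshev nodes on [-1, 1]: the n roots of the Chebyshev polynomial T_n."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        w = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                w *= (x - xm) / (xj - xm)
        total += yj * w
    return total

# Approximate a nonlinear measurement function on [-1, 1].
f = math.sin
xs = cheb_nodes(9)
ys = [f(x) for x in xs]
err = max(abs(lagrange(xs, ys, t / 100) - f(t / 100)) for t in range(-100, 101))
```

Because the nodes cluster near the interval ends, the interpolation error stays small and nearly uniform; here `err` is far below 1e-6 with only nine nodes.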
Abstract: Data is available in abundance in any business organization. It includes records for finance, maintenance, inventory, progress reports, etc. As time progresses, the data keeps accumulating, and the challenge is to extract information from this data bank. Knowledge discovery from these large and complex databases is a key problem of this era. Data mining and machine learning techniques are needed that can scale to the size of the problems and can be customized to the business application. To develop accurate information for a particular problem, business analysts need multidimensional models that give reliable information so that they can make the right decision for that problem. If the multidimensional model does not possess advanced features, accuracy cannot be expected. The present work involves the development of a multidimensional data model incorporating such advanced features. The criterion of computation is based on data precision and on including a slowly changing time dimension. The final results are displayed in graphical form.
Abstract: Power consumption in data centers is rising rapidly because the number of data centers is increasing and their scale is becoming larger. Reducing power consumption in data centers is therefore a key research topic. The peak power of a typical server is around 250 watts. When a server is idle, it continues to draw around 60% of the power consumed when in use, though vendors are putting effort into reducing this "idle" power load. Servers tend to work at only around a 5% to 20% utilization rate, partly because of response-time concerns, and on average about 10% of the servers in data centers are unused. For these reasons, we propose a dynamic power management system to reduce power consumption in a green data center. Experimental results show that power consumption at idle time is reduced by about 55%.
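The figures quoted above (250 W peak, roughly 60% of that when idle) imply a simple estimate of the saving available from suspending idle servers. The cluster sizes and the suspend-to-zero assumption below are illustrative only; the paper's ~55% figure comes from its own dynamic policy, not this arithmetic.

```python
PEAK_W = 250.0
IDLE_FRACTION = 0.60                  # idle draw as a fraction of peak (from the text)
idle_w = PEAK_W * IDLE_FRACTION      # ~150 W per idle-but-powered server

def cluster_power(n_servers, n_busy, suspend_idle):
    """Total draw: busy servers at peak; idle servers at idle draw,
    or (assumed) zero when the manager suspends them."""
    idle = n_servers - n_busy
    return n_busy * PEAK_W + (0 if suspend_idle else idle * idle_w)

# Hypothetical cluster: 100 servers, 10 busy, the rest idling.
baseline = cluster_power(100, 10, suspend_idle=False)
managed = cluster_power(100, 10, suspend_idle=True)
saving = 1 - managed / baseline      # fraction of power saved by suspension
```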
Abstract: Automatic extraction of event information from social text streams (emails, social network sites, blogs, etc.) is a vital requirement for many applications, such as event planning and management systems and security applications. The key information components needed from event-related text are the event title, location, participants, date, and time. Emails are quite distinct from other social text streams in layout, format, and conversation style, and they are the most commonly used communication channel for broadcasting and planning events; we have therefore chosen emails as our dataset. In our work, we employ two statistical NLP methods, Finite State Machines (FSM) and the Hidden Markov Model (HMM), for the extraction of event-related contextual information. An application has been developed that compares the two methods on the event extraction task. It comprises two modules, one for each method, and works for both bulk and direct user input. The results are evaluated using precision, recall, and F-score. Experiments show that both methods achieve high performance and accuracy; however, HMM performed better on title extraction, while FSM proved better for venue, date, and time.
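An HMM tagger of the kind used here labels each token with a field tag via the Viterbi algorithm. The following is a generic sketch with invented tags and hand-set probabilities (TITLE/DATE/O, toy emission tables), not the paper's trained model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for obs under a first-order HMM."""
    # V[t][s] = (best probability of ending in s at step t, path so far)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-6), [s]) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, path = max(
                (V[-1][ps][0] * trans_p[ps][s] * emit_p[s].get(o, 1e-6),
                 V[-1][ps][1])
                for ps in states)
            row[s] = (prob, path + [s])
        V.append(row)
    return max(V[-1].values())[1]

# Hypothetical toy model: tag tokens as part of a TITLE, a DATE, or other (O).
states = ["TITLE", "DATE", "O"]
start_p = {"TITLE": 0.5, "DATE": 0.2, "O": 0.3}
trans_p = {s: {"TITLE": 0.3, "DATE": 0.3, "O": 0.4} for s in states}
emit_p = {
    "TITLE": {"meeting": 0.6, "review": 0.3},
    "DATE": {"friday": 0.7, "3pm": 0.2},
    "O": {"on": 0.5, "the": 0.4},
}
tags = viterbi(["meeting", "on", "friday"], states, start_p, trans_p, emit_p)
```

In a real system the probability tables are estimated from annotated email data; unseen words get the small floor probability used in `get`.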
Abstract: Unified Modeling Language (UML) extensions for real-time embedded systems (RTES) co-design are attracting growing interest from a great number of industrial and research communities. The extension mechanism is provided by UML profiles for RTES, which aim to offer an easily understood method of system design for non-experts. On the other hand, one of the key items of co-design methods is Hardware/Software partitioning and task scheduling: it is mandatory to define where and when tasks are implemented and run. Unfortunately, the main goals of co-design are not included in the usual practice of UML profiles, so there is a need for mapping the models used onto an execution platform for both schedulability testing and HW/SW partitioning. In the present work, schedulability testing and design space exploration are performed at an early stage. The proposed approach adopts Model Driven Engineering (MDE). It starts from a UML specification annotated with the recent profile for the Modeling and Analysis of Real-Time Embedded systems (MARTE). Following a refinement strategy, transformation rules allow finding a feasible schedule that satisfies timing constraints and defining where tasks will be implemented. The overall approach is demonstrated on the design of a football-player robot application.
Abstract: Insufficient Quality of Service (QoS) of Voice over Internet Protocol (VoIP) is a growing concern that has led to the need for research and study. In this paper we investigate the performance of VoIP and the impact of resource limitations in Access Networks. The impact of Access Networks on VoIP performance is particularly important in regions where Internet resources are limited and the cost of improving these resources is prohibitive. It is clear that perceived VoIP performance, as measured by mean opinion score [2] in experiments where subjects are asked to rate communication quality, is determined by end-to-end delay on the communication path, delay variation, packet loss, echo, the coding algorithm in use, and noise. These performance indicators can be measured, and their effect in the Access Network can be estimated. This paper investigates the contribution of congestion in the Access Network to the overall performance of VoIP services in the presence of other substantial uses of the Internet, and ways in which Access Networks can be designed to improve VoIP performance. Methods for analyzing the impact of the Access Network on VoIP performance are surveyed and reviewed. The paper also considers some approaches for improving VoIP performance by carrying out experiments using Network Simulator version 2 (NS2), with a view to gaining a better understanding of the design of Access Networks.
Abstract: Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
Abstract: In recent times, the problem of Unsolicited Bulk Email (UBE), commonly known as spam email, has grown at a tremendous rate. We present a survey-based analysis of classifications of UBE in various research works. There are many research instances of classification between spam and non-spam emails, but very few are available for classification of spam emails per se. This paper does not intend to assert that some UBE classification is better than the others, nor does it propose any new classification; rather, it bemoans the lack of harmony on the number and definition of categories proposed by different researchers. The paper also elaborates on factors such as the intent of the spammer, the content of UBE, and the ambiguity among the different categories proposed in related research works on classifications of UBE.
Abstract: Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a larger mosaic image from a set of partial images of the desired view. The paper presents a solution to one of the problems of image seascape formation, where some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts into grayscale and fuse them to get a grayscale panorama. But in a multihued world, a colored seascape will always be preferred. This can be achieved by picking colors from the color parts and injecting them into the grayscale parts of the seascape. So first the grayscale image parts are colored with the help of the color image parts, and then these parts are fused to construct the seascape image.
The problem of coloring grayscale images has no exact solution. In the proposed technique of panoramic view generation, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. In this technique, a color palette is prepared using pixel windows of certain sizes taken from the color image parts. The grayscale image part is then divided into pixel windows of the same sizes. For every window of the grayscale image part, the palette is searched and equivalent color values are found, which are used to color the grayscale window. For palette preparation we use the RGB color space and Kekre's LUV color space; Kekre's LUV color space gives better coloring quality. The search time through the color palette is improved over exhaustive search using Kekre's fast search technique.
After coloring the grayscale image pieces, the next job is fusion of all these pieces to obtain the panoramic view. For similarity estimation between partial images, the correlation coefficient is used.
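The palette lookup step can be sketched as follows, using plain RGB, a standard luminance formula, and exhaustive nearest-window search; the paper itself additionally uses Kekre's LUV color space and Kekre's fast search, which are not reproduced here. Window sizes and pixel values are illustrative.

```python
def luminance(rgb):
    """Grayscale value of an RGB pixel (ITU-R BT.601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def build_palette(color_windows):
    """Pair each color pixel window with its grayscale signature."""
    return [([luminance(p) for p in w], w) for w in color_windows]

def colorize(gray_window, palette):
    """Exhaustive search: return the color window whose grayscale
    signature is closest (squared error) to the grayscale window."""
    best = min(palette,
               key=lambda e: sum((a - b) ** 2 for a, b in zip(e[0], gray_window)))
    return best[1]

# Toy 2-pixel "windows": one reddish, one bluish.
palette = build_palette([[(200, 30, 30), (180, 20, 20)],
                         [(20, 30, 200), (10, 20, 180)]])
gray = [luminance((205, 25, 25)), luminance((175, 25, 25))]
picked = colorize(gray, palette)
```

A grayscale window that originated from reddish pixels matches the reddish palette entry, so its color values are reused for that window.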
Abstract: Leo Breiman's Random Forests (RF) is a recent development in tree-based classifiers and has quickly proven to be one of the most important algorithms in the machine learning literature. It has shown robust and improved classification results on standard data sets. Ensemble learning algorithms such as AdaBoost and Bagging have been in active research and have shown improvements in classification results for several benchmark data sets, mainly with decision trees as their base classifiers. In this paper we experiment with applying these meta-learning techniques to random forests. We study the behavior of ensembles of random forests on standard data sets from the UCI repository, compare the original random forest algorithm with its ensemble counterparts, and discuss the results.
Abstract: SQL injection on web applications is a very popular kind of attack. There are mechanisms, such as intrusion detection systems, for detecting this attack. These strategies often rely on techniques implemented at the higher layers of the application but do not consider the low level of system calls. The problem with only considering the high-level perspective is that an attacker can circumvent the detection tools using techniques such as URL encoding. One technique currently used for detecting low-level attacks on privileged processes is the tracing of system calls. System calls act as a single gate to the Operating System (OS) kernel; they allow catching the critical data at an appropriate level of detail. Our basic assumption is that any type of application, be it a system service, a utility program, or a web application, "speaks" the language of system calls when having a conversation with the OS kernel. At this level we can see the actual attack while it is happening. We conduct an experiment to demonstrate the suitability of system call analysis for detecting SQL injection, and we are able to detect the attack. We therefore conclude that system calls are not only powerful in detecting low-level attacks but also enable us to detect high-level attacks such as SQL injection.
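The idea of inspecting syscall buffers can be sketched as a scan of strace-style trace lines for injection signatures. The regular expressions and trace lines below are invented for illustration and are not the paper's detector; note that the URL-encoded request evades the signature, while the decoded SQL statement written to the database socket does not.

```python
import re

# Toy signatures suggesting SQL injection payloads inside syscall buffers.
SQLI = re.compile(r"(?i)('\s*or\s*'1'\s*=\s*'1|union\s+select|;\s*drop\s+table)")

def suspicious(trace_lines):
    """Return strace-style lines whose read/write buffers match a signature."""
    hits = []
    for line in trace_lines:
        m = re.match(r'(read|write|recvfrom|sendto)\(\d+, "(.*)"', line)
        if m and SQLI.search(m.group(2)):
            hits.append(line)
    return hits

trace = [
    'read(5, "GET /item?id=42 HTTP/1.1", 8192) = 24',
    # URL-encoded payload: slips past the naive signature at this layer...
    'read(5, "GET /item?id=42%27%20OR%20%271%27=%271 HTTP/1.1", 8192) = 47',
    # ...but the decoded query sent to the database is caught.
    'write(7, "SELECT * FROM items WHERE id = \'42\' OR \'1\'=\'1\'", 47) = 47',
]
hits = suspicious(trace)
```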
Abstract: A genetic algorithm (GA) based feature subset selection algorithm is proposed in which the correlation structure of the features is exploited. The subset of features is validated according to classification performance. Features derived from the continuous wavelet transform are potentially strongly correlated, and GAs that do not take the correlation structure of the features into account are inefficient. The proposed algorithm first forms clusters of correlated features and searches for a good candidate set of clusters; a search within the clusters is then performed. Different simulations of the algorithm on a real-case data set with strong correlations between features show increased classification performance. A comparison is made with a standard GA that does not use the correlation structure.
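The clustering stage can be sketched as a greedy grouping of features by pairwise correlation; a GA would then search over bitstrings selecting clusters rather than individual features. The threshold, greedy rule, and toy data below are assumptions for illustration, not the paper's algorithm.

```python
import random

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def cluster_features(columns, threshold=0.9):
    """Greedy clustering: a feature joins the first cluster whose
    representative it correlates with above the threshold."""
    clusters = []
    for i, col in enumerate(columns):
        for c in clusters:
            if abs(pearson(columns[c[0]], col)) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy data: features 0 and 1 are near-duplicates; feature 2 is independent.
random.seed(0)
f0 = [random.random() for _ in range(50)]
f1 = [x + random.gauss(0, 0.01) for x in f0]
f2 = [random.random() for _ in range(50)]
clusters = cluster_features([f0, f1, f2])
```

Searching over clusters instead of raw features shrinks the GA's search space whenever correlated features collapse into one cluster.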
Abstract: In order to implement flexibility as well as survivability over a passive optical network (PON), a new automatic random fault-recovery mechanism with an array-waveguide-grating-based (AWG-based) optical switch (OSW) is presented. First, a combined wavelength-division-multiplexing and optical code-division multiple-access (WDM/OCDMA) scheme is configured to meet the varied geographical-location requirements between the optical network unit (ONU) and the optical line terminal (OLT). The AWG-based optical switch is designed as a central star-mesh topology to avoid or reduce duplicated redundant elements such as fiber and transceivers. Hence, by a simple monitoring and routing switch algorithm, random fault-recovery capacity is achieved over the bi-directional (up/downstream) WDM/OCDMA scheme. When a failure of a distribution fiber (DF) takes place, or the bit-error rate (BER) rises above the 10^-9 requirement, the primary/slave AWG-based OSWs are adjusted and controlled dynamically to restore the affected ONU groups via the other working DFs immediately.
Abstract: The work reported in this paper is motivated by the need to apply autonomic computing concepts to parallel computing systems. Advancing on prior work based on intelligent cores [36], a swarm-array computing approach, this paper focuses on 'intelligent agents', another swarm-array computing approach, in which the task to be executed on a parallel computing core is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and is seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-ware objectives of autonomic computing. The feasibility of the proposed swarm-array computing approach is validated on a multi-agent simulator.