Abstract: In this paper, the secure BioSemantic Scheme is
presented to bridge biological/biomedical research problems and
computational solutions via semantic computing. Due to the diversity
of problems across research fields, the semantic capability
description language (SCDL) plays an important role as a common
language and generic form for problem formalization. SCDL is
expected to be essential for future semantic and logical computing in
the BioSemantic field. We show several examples of biomedical
problems in this paper. Moreover, in the coming age of cloud
computing, security is considered a crucial issue, and we present a
practical scheme to cope with this problem.
Abstract: Workflow scheduling is an important part of cloud
computing: based on different criteria, it determines cost, execution
time, and performance. A cloud workflow system is a platform
service facilitating the automation of distributed applications on
new cloud infrastructure. An aspect which differentiates a cloud
workflow system from others is its market-oriented business model, an
innovation which challenges conventional workflow scheduling
strategies. The Time and Cost optimization algorithm for scheduling
Hybrid Clouds (TCHC), which decides which resources should be
chartered from public providers, is combined with a new De-De
algorithm ensuring that every instance of single and multiple
workflows runs without deadlocks. To this end, two new concepts,
the De-De Dodging Algorithm and the Priority Based Decisive
Algorithm, are combined with conventional deadlock avoidance
techniques into one algorithm that maximizes active (not just
allocated) resource use and reduces makespan.
Abstract: A Mobile Ad hoc Network (MANET) is a set of self-governing
nodes which communicate through wireless links. The dynamic topology
of MANETs makes routing a challenging task. Various routing
protocols exist, but due to fundamental characteristics such as the
open medium, changing topology, distributed collaboration and
constrained capability, these protocols are prone to various types of
security attacks. The black hole attack is one of them. In this attack, a
malicious node presents itself as having the shortest path to the
destination, even though that path does not exist. In this paper, we aim
to develop a routing protocol for the detection and prevention of black
hole attacks by modifying the AODV routing protocol. This protocol is
able to detect and prevent the black hole attack. Simulation is done
using NS-2, which shows the improvement in network performance.
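One widely studied screening heuristic for black hole attacks in AODV, sketched here purely for illustration (it is not necessarily the modification this paper proposes, and the threshold value is an assumption), is to reject Route Replies whose destination sequence number jumps implausibly far ahead of the locally cached value, a typical signature of a node advertising a fabricated route:

```python
# Illustrative sketch, not the paper's actual protocol: a modified AODV node
# screens incoming Route Replies (RREPs) by comparing the advertised
# destination sequence number with its cached value. A huge jump suggests a
# black hole node fabricating a "fresh" route. The threshold is hypothetical.

def is_suspicious_rrep(rrep_dst_seq: int, cached_dst_seq: int,
                       threshold: int = 100) -> bool:
    """Flag a RREP whose sequence-number jump exceeds the threshold."""
    return rrep_dst_seq - cached_dst_seq > threshold

# Example: a cached sequence number of 10 against an advertised 5000
# indicates a likely fabricated route; 12 against 10 is a normal update.
print(is_suspicious_rrep(5000, 10))  # True
print(is_suspicious_rrep(12, 10))    # False
```

Real schemes refine this with per-neighbor trust or by verifying the route end-to-end before use; the check above only captures the core idea.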
Abstract: Establishing secure communication among the participants
of an Internet conference is very important. Before starting the
conference, all the participants establish a common conference key to
encrypt/decrypt communicated messages, enabling them to exchange
messages securely. Nevertheless, malicious participants in the
conference may try to upset the key generation process, causing other
legitimate participants to obtain different conference keys. In this
article, we propose an improved conference key agreement with
fault-tolerant capability. The proposed scheme can filter out malicious
participants at the beginning of the conference to ensure that all
participants obtain the same conference key. Compared with other
schemes, ours is more secure and efficient.
Abstract: Feature selection has been used in many fields such as
classification, data mining and object recognition, and has proven
effective for removing irrelevant and redundant features from the
original dataset. In this paper, we present a new design for a
distributed intrusion detection system using a combined feature
selection model based on the bees algorithm and decision trees. The
bees algorithm is used as the search strategy to find the optimal
subset of features, whereas a decision tree is used to judge the
selected features. Both the produced features and the generated rules
are used by a Decision Making Mobile Agent to decide whether or not
there is an attack in the network. The Decision Making Mobile Agent
migrates through the network, moving from node to node; if it finds
an attack on one of the nodes, it alerts the user through the User
Interface Agent or takes action through the Action Mobile Agent. The
KDD Cup 99 dataset is used to test the effectiveness of the proposed
system. The results show that even if only four features are used, the
proposed system performs better than when all 41 features are used.
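The search-plus-judge structure described above is the classic wrapper approach to feature selection. The toy sketch below illustrates that structure only: exhaustive search stands in for the bees algorithm, a hard-coded scoring table stands in for the decision tree, and the KDD-style feature names and scores are invented for the example.

```python
# Toy wrapper-style feature selection: a search strategy proposes feature
# subsets and a classifier-based score judges them. Exhaustive search and a
# stub scorer are placeholders for the paper's bees algorithm and decision
# tree; the feature names and scores below are purely illustrative.
from itertools import combinations

def wrapper_select(features, score_fn, subset_size):
    """Return the best-scoring feature subset of the given size."""
    best_subset, best_score = None, float("-inf")
    for subset in combinations(features, subset_size):
        s = score_fn(subset)
        if s > best_score:
            best_subset, best_score = subset, s
    return best_subset, best_score

# Hypothetical classifier accuracies per subset (keys stored sorted).
scores = {("duration", "src_bytes"): 0.95,
          ("duration", "flag"): 0.80,
          ("flag", "src_bytes"): 0.85}
score = lambda s: scores.get(tuple(sorted(s)), 0.5)

print(wrapper_select(["duration", "src_bytes", "flag"], score, 2))
# ('duration', 'src_bytes'), 0.95
```

In the paper's setting the bees algorithm explores the subset space far more efficiently than exhaustive enumeration, which is infeasible for 41 features.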
Abstract: To help the expert validate association rules extracted
from data, several quality measures have been proposed in the
literature. We distinguish two categories: objective and subjective
measures. The first depends on a fixed threshold and on the quality
of the data from which the rules are extracted. The second consists
of providing the expert with tools to explore and visualize rules
during the evaluation step. However, the number of extracted rules
to validate remains high, so manually validating the rules is very
hard. To solve this problem, we propose, in this paper, a
semi-automatic method to assist the expert during association rule
validation. Our method uses rule-based classification as follows:
(i) we transform association rules into classification rules
(classifiers); (ii) we use the generated classifiers for data
classification; (iii) we visualize association rules with their
classification quality to give the expert an overview and to assist
him during the validation process.
Abstract: Because current wireless communication requires high
reliability in a limited bandwidth environment, this paper proposes
a variable modulation scheme based on a codebook. The variable
modulation scheme adjusts transmission power using the codebook in
accordance with the channel state. Moreover, when the codebook is
composed of more bits, the proposed scheme improves reliability
further. The simulation results show that the proposed scheme offers
better reliability than the conventional scheme.
Abstract: Localization of nodes is one of the key issues of
Wireless Sensor Networks (WSNs) and has gained wide attention in
recent years. The existing localization techniques can be generally
categorized into two types: range-based and range-free. Compared
with range-based schemes, range-free schemes are more cost-effective
because no additional ranging devices are needed. As a result, we
focus our research on range-free schemes. In this paper we study
three types of range-free localization algorithms to compare the
localization error and energy consumption of each. The centroid
algorithm requires that a normal node have at least three neighboring
anchors, while the DV-Hop algorithm does not have this requirement.
The third algorithm studied is the amorphous algorithm, which is
similar to DV-Hop; its idea is to calculate the hop distance between
two nodes instead of the linear distance between them. The simulation
results show that the localization accuracy of the amorphous
algorithm is higher than that of the other algorithms, while its
energy consumption does not increase significantly.
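The centroid algorithm mentioned above is the simplest of the three range-free schemes, so it is easy to sketch: a normal node estimates its own position as the average of the coordinates of the anchors it can hear. The anchor coordinates below are made up for illustration.

```python
# Minimal sketch of range-free centroid localization: a node's position is
# estimated as the centroid of all neighboring anchors' coordinates, with no
# ranging hardware needed. Anchor positions here are illustrative only.

def centroid_localize(anchors):
    """Estimate a node's position as the centroid of its neighbor anchors.

    Requires at least three anchors, as noted in the abstract.
    """
    if len(anchors) < 3:
        raise ValueError("centroid localization needs at least three anchors")
    n = len(anchors)
    x = sum(a[0] for a in anchors) / n
    y = sum(a[1] for a in anchors) / n
    return (x, y)

# Example: three anchors at the corners of a right triangle.
print(centroid_localize([(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]))
# (10.0, 10.0)
```

DV-Hop and the amorphous algorithm drop the three-anchor requirement by propagating per-hop distance estimates through the network instead.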
Abstract: In this paper, a new SMC (Sliding Mode Control)
method with MPC (Model Predictive Control) integral action for the
slip suppression of EVs (Electric Vehicles) under braking is proposed.
The proposed method introduces an integral term alongside the
standard SMC gain, where the integral gain is optimized for each
control period by the MPC algorithm. The aim of this method is to
improve the safety and stability of EVs under braking by controlling
the wheel slip ratio. Numerical simulation results are also included
to demonstrate the effectiveness of the method.
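The controlled quantity, the wheel slip ratio under braking, has a standard definition, given here for reference with conventional symbols (these are not taken from the paper itself):

```latex
% Standard wheel slip ratio under braking (v > \omega R):
\lambda = \frac{v - \omega R}{v}
% where v is the vehicle body speed, \omega the wheel angular velocity,
% and R the effective wheel radius. \lambda = 0 corresponds to free
% rolling and \lambda = 1 to a fully locked wheel; slip suppression
% keeps \lambda near the value that maximizes tire-road friction.
```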
Abstract: In this article, we deal with a variant of the classical
course timetabling problem that has a practical application in many
areas of education. In particular, in this paper we are interested in
high school remedial courses. The purpose of such courses is to
provide under-prepared students with the skills necessary to succeed
in their studies. A student might be under-prepared in an entire
course, or only in part of it. The limited availability of funds, as
well as the limited amount of time and teachers at their disposal,
often requires schools to choose which courses and/or which
teaching units to activate. Thus, schools need to model the training
offer and the related timetabling, with the goal of ensuring the
highest possible teaching quality, by meeting the above-mentioned
financial, time and resource constraints. Moreover, there are some
prerequisites between the teaching units that must be satisfied. We
first present a Mixed-Integer Programming (MIP) model to solve
this problem to optimality. However, the presence of many peculiar
constraints inevitably increases the complexity of the mathematical
model. Thus, a general-purpose solver can handle only small
instances, while solving real-life-sized instances of such a model
requires specific techniques
or heuristic approaches. For this purpose, we also propose a heuristic
approach, in which we make use of a fast constructive procedure
to obtain a feasible solution. To assess our exact and heuristic
approaches, we perform extensive computational experiments on both
real-life instances (obtained from a high school in Lecce, Italy) and
randomly generated instances. Our tests show that the MIP model is
never solved to optimality, with an average optimality gap of 57%.
On the other hand, the heuristic algorithm is much faster (in about
50% of the considered instances it converges within approximately
half the time limit) and in many cases improves on the objective
function value obtained by the MIP model. This improvement ranges
between 18% and 66%.
Abstract: Over the past few years, a lot of research has been
conducted to bring Automatic Speech Recognition (ASR) into various
areas of Air Traffic Control (ATC), such as air traffic control
simulation and training, monitoring live operators with the aim of
improving safety, measuring air traffic controller workload, and
analyzing large quantities of controller-pilot speech.
Due to the high accuracy requirements of the ATC context and its
unique challenges, automatic speech recognition has not been widely
adopted in this field. With the aim of providing a good starting
point for researchers who are interested in bringing automatic speech
recognition into ATC, this paper gives an overview of the possibilities
and challenges of applying automatic speech recognition in air traffic
control. To provide this overview, we present an updated literature
review of speech recognition technologies in general, as well as
specific approaches relevant to the ATC context. Based on this
literature review, criteria for selecting speech recognition approaches
for the ATC domain are presented, and remaining challenges and
possible solutions are discussed.
Abstract: Many software products offer a wide range and number of
features. This is called featuritis or creeping featurism, and it
tends to worsen with each release of the product. Featuritis
often adds unnecessary complexity to software, leading to longer
learning curves, confusing users, and degrading their experience.
We take a look at an emerging design approach, the so-called "What
You Get is What You Need" concept, which argues that products should
be very focused and simple, with minimalistic interfaces, in order to
help users carry out their tasks in distraction-free environments.
This is not as simple to implement as it might sound, as developers
need to cut down on features. Our contribution illustrates and
evaluates this design method through a novel distraction-free
diagramming tool named Delineato Pro for Mac OS X, in which the user
is confronted with an empty canvas when launching the software and
tools only show up when really needed.
Abstract: Speaker Identification (SI) is the task of establishing
the identity of an individual based on his/her voice characteristics. The SI
task is typically achieved by two-stage signal processing: training and
testing. The training process calculates speaker specific feature
parameters from the speech and generates speaker models
accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though
performance of speaker identification systems has improved due to
recent advances in speech processing techniques, there is still need of
improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstrum Coefficients
(MFCC) for feature extraction and a suitable combination of vector
quantization (VQ) and a Gaussian Mixture Model (GMM), together
with the Expectation Maximization (EM) algorithm, for speaker
modeling. The use of a Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and statistical modeling
of background noise in the pre-processing step of feature extraction
yields a better and more robust automatic speaker identification
system. In addition, using the Linde-Buzo-Gray (LBG) clustering
algorithm to initialize the GMM, whose underlying parameters are then
estimated in the EM step, improved the convergence rate and system
performance. The system also uses a relative index as a confidence
measure in case of contradiction between the GMM and VQ
identification results. Simulations carried out on the voxforge.org
speech database using MATLAB highlight the efficacy of the proposed
method compared to earlier work.
Abstract: One of the most critical decision points in the design of a
face recognition system is the choice of an appropriate face representation.
Effective feature descriptors are expected to convey sufficient, invariant
and non-redundant facial information. In this work we propose a set of
Hahn moments as a new approach for feature description. Hahn moments
have been widely used in image analysis due to their invariance,
non-redundancy, and ability to extract features both globally and locally.
To assess the applicability of Hahn moments to Face Recognition we
conduct two experiments on the Olivetti Research Laboratory (ORL)
database and University of Notre-Dame (UND) X1 biometric collection.
A fusion of the global features with features from local facial
regions is used as input for the conventional k-NN classifier. The
method reaches an accuracy of 93% correctly recognized subjects on
the ORL database and 94% on the UND database.
Abstract: In this paper, we present a new segmentation approach
for focal liver lesions in contrast enhanced ultrasound imaging. This
approach, based on a two-cluster Fuzzy C-Means methodology, uses
type-II fuzzy sets to handle uncertainty due to the image modality
(presence of speckle noise, low contrast, etc.) and to calculate the
optimum inter-cluster threshold. Fine boundaries are
detected by a local recursive merging of ambiguous pixels. The
method has been tested on a representative database. Compared to
both Otsu and type-I Fuzzy C-Means techniques, the proposed
method significantly reduces the segmentation errors.
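For reference, the two-cluster step underlying this approach can be sketched with plain type-I Fuzzy C-Means on 1-D pixel intensities (the paper's method extends this with type-II fuzzy sets to handle speckle-induced uncertainty, which is not shown here; the sample intensities are invented):

```python
# Compact type-I Fuzzy C-Means with two clusters on 1-D intensities, plus the
# midpoint inter-cluster threshold. The paper's type-II extension and the
# recursive boundary merging are omitted; data below is synthetic.

def fcm_two_clusters(intensities, m=2.0, iters=50):
    """Return the two cluster centers and the midpoint threshold."""
    c1, c2 = min(intensities), max(intensities)  # centers start at extremes
    for _ in range(iters):
        num1 = den1 = num2 = den2 = 0.0
        for x in intensities:
            d1 = abs(x - c1) or 1e-12
            d2 = abs(x - c2) or 1e-12
            # fuzzy membership of x in cluster 1 (two-cluster closed form)
            u1 = 1.0 / (1.0 + (d1 / d2) ** (2.0 / (m - 1.0)))
            u2 = 1.0 - u1
            num1 += (u1 ** m) * x; den1 += u1 ** m
            num2 += (u2 ** m) * x; den2 += u2 ** m
        c1, c2 = num1 / den1, num2 / den2  # update centers
    return c1, c2, (c1 + c2) / 2.0  # inter-cluster threshold

# Synthetic "background vs. lesion" intensities.
c1, c2, thr = fcm_two_clusters([10, 12, 11, 90, 95, 92])
print(c1 < thr < c2)  # True: threshold separates the two intensity groups
```

Pixels whose membership is ambiguous near this threshold are the ones the paper's recursive merging step resolves to recover fine lesion boundaries.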
Abstract: Computer aided diagnosis systems provide vital support
to radiologists in the detection of early signs of breast cancer
in mammogram images. Architectural distortions, masses and
microcalcifications are the major abnormalities. In this paper, a
computer aided diagnosis system is proposed for distinguishing
abnormal mammograms with architectural distortion from normal
mammograms. Four types of texture features, GLCM, GLRLM, fractal
and spectral, are extracted for the regions of suspicion. A support
vector machine is used as the classifier in this study. The proposed
system yielded an overall sensitivity of 96.47% and an accuracy of
96% on mammogram images collected from the Digital Database for
Screening Mammography.
Abstract: This survey paper presents the current state of model
comparison as it applies to Model-Driven Engineering. In Model-Driven
Engineering, calculating the difference between models is an
important and challenging task. Model differencing involves a number
of tasks, starting with identifying and matching the elements of the
models. In this paper, we discuss how model matching is accomplished,
the strategies and techniques used, and the types of models involved.
We also discuss future directions. We found that many of the latest
model comparison strategies are geared toward enabling metamodel-based
and similarity-based matching, and that model versioning is therefore
the most dominant application of model comparison. Recently, work on
comparison for versioning has begun to decline, giving way to other
applications. Ultimately, the tools vary widely in the amount of user
effort needed to perform model comparisons, as some require more
effort in exchange for greater generality and expressive power.
Abstract: A Mobile Ad Hoc Network (MANET) is a collection
of mobile devices forming a communication network without
infrastructure. MANETs are vulnerable to security threats due to the
network's limited security, dynamic topology, scalability and lack
of central management. Quality of Service (QoS) routing in such
networks is limited by network breakage caused by node mobility or
node energy depletion. The impact of node mobility on trust
establishment is considered, and its use to propagate trust through
the network is investigated in this paper. This work proposes an
enhanced Associativity Based Routing (ABR) protocol with Fuzzy-based
Trust (Fuzzy-ABR) for MANETs to improve QoS and to mitigate network
attacks.
Abstract: Access to relevant information adapted to a user's needs,
preferences and environment is a challenge in many running
applications, which has led to the emergence of context-aware
systems. To facilitate the development of this class of applications,
these applications need to share a common context metamodel. In this
article, we present our context metamodel, defined using the OMG
Meta Object Facility (MOF). This metamodel is based on the analysis
and synthesis of context concepts proposed in the literature.
Abstract: The growing number of computer viruses and the
detection of zero-day malware have long been a concern for security
researchers. Existing antivirus products (AVs) rely on detecting
virus signatures, which do not provide a full solution to the
problems associated with these viruses. The use of logic formulae to
model the behaviour of viruses is one of the most encouraging recent
developments in virus research, providing an alternative to classic
virus detection methods. In this paper, we present a comparative
study of different virus detection techniques, with their advantages
and drawbacks, and discuss which techniques are more effective at
detecting computer viruses.