Abstract: Web mining aims to discover and extract useful
information from the Web. Different users may have different search
goals when they submit queries to a search engine. Inferring and
analyzing user search goals can be very useful for improving the
relevance and user experience of search results. In this project,
we propose a novel approach to infer user search goals by analyzing
search engine query logs. First, feedback sessions are constructed
from user click-through logs; these efficiently reflect the
information needs of users. Second, we propose a preprocessing
technique to clean unnecessary data from the web log file (feedback
session). Third, we propose a technique to generate pseudo-documents
that represent feedback sessions for clustering. Finally, we apply
the k-medoids clustering algorithm to discover the different user
search goals and to provide more relevant results for a search query
based on the users' feedback sessions.
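The clustering step described above can be sketched as follows. This is a minimal k-medoids (PAM-style) implementation over toy pseudo-document vectors, not the authors' code; the feature vectors, number of clusters, and Euclidean distance are illustrative assumptions.

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Minimal k-medoids: alternate nearest-medoid assignment
    and in-cluster medoid update until the medoids stabilize."""
    rng = np.random.default_rng(seed)
    n = len(X)
    medoids = rng.choice(n, size=k, replace=False)
    # pairwise Euclidean distances between all points
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)  # nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # pick the member minimizing total distance to its cluster
                costs = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# Toy pseudo-document vectors: two well-separated "search goals"
X = np.array([[0.0, 0.1], [0.1, 0.0], [0.2, 0.1],
              [5.0, 5.1], [5.1, 4.9], [4.9, 5.0]])
labels, medoids = k_medoids(X, k=2)
```

In practice the pseudo-documents would be term vectors built from the feedback sessions, and each medoid would serve as a representative of one inferred search goal.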
Abstract: Communicating users' needs, goals and problems helps
designers and developers overcome challenges faced by end users.
Personas are used to represent end users’ needs. In our research,
creating personas allowed the following questions to be answered:
Who are the potential user groups? What do they want to achieve by
using the service? What are the problems that users face? What
should the service provide to them? To develop realistic personas, we
conducted a focus group discussion with undergraduate and graduate
students and also interviewed a university librarian. The personas
were created to help evaluate the Institutional Repository, which is
based on the DSpace system. The profiles helped to communicate
users' needs, abilities, tasks, and problems, and the task scenarios
used in the heuristic evaluation were based on these personas. Four
personas resulted of a focus group discussion with undergraduate and
graduate students and from interviewing a university librarian. We
then used these personas to create focused task-scenarios for a
heuristic evaluation on the system interface to ensure that it met
users' needs, goals, problems and desires. In this paper, we present
the process that we used to create the personas that led us to devise
the task scenarios used in the heuristic evaluation as a follow-up
study of the DSpace university repository.
Abstract: Laban Movement Analysis (LMA), developed in the
dance community over the past seventy years, is an effective method
for observing, describing, notating, and interpreting human
movement to enhance communication and expression in everyday
and professional life. Many applications that use motion capture data
could benefit significantly if Laban qualities were recognized
automatically. This paper presents an automated method for
recognizing Laban qualities from motion-capture skeletal recordings,
demonstrated on the output of Microsoft's Kinect V2 sensor.
Abstract: The growing number of computer viruses and the
detection of zero-day malware have long been a concern for security
researchers. Existing antivirus products (AVs) rely on detecting
virus signatures, which do not provide a full solution to the
problems associated with these viruses. The use of logic formulae to
model the behaviour of viruses is one of the most encouraging recent
developments in virus research, providing alternatives to classic
virus detection methods. In this paper, we present a comparative
study of different virus detection techniques, outlining the
advantages and drawbacks of each and discussing which techniques are
most effective at detecting computer viruses.
Abstract: Providing access to relevant information that is adapted
to users' needs, preferences and environment is a challenge in many
running applications, and has led to the emergence of context-aware
systems. To facilitate the development of this class of applications,
these applications must share a common context metamodel. In this
article, we present our context metamodel, defined using the OMG
Meta Object Facility (MOF). This metamodel is based on the analysis
and synthesis of context concepts proposed in the literature.
Abstract: A Mobile Ad Hoc Network (MANET) is a collection
of mobile devices forming a communication network without
infrastructure. MANETs are vulnerable to security threats due to
their limited security, dynamic topology, scalability issues and the
lack of central management. Quality of Service (QoS) routing in such
networks is limited by network breakage caused by node mobility or
node energy depletion. The impact of node mobility on trust
establishment is considered and its use to propagate trust through a
network is investigated in this paper. This work proposes an
enhanced Associativity Based Routing (ABR) with Fuzzy based
Trust (Fuzzy- ABR) routing protocol for MANET to improve QoS
and to mitigate network attacks.
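A fuzzy trust evaluation of the kind the protocol relies on can be sketched as follows. The input variables (packet delivery ratio and mobility), the triangular membership functions, and the rule base are illustrative assumptions, not the Fuzzy-ABR design itself.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_trust(delivery_ratio, mobility):
    """Weighted (Sugeno-style) trust score from two fuzzified inputs.

    Illustrative rules: high delivery and low mobility yield high
    trust; low delivery or high mobility lowers trust."""
    high_dr = tri(delivery_ratio, 0.4, 1.0, 1.6)  # peak at ratio 1.0
    low_mob = tri(mobility, -0.6, 0.0, 0.6)       # peak at zero mobility
    low_dr = 1.0 - high_dr
    high_mob = 1.0 - low_mob
    # rule firing strengths mapped to crisp trust levels in [0, 1]
    rules = [(min(high_dr, low_mob), 1.0),
             (min(high_dr, high_mob), 0.5),
             (min(low_dr, low_mob), 0.3),
             (min(low_dr, high_mob), 0.0)]
    w = sum(r for r, _ in rules)
    return sum(r * t for r, t in rules) / w if w else 0.0

good = fuzzy_trust(delivery_ratio=0.95, mobility=0.1)  # stable neighbor
bad = fuzzy_trust(delivery_ratio=0.30, mobility=0.9)   # unreliable neighbor
```

A route selection metric could then combine such per-neighbor trust scores with ABR's associativity ticks.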
Abstract: This survey paper describes the current state of model
comparison as it applies to Model-Driven Engineering. In Model-Driven
Engineering, computing the difference between models is an important
and challenging task. Model differencing involves a number of tasks,
starting with identifying and matching the elements of the models. In
this paper, we discuss how model matching is accomplished, along with
the strategies, techniques and model types involved, and we discuss
future directions. We found that many of the latest model comparison
strategies are geared toward enabling metamodel-based and
similarity-based matching, and that model versioning is the most
dominant application of model comparison. Recently, work on
comparison for versioning has begun to decline, giving way to other
applications. Finally, there is wide variation among the tools in the
amount of user effort needed to perform model comparisons, as some
require more effort in exchange for greater generality and expressive
power.
Abstract: The teaching of computer programming to beginners
has generally been considered a difficult and challenging task.
Several methodologies and research tools have been developed,
however, the difficulty of teaching still remains. Our work integrates
the state of the art in teaching programming with game software and
further provides metrics for the evaluation of student performance in
a collaborative activity of playing games. This paper presents a
multi-agent system architecture to be incorporated into educational
collaborative game software for teaching programming, which monitors,
evaluates and encourages collaboration among the participants. A
literature review covers the concepts of Collaborative Learning,
multi-agent systems, collaborative games and techniques for teaching
programming using these concepts simultaneously.
Abstract: This work is on decision tree-based classification for
the disbursement of scholarships. A tree-based data mining
classification technique is used in order to determine the generic
rules for disbursing the scholarship. Based on the rules defined from
the tree, the system determines the class (status) to which an
applicant belongs: Granted or Not Granted. Applicants in the Granted
class are awarded the scholarship, while those in the Not Granted
class are unsuccessful. An algorithm that classifies applicants based
on the rules from the tree-based classification was also developed.
Tree-based classification was adopted for its efficiency,
effectiveness, and ease of comprehension. The system was tested with
data from the National Information Technology Development Agency
(NITDA), Abuja, a parastatal of the Federal Ministry of Communication
Technology mandated to develop and regulate information technology in
Nigeria. The system was found to work according to specification. It
is therefore recommended for all scholarship-disbursing organizations.
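A rule set read off a trained decision tree, as described above, can be sketched as a chain of conditions. The attributes and thresholds below are hypothetical, chosen only to illustrate the shape of such rules; they are not NITDA's actual criteria.

```python
def classify_applicant(cgpa, household_income, is_citizen):
    """Classify an applicant as Granted / Not Granted using rules
    derived from a decision tree (hypothetical attributes and
    thresholds, for illustration only)."""
    if not is_citizen:
        return "Not Granted"
    if cgpa >= 3.5:
        return "Granted"
    if cgpa >= 3.0 and household_income < 500_000:
        return "Granted"
    return "Not Granted"

status = classify_applicant(cgpa=3.6, household_income=800_000,
                            is_citizen=True)
```

Each root-to-leaf path of the learned tree corresponds to one such `if` branch, which is what makes the approach easy to comprehend.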
Abstract: Computer-aided diagnosis systems provide valuable
support to radiologists in the detection of early signs of breast
cancer from mammogram images. Architectural distortions, masses and
microcalcifications are the major abnormalities. In this paper, a
computer aided diagnosis system has been proposed for
distinguishing abnormal mammograms with architectural distortion
from normal mammograms. Four types of texture features (GLCM, GLRLM,
fractal, and spectral) are extracted for the regions of suspicion. A
support vector machine is used as the classifier in this study. The
proposed system yielded an overall sensitivity of 96.47% and an
accuracy of 96% for mammogram images collected from the Digital
Database for Screening Mammography (DDSM).
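The GLCM texture features mentioned above can be sketched as follows: a co-occurrence matrix for one pixel offset, from which classic descriptors such as contrast and homogeneity are computed. The offset, grey-level count, and toy images are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized grey-level co-occurrence matrix for one offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(img):
    """Two classic GLCM texture descriptors: contrast and homogeneity."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, homogeneity

rng = np.random.default_rng(0)
smooth = np.full((16, 16), 3)              # uniform region: zero contrast
noisy = rng.integers(0, 8, size=(16, 16))  # highly textured region
c_smooth, _ = glcm_features(smooth)
c_noisy, _ = glcm_features(noisy)
```

In the full system, such descriptors (together with GLRLM, fractal and spectral features) would form the feature vector passed to the SVM classifier.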
Abstract: In this paper, we present a new segmentation approach
for focal liver lesions in contrast enhanced ultrasound imaging. This
approach, based on a two-cluster Fuzzy C-Means methodology,
considers type-II fuzzy sets to handle uncertainty due to the image
modality (presence of speckle noise, low contrast, etc.), and to
calculate the optimum inter-cluster threshold. Fine boundaries are
detected by a local recursive merging of ambiguous pixels. The
method has been tested on a representative database. Compared to
both Otsu and type-I Fuzzy C-Means techniques, the proposed
method significantly reduces the segmentation errors.
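The baseline the paper improves upon can be sketched as standard (type-I) fuzzy C-means; the type-II extension and the recursive boundary merging are not reproduced here. The toy 1-D "intensity" data and the parameters are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard (type-I) fuzzy C-means: alternate center updates and
    fuzzy membership updates; m is the fuzzification exponent."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=-1) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy 1-D intensities with two modes, as in lesion/background separation
X = np.array([[0.1], [0.2], [0.15], [0.9], [0.8], [0.85]])
U, centers = fuzzy_cmeans(X)
labels = U.argmax(axis=1)
```

The two cluster centers play the role of lesion and background intensities; the paper's contribution lies in replacing the crisp memberships with type-II fuzzy sets to cope with speckle noise and low contrast.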
Abstract: One of the most critical decision points in the design of a
face recognition system is the choice of an appropriate face representation.
Effective feature descriptors are expected to convey sufficient, invariant
and non-redundant facial information. In this work we propose a set of
Hahn moments as a new approach for feature description. Hahn moments
have been widely used in image analysis due to their invariance,
non-redundancy and ability to extract features both globally and locally.
To assess the applicability of Hahn moments to Face Recognition we
conduct two experiments on the Olivetti Research Laboratory (ORL)
database and University of Notre-Dame (UND) X1 biometric collection.
Fusion of the global features with features from local facial
regions is used as input to a conventional k-NN classifier. The
method reaches an accuracy of 93% of correctly recognized subjects for
the ORL database and 94% for the UND database.
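The fusion-plus-k-NN stage described above can be sketched as follows. The descriptor values are made up for illustration; in the actual system they would be Hahn moments computed on whole face images and on local facial regions, which are not reproduced here.

```python
import numpy as np

def fuse(global_feats, local_feats):
    """Concatenate global and per-region feature vectors into one
    fused descriptor."""
    return np.concatenate([global_feats] + list(local_feats))

def knn_predict(train_X, train_y, x, k=1):
    """Plain k-NN on Euclidean distance over fused descriptors."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = train_y[nearest]
    return np.bincount(votes).argmax()

# Hypothetical fused descriptors for two enrolled subjects
train_X = np.array([
    fuse(np.array([1.0, 0.2]), [np.array([0.5]), np.array([0.1])]),
    fuse(np.array([4.0, 3.2]), [np.array([2.5]), np.array([3.1])]),
])
train_y = np.array([0, 1])
# Probe image whose descriptor resembles subject 0
probe = fuse(np.array([1.1, 0.3]), [np.array([0.4]), np.array([0.2])])
pred = knn_predict(train_X, train_y, probe)
```

Concatenation is the simplest feature-level fusion; the reported accuracies suggest it suffices once the moment descriptors are sufficiently discriminative.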
Abstract: Speaker Identification (SI) is the task of establishing
identity of an individual based on his/her voice characteristics. The SI
task is typically achieved by two-stage signal processing: training and
testing. The training process calculates speaker specific feature
parameters from the speech and generates speaker models
accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though
performance of speaker identification systems has improved due to
recent advances in speech processing techniques, there is still need of
improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstrum Coefficient
(MFCC) as feature extraction and suitable combination of vector
quantization (VQ) and Gaussian Mixture Model (GMM) together
with Expectation Maximization algorithm (EM) for speaker
modeling. The use of Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and Statistical
Modeling of Background Noise in the pre-processing step of the
feature extraction yields a better and more robust automatic speaker
identification system. Investigation of the Linde-Buzo-Gray (LBG)
clustering algorithm for initializing the GMM, whose underlying
parameters are estimated in the EM step, improved the convergence
rate and system performance. The system also uses a relative index as
a confidence measure when the GMM and VQ identifications contradict
each other. Simulation results, carried out on the voxforge.org
speech database using MATLAB, highlight the efficacy of the proposed
method compared to earlier work.
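The VQ half of the classifier combination can be sketched as follows: an LBG-trained codebook per speaker, with identification by minimum average quantization distortion. The Gaussian stand-ins for MFCC frames, the codebook size, and the split factor are illustrative assumptions.

```python
import numpy as np

def lbg_codebook(X, size=4, eps=0.01, n_iter=50):
    """Linde-Buzo-Gray: start from the global mean, repeatedly split
    each codeword, then refine with k-means-style updates."""
    codebook = X.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            d = np.linalg.norm(X[:, None] - codebook[None, :], axis=-1)
            labels = d.argmin(axis=1)
            for c in range(len(codebook)):
                pts = X[labels == c]
                if len(pts):
                    codebook[c] = pts.mean(axis=0)
    return codebook

def vq_distortion(X, codebook):
    """Average distance from each frame to its nearest codeword;
    lower distortion means a better match to that speaker's model."""
    d = np.linalg.norm(X[:, None] - codebook[None, :], axis=-1)
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
# Stand-ins for MFCC frames of two speakers (real input would be MFCCs)
spk_a = rng.normal(0.0, 0.5, size=(200, 2))
spk_b = rng.normal(3.0, 0.5, size=(200, 2))
cb_a = lbg_codebook(spk_a)
cb_b = lbg_codebook(spk_b)
test_frames = rng.normal(0.0, 0.5, size=(50, 2))  # unknown speaker = A
```

The same split-and-refine procedure is what the paper uses to seed the GMM means before EM, which is why it improves the convergence rate.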
Abstract: Many software products offer a wide range and number
of features. This is called featuritis or creeping featurism, and it
tends to grow with each release of the product. Featuritis often adds
unnecessary complexity to software, leading to longer learning
curves, confusing users and degrading their experience. We examine a
new design trend, the so-called "What You Get is What You Need"
concept, which argues that products should be focused and simple,
with minimalistic interfaces, in order to help users carry out their
tasks in distraction-free environments. This is not as simple to
implement as it might sound, and developers need to cut down
features. Our contribution illustrates and evaluates this design method
through a novel distraction-free diagramming tool named Delineato
Pro for Mac OS X in which the user is confronted with an empty
canvas when launching the software and where tools only show up
when really needed.
Abstract: Over the past few years, a lot of research has been
conducted to bring Automatic Speech Recognition (ASR) into various
areas of Air Traffic Control (ATC), such as air traffic control
simulation and training, monitoring live operators with the aim of
improving safety, measuring air traffic controller workload, and
analyzing large quantities of controller-pilot speech.
Due to the high accuracy requirements of the ATC context and its
unique challenges, automatic speech recognition has not been widely
adopted in this field. With the aim of providing a good starting
point for researchers interested in bringing automatic speech
recognition into ATC, this paper gives an overview of possibilities
and challenges of applying automatic speech recognition in air traffic
control. To provide this overview, we present an updated literature
review of speech recognition technologies in general, as well as
specific approaches relevant to the ATC context. Based on this
literature review, criteria for selecting speech recognition approaches
for the ATC domain are presented, and remaining challenges and
possible solutions are discussed.
Abstract: In this article, we deal with a variant of the classical
course timetabling problem that has a practical application in many
areas of education. In particular, in this paper we are interested in
remedial courses in high schools. The purpose of such courses is to
provide under-prepared students with the skills necessary to succeed
in their studies. In particular, a student might be under prepared in
an entire course, or only in a part of it. The limited availability
of funds, as well as the limited amount of time and teachers at
disposal, often requires schools to choose which courses and/or which
teaching units to activate. Thus, schools need to model the training
offer and the related timetabling, with the goal of ensuring the
highest possible teaching quality, by meeting the above-mentioned
financial, time and resources constraints. Moreover, there are some
prerequisites between the teaching units that must be satisfied. We
first present a Mixed-Integer Programming (MIP) model to solve
this problem to optimality. However, the presence of many peculiar
constraints inevitably increases the complexity of the mathematical
model. Thus, solving it with a general-purpose solver is feasible
only for small instances, while solving real-life-sized instances of
the model requires specific techniques or heuristic approaches. For
this purpose, we also propose a heuristic
approach, in which we make use of a fast constructive procedure
to obtain a feasible solution. To assess our exact and heuristic
approaches, we performed extensive computational experiments on both
real-life instances (obtained from a high school in Lecce, Italy) and
randomly generated instances. Our tests show that the MIP model is
never solved to optimality, with an average optimality gap of 57%.
The heuristic algorithm, on the other hand, is much faster (in about
50% of the considered instances it converges in approximately half of
the time limit) and in many cases improves on the objective function
value obtained by the MIP model, with improvements ranging between
18% and 66%.
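A constructive step of the kind the heuristic relies on can be sketched as a greedy activation of teaching units under a budget, respecting prerequisites. The unit data, costs, quality scores, and the quality-per-cost ordering are illustrative assumptions, not the paper's instances or its actual procedure.

```python
def greedy_activation(units, budget):
    """Greedily activate teaching units in order of quality-per-cost,
    skipping any unit whose prerequisites are not yet active.
    `units` maps name -> (cost, quality, prerequisites)."""
    active, total_quality = set(), 0.0
    candidates = sorted(units, key=lambda u: units[u][1] / units[u][0],
                        reverse=True)
    changed = True
    while changed:          # repeat until no more units can be activated
        changed = False
        for u in candidates:
            cost, quality, prereqs = units[u]
            if u not in active and cost <= budget and prereqs <= active:
                active.add(u)
                budget -= cost
                total_quality += quality
                changed = True
    return active, total_quality

# Hypothetical units: (cost, teaching quality, prerequisite units)
units = {
    "algebra-1": (2, 5.0, set()),
    "algebra-2": (2, 4.0, {"algebra-1"}),
    "grammar":   (3, 6.0, set()),
    "essay":     (4, 3.0, {"grammar"}),
}
active, quality = greedy_activation(units, budget=7)
```

A feasible solution built this way can then seed a local search or be compared against the MIP bound, as done in the computational experiments.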
Abstract: In this paper, a new SMC (Sliding Mode Control)
method with MP (Model Predictive Control) integral action for the
slip suppression of EV (Electric Vehicle) under braking is proposed.
The proposed method introduces an integral term with the standard
SMC gain, where the integral gain is optimized for each control
period by the MPC algorithm. The aim of this method is to improve the
safety and stability of EVs under braking by controlling the wheel
slip ratio. Numerical simulation results are included to demonstrate
the effectiveness of the method.
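The control structure can be sketched on a toy first-order slip model. The plant dynamics, the gains, and the use of a fixed integral gain (standing in for the MPC-optimized gain of the paper) are all illustrative assumptions.

```python
import numpy as np

def simulate(K=5.0, Ki=2.0, dt=1e-3, T=2.0, lam_ref=0.2):
    """Sliding mode control of a toy slip model lam' = -a*lam + b*u,
    with control u = -(K*sign(e) + Ki*integral(e)). A fixed Ki stands
    in for the per-period MPC-optimized integral gain of the paper."""
    a, b = 1.0, 1.0               # illustrative plant parameters
    lam, e_int = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = lam - lam_ref          # slip tracking error
        e_int += e * dt            # integral of the error
        u = -(K * np.sign(e) + Ki * e_int)
        lam += (-a * lam + b * u) * dt  # Euler step of slip dynamics
    return lam

final_slip = simulate()  # should settle near the reference slip 0.2
```

The switching term drives the slip ratio to the reference; the integral action reduces steady-state error, and optimizing its gain online is what the MPC layer contributes.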
Abstract: This research presents the main ideas for implementing
an intelligent system composed of communicating wireless sensors
that measure environmental data linked to drought indicators (such
as air temperature, soil moisture, etc.). In addition, we propose a
spatio-temporal database communicating with a Web mapping application
for round-the-clock (24/7) real-time monitoring, allowing the time
evolution of the drought parameters to be screened and extracted.
This system thus helps detect areas affected by drought.
Spatio-temporal conceptual models address the needs of users who
manage soil water content for irrigation, fertilization or other
activities aimed at increasing crop yield; such models give users a
readable, easy-to-understand diagram of the data. Combined with
socio-economic information, the system helps identify the people
impacted by the phenomenon and the corresponding severity, especially
as this information is accessible to farmers and stakeholders
themselves. The study will be applied in the Siliana watershed,
northern Tunisia.
Abstract: Localization of nodes is one of the key issues of
Wireless Sensor Network (WSN) that gained a wide attention in
recent years. The existing localization techniques can be generally
categorized into two types: range-based and range-free. Compared
with rang-based schemes, the range-free schemes are more costeffective,
because no additional ranging devices are needed. As a
result, we focus our research on the range-free schemes. In this paper
we study three types of range-free location algorithms to compare the
localization error and energy consumption of each one. Centroid
algorithm requires a normal node has at least three neighbor anchors,
while DV-hop algorithm doesn’t have this requirement. The third
studied algorithm is the amorphous algorithm similar to DV-Hop
algorithm, and the idea is to calculate the hop distance between two
nodes instead of the linear distance between them. The simulation
results show that the localization accuracy of the amorphous
algorithm is higher than that of other algorithms and the energy
consumption does not increase too much.
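The simplest of the three schemes, the centroid algorithm, can be sketched directly; the anchor coordinates and the assumed true position below are illustrative.

```python
import numpy as np

def centroid_localize(anchors):
    """Centroid scheme: a node estimates its position as the mean of
    the positions of the anchors it can hear (the scheme requires at
    least three neighboring anchors)."""
    anchors = np.asarray(anchors, dtype=float)
    if len(anchors) < 3:
        raise ValueError("centroid localization needs at least 3 anchors")
    return anchors.mean(axis=0)

# A node surrounded by three anchors; true position assumed near center
est = centroid_localize([(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)])
err = np.linalg.norm(est - np.array([5.0, 3.0]))  # vs. assumed position
```

DV-hop and the amorphous algorithm replace the plain average with hop-count-weighted distance estimates, which is what yields their higher accuracy in the simulations.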
Abstract: Because current wireless communication requires high
reliability in a limited-bandwidth environment, this paper proposes a
variable modulation scheme based on a codebook. The variable
modulation scheme adjusts transmission power using the codebook in
accordance with the channel state. Moreover, if the codebook is
composed of more bits, the proposed scheme improves reliability
further. Simulation results show that the proposed scheme achieves
better reliability than the conventional scheme.
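The codebook-based power adjustment can be sketched as a lookup indexed by the quantized channel state. The quantization levels and the power values in the 2-bit codebook are illustrative assumptions, not the paper's design.

```python
import numpy as np

def codebook_power(channel_gain, codebook):
    """Pick the codebook entry matching the quantized channel state:
    worse channels map to entries with higher transmit power."""
    # quantize the channel gain into as many levels as codebook entries
    levels = np.linspace(0.0, 1.0, len(codebook) + 1)[1:-1]
    idx = int(np.searchsorted(levels, channel_gain))
    return codebook[idx]

# 2-bit (4-entry) power codebook: stronger boost for weaker channels
codebook = [2.0, 1.5, 1.0, 0.5]
p_weak = codebook_power(0.1, codebook)    # poor channel -> high power
p_strong = codebook_power(0.9, codebook)  # good channel -> low power
```

A codebook with more bits gives a finer quantization of the channel state, which is the mechanism behind the reliability improvement reported in the simulations.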