Abstract: Clustering involves the partitioning of n objects into k
clusters. Many clustering algorithms use hard-partitioning techniques,
in which each object is assigned to exactly one cluster. In this paper we
propose an overlapping clustering algorithm, MCOKE, which allows objects
to belong to one or more clusters. The algorithm differs from fuzzy
clustering techniques because overlapping objects are assigned a
membership value of 1 (one) rather than a fuzzy membership degree. It
also differs from other overlapping algorithms, which require that a
similarity threshold be defined a priori, a threshold that novice users
can find difficult to determine.
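The overlapping assignment idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold rule used here (the largest nearest-centroid distance observed, standing in for a learned cutoff) and all names are assumptions.

```python
import math

def overlapping_assign(points, centroids):
    """Assign each point to its nearest centroid, then additionally to any
    other centroid within maxdist, the largest nearest-centroid distance
    observed. Overlapping members receive a full membership of 1, not a
    fuzzy degree. (Illustrative sketch; maxdist stands in for MCOKE's
    automatically derived threshold.)"""
    def dist(a, b):
        return math.dist(a, b)
    nearest = [min(range(len(centroids)), key=lambda j: dist(p, centroids[j]))
               for p in points]
    maxdist = max(dist(p, centroids[j]) for p, j in zip(points, nearest))
    membership = []
    for p, j in zip(points, nearest):
        clusters = {j}  # hard assignment first
        for k in range(len(centroids)):
            if k != j and dist(p, centroids[k]) <= maxdist:
                clusters.add(k)  # overlap: membership value is 1, not fuzzy
        membership.append(sorted(clusters))
    return membership
```

A point equidistant from two centroids ends up with membership in both, while clear-cut points keep a single hard assignment.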
Abstract: Fast-changing knowledge systems on the Internet can
be accessed more efficiently with the help of automatic document
summarization and updating techniques. The aim of multi-document
update summary generation is to construct a summary unfolding the
mainstream of data from a collection of documents, based on the
hypothesis that the user has already read a set of previous documents.
In order to capture rich semantic information from the documents,
deeper linguistic or semantic analysis of the source documents was
used, instead of relying only on document word frequencies to select
important concepts. Producing a responsive summary requires
meaning-oriented structural analysis. To address this issue,
the proposed system presents a document summarization approach
based on sentence annotation with aspects, prepositions and named
entities. A semantic element extraction strategy is used to select
important concepts from the documents, which are then used to generate
an enhanced semantic summary.
Abstract: Different tools and technologies have been implemented
for Crisis Response and Management (CRM), generally relying on the
available network infrastructure for information exchange.
Depending on the type of disaster or crisis, the network infrastructure
may be affected and unable to provide reliable connectivity.
Any tool or technology that depends on this connectivity would then
be unable to fulfill its functions. As a solution, a new message
exchange framework has been developed. The framework provides an
offline/online information exchange platform for CRM Information
Systems (CRMIS); it uses XML compression and packet
prioritization algorithms and is based on open-source web
technologies. By introducing offline capabilities to web
technologies, the framework is able to perform message exchange
over unreliable networks. Experiments in a simulation
environment show promising results on low-bandwidth networks
(56 kbps and 28.8 kbps) with up to 50% packet loss: the solution
successfully transfers all information over these low-quality
networks, where traditional 2- and 3-tier applications failed.
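The framework's two core ideas, compressing XML payloads and sending the most urgent messages first, can be sketched with standard-library tools. The class and priority scale below are illustrative assumptions, not the paper's API.

```python
import heapq
import zlib

class PriorityMessageQueue:
    """Toy sketch of the framework's messaging ideas: XML payloads are
    compressed before transfer, and messages are released in priority
    order (lower number = more urgent) so critical crisis information
    goes out first on a low-bandwidth or lossy link."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority

    def push(self, xml_text, priority):
        payload = zlib.compress(xml_text.encode("utf-8"))
        heapq.heappush(self._heap, (priority, self._seq, payload))
        self._seq += 1

    def pop(self):
        _, _, payload = heapq.heappop(self._heap)
        return zlib.decompress(payload).decode("utf-8")
```

Pushing a routine report at priority 5 and an alert at priority 0 yields the alert first when the link becomes available, regardless of arrival order.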
Abstract: The Great East Japan Earthquake occurred at 14:46 on Friday, March 11, 2011. It was the most powerful earthquake known to have hit Japan, and it triggered extremely destructive tsunami waves of up to 40.5 meters in height. We focus on the evacuation of ships from a tsunami. We analyze ship evacuation from a tsunami using multi-agent simulation, in order to prepare for a coming earthquake. We developed a simulation model of ships that set sail from port to evacuate from the tsunami, taking into account ships carrying dangerous goods.
Abstract: In this paper, the improvement by deconvolution of
the depth resolution in Secondary Ion Mass Spectrometry (SIMS)
analysis is considered. We have developed a new Tikhonov-Miller
deconvolution algorithm in which an a priori model of the solution
is included. This model is a denoised and pre-deconvolved signal
obtained, first, by applying a wavelet shrinkage algorithm and,
second, by feeding the denoised signal into an iterative
deconvolution algorithm. In particular, we focus on the
effect of the number of iterations on the evolution of the
deconvolved signals. The SIMS profiles are boron multilayers in a
silicon matrix.
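The role of the iteration count in iterative deconvolution can be illustrated with a simple scheme. The sketch below uses the classic Van Cittert iteration as a stand-in for the paper's Tikhonov-Miller variant; the kernel, relaxation factor, and function names are assumptions.

```python
def convolve(signal, kernel):
    """Same-length discrete convolution with a symmetric, centred,
    odd-length kernel (symmetry makes convolution and correlation agree)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out.append(acc)
    return out

def van_cittert(measured, kernel, iterations=30, relax=0.5):
    """Iterative deconvolution: x_{n+1} = x_n + relax * (y - h*x_n).
    The iteration count governs how far the depth profile is sharpened,
    echoing the paper's focus on the number of iterations."""
    estimate = list(measured)
    for _ in range(iterations):
        residual = [m - c for m, c in zip(measured, convolve(estimate, kernel))]
        estimate = [e + relax * r for e, r in zip(estimate, residual)]
    return estimate
```

Applied to an impulse blurred by a smoothing kernel, each iteration moves the estimate back toward the sharp original; too few iterations under-sharpen, while in the presence of noise too many amplify it, which is why pre-denoising matters.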
Abstract: The study of the electrical signals produced by neural
activities of the human brain is called Electroencephalography. In this
paper, we propose an automatic and efficient EEG signal
classification approach. The proposed approach classifies an
EEG signal into two classes: epileptic seizure or not. We
start by extracting features, applying the Discrete
Wavelet Transform (DWT) to decompose the EEG signals
into sub-bands. These features, extracted from the detail and
approximation coefficients of the DWT sub-bands, are used as input to
Principal Component Analysis (PCA). The classification is based on
reducing the feature dimension using PCA and deriving the support
vectors using a Support Vector Machine (SVM). The experiments are
performed on a real, standard dataset, and a very high level of
classification accuracy is obtained.
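The feature extraction stage of such a pipeline can be sketched with a Haar DWT and sub-band energies. This is an illustrative simplification: the paper does not specify the wavelet, and the PCA and SVM stages (typically handled by a library such as scikit-learn) are omitted here.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def subband_features(signal, levels=2):
    """Decompose an EEG epoch into DWT sub-bands and summarise each by
    its energy: one value per detail level plus the final approximation.
    This kind of compact feature vector is what would be fed to PCA and
    then to an SVM in the classification stages (not shown)."""
    features = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        features.append(sum(d * d for d in detail))  # detail-band energy
    features.append(sum(a * a for a in current))     # approximation energy
    return features
```

A flat signal has all its energy in the approximation band and none in the detail bands, while sharp transients (such as seizure spikes) push energy into the detail coefficients.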
Abstract: One image is worth more than a thousand words.
When analyzed, images can reveal useful information. Low-level image
processing deals with the extraction of specific features from a single
image. The question then arises: what technique should be used to
extract patterns from a very large and detailed image database? The
answer is image mining. Image mining deals with
the extraction of image data relationships, implicit knowledge, and
other patterns from a collection of images or an image database; it is
an extension of data mining. In this paper, we not
only scrutinize the current techniques of image
mining but also present a new technique for mining images using a
Genetic Algorithm.
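A genetic algorithm for mining tasks follows a standard loop: selection, crossover, mutation. The sketch below is that generic loop, not the paper's specific technique; in an image-mining setting the bit-string genome could encode which image features or rules to keep, and the fitness function would score a candidate against the image database.

```python
import random

def genetic_search(fitness, genome_length, population=20, generations=50,
                   mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm over bit-string genomes: tournament
    selection, one-point crossover, bit-flip mutation. The caller
    supplies the fitness function; the best genome found is returned."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_length)]
           for _ in range(population)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < population:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]                    # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On the toy "one-max" fitness (count of 1-bits), the loop quickly drives genomes toward the all-ones optimum, which is the standard sanity check for a GA implementation.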
Abstract: Currently, information about places is growing rapidly
on Facebook, the largest social network, but such
information is not explicitly organized or ranked. Users therefore
cannot exploit these data to find recommended places conveniently and
quickly. This paper proposes a Facebook application and an Android
application that recommend places based on the number of check-ins
at those places, their distance from the user's current location,
the number of people who like their Facebook pages, and
their "talking about" counts. The related Facebook data are
gathered via Facebook API requests. The experimental results of the
developed applications show that the applications can recommend
places and rank interesting places from the most to the least. We
found that the average satisfaction score of the proposed Facebook
application is 4.8 out of 5. User satisfaction could be increased
further by adding features that support personalization in terms of
interests and preferences.
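A ranking over the four signals the abstract lists can be sketched as a weighted score. The weights and the exact combination rule below are assumptions for illustration; the paper does not publish its formula.

```python
def place_score(checkins, likes, talking_about, distance_km,
                weights=(0.4, 0.3, 0.2, 0.1)):
    """Illustrative score combining the four signals the paper lists:
    check-ins, page likes, 'talking about' count, and distance from the
    user. Popularity terms add to the score; distance discounts it."""
    w_c, w_l, w_t, w_d = weights
    popularity = w_c * checkins + w_l * likes + w_t * talking_about
    return popularity / (1.0 + w_d * distance_km)

def rank_places(places):
    """Sort (name, checkins, likes, talking_about, distance_km) tuples
    from most to least interesting."""
    return sorted(places, key=lambda p: place_score(*p[1:]), reverse=True)
```

A nearby, popular place outranks a distant, little-visited one; tuning the weights shifts how strongly distance discounts popularity.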
Abstract: Random epistemologies and hash tables have garnered
minimal interest from both security experts and experts in the last
several years. In fact, few information theorists would disagree with
the evaluation of expert systems. In our research, we discover how
flip-flop gates can be applied to the study of superpages. Though
such a hypothesis at first glance seems perverse, it is derived from
known results.
Abstract: The Graphical User Interface (GUI) is essential to
programming, as much as any other characteristic or feature, because
GUI components provide the fundamental interaction between
the user and the program. We must therefore give more attention to the
GUI during the building and development of systems, and greater
attention still to the user, who is the cornerstone of interaction
with the GUI. This paper introduces an approach for designing GUIs
from one of the models of business workflows that describe the
workflow behavior of a system, specifically Activity
Diagrams (AD).
Abstract: The inherent skin patterns created at the joints on the
outer surface of the finger are referred to as the finger knuckle print.
It can be exploited to identify a person uniquely because the finger
knuckle print is greatly affluent in textures. In a biometric system,
the region of interest is used for the feature extraction algorithm.
In this paper, local and global features are extracted separately. The
Fast Discrete Orthonormal Stockwell Transform is exploited to extract
the local features, and the global feature is attained by escalating
the size of the Fast Discrete Orthonormal Stockwell Transform to
infinity. The two features are fused to increase recognition accuracy:
a matching distance is calculated for each feature individually, and
the two distances are then merged to acquire the final matching
distance. The proposed scheme gives better performance in terms of
equal error rate and correct recognition rate.
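Merging two matching distances into a final one is typically done by normalising each and taking a weighted sum. The min-max normalisation and the fusion weight below are illustrative assumptions; the paper merges the two distances but this particular rule is not taken from it.

```python
def min_max_normalise(distances):
    """Scale a list of raw matching distances to [0, 1]."""
    lo, hi = min(distances), max(distances)
    if hi == lo:
        return [0.0 for _ in distances]
    return [(d - lo) / (hi - lo) for d in distances]

def fused_distances(local_dists, global_dists, alpha=0.5):
    """Normalise each modality's distances, then fuse them with a
    weighted sum; the smallest fused distance identifies the best match.
    alpha balances the local against the global feature."""
    loc = min_max_normalise(local_dists)
    glo = min_max_normalise(global_dists)
    return [alpha * l + (1 - alpha) * g for l, g in zip(loc, glo)]
```

Normalising first matters because the local and global distances live on different scales; without it, whichever feature produces larger raw numbers would dominate the fusion.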
Abstract: Estimation of model parameters is necessary to predict
the behavior of a system. Model parameters are estimated using
optimization criteria, and most algorithms use historical data to
estimate them. The known target (actual) values and the output
produced by the model are compared, and the differences between the
two form the basis for estimating the parameters. Different criteria
are used to compare different models developed from the same data.
The data used here were obtained from short-scale projects. We
consider the software effort estimation problem using a radial basis
function network. Accuracy is compared using various
existing criteria for one and two predictors. We then propose a new
criterion based on linear least squares for evaluation and compare
the results for one and two predictors. We also evaluated prediction
accuracy on another data set using the new criterion. The new
criterion is easier to comprehend than a single statistic. Although
software effort estimation is considered here, the method is
applicable to any modeling and prediction task.
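A least-squares-based accuracy criterion can be sketched as follows. The exact formula here (distance of the fitted predicted-vs-actual line from the ideal line with slope 1 and intercept 0) is an illustrative assumption in the spirit of the proposal, not the paper's definition.

```python
def least_squares_fit(predicted, actual):
    """Ordinary least squares for the line actual ~= a * predicted + b."""
    n = len(predicted)
    mx = sum(predicted) / n
    my = sum(actual) / n
    sxx = sum((x - mx) ** 2 for x in predicted)
    sxy = sum((x - mx) * (y - my) for x, y in zip(predicted, actual))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def lls_criterion(predicted, actual):
    """Single, easy-to-read accuracy criterion: regress actual effort on
    predicted effort, then measure how far the fitted line is from the
    ideal actual = predicted (slope 1, intercept 0). Zero means a
    perfectly calibrated predictor; larger values mean systematic bias."""
    a, b = least_squares_fit(predicted, actual)
    return abs(a - 1.0) + abs(b)
```

A predictor that matches the actual values exactly scores 0, while one that consistently underestimates by half scores 1, which makes the criterion easier to read at a glance than a lone error statistic.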
Abstract: This paper seeks to analyse the benefits of big data
and, more importantly, the challenges it poses to privacy
and data protection. First, the nature of big data is briefly
deliberated, before the potential of big data in the present
day is presented. Afterwards, the issues of privacy and data protection
are highlighted, before the challenges of addressing them
in big data are discussed. In conclusion, the paper puts forward the
debate on the adequacy of the existing legal framework for protecting
personal data in the era of big data.
Abstract: This paper presents an efficient fusion algorithm for
iris images that generates stable features for recognition in
unconstrained environments. Recently, iris recognition systems have
focused on real scenarios in daily life without the subject's
cooperation. Under large variations in the environment, the objective
of this paper is to combine information from multiple images of the
same iris. The result of image fusion is a new image that is more
stable for subsequent iris recognition than each original noisy iris
image. A wavelet-based approach for multi-resolution image fusion is
applied in the fusion process. Iris detection is based on the AdaBoost
algorithm, and a local binary pattern (LBP) histogram is then applied
to texture classification with a weighting scheme. Experiments showed
that the features generated by the proposed fusion algorithm can
improve the performance of a verification system based on iris
recognition.
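The LBP histogram step can be sketched directly. This is the standard 8-neighbour LBP descriptor on a 2-D grayscale image; the region weighting scheme the paper applies on top of it is omitted here.

```python
def lbp_histogram(image):
    """Local binary pattern histogram of a 2-D grayscale image (list of
    lists). Each interior pixel is encoded by comparing its 8 neighbours
    to it, clockwise from the top-left: a neighbour at least as bright
    as the centre contributes a 1-bit. The 256-bin histogram of those
    codes is the texture descriptor."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            centre = image[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if image[r + dr][c + dc] >= centre:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

On a perfectly flat patch every neighbour ties with the centre, so all pixels map to code 255; textured iris regions spread their codes across many bins, and comparing histograms compares textures.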
Abstract: This paper describes an analysis of international
trends in yacht simulators and also describes the yacht itself. The
results are summarized as follows. Sensors attached to the cockpit
feed back information on rudder angle, boat heel angle and mainsheet
tension to the computer. The sailor's energy expenditure is measured
indirectly using expired-gas analysis to measure VO2 and
VCO2. Sea course configurations and wind conditions can be preset
to suit any level of sailor, from complete beginner to advanced.
Abstract: Every machine plays the roles of client and server
simultaneously in a peer-to-peer (P2P) network. Though a P2P
network has many advantages over traditional client-server models
regarding efficiency and fault tolerance, it also faces additional
security threats. Users and IT administrators should be aware of risks
from malicious code propagation, the legality of downloaded content,
and vulnerabilities in P2P software. Security and preventative measures
are a must to protect networks from potential leakage of sensitive
information and from security breaches. BitTorrent is a popular and
scalable P2P file distribution mechanism which successfully distributes
large files quickly and efficiently without overloading the origin
server. Measurement studies show that BitTorrent achieves excellent
upload utilization, but they also raise many questions regarding its
utilization in settings other than those measured, its fairness, and
its choice of mechanisms. This work proposes a block selection
technique using Fuzzy ACO, with optimal rules selected using ACO.
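The core of an ACO-style block selection step is a pheromone-weighted random choice. The sketch below is an illustrative plain ACO step, not the paper's Fuzzy ACO rule; the rarity heuristic (echoing BitTorrent's rarest-first idea), the exponent, and all names are assumptions.

```python
import random

def aco_select_block(available_blocks, pheromone, rarity, beta=2.0, rng=None):
    """Pick the next block to download with an ant-colony style rule:
    probability proportional to pheromone[b] * rarity[b]**beta, where
    rarity can encode how few peers hold the block. Pheromone would be
    reinforced after successful downloads (reinforcement not shown)."""
    rng = rng or random.Random()
    weights = [pheromone[b] * (rarity[b] ** beta) for b in available_blocks]
    total = sum(weights)
    pick = rng.random() * total  # roulette-wheel selection
    acc = 0.0
    for block, w in zip(available_blocks, weights):
        acc += w
        if pick <= acc:
            return block
    return available_blocks[-1]
```

Blocks with more pheromone (historically good choices) and higher rarity are selected more often, while still leaving some probability mass for exploration.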
Abstract: Large-scale data stream analysis has lately become one of
the important business and research priorities. Social networks
such as Twitter and other micro-blogging platforms hold an enormous
amount of data that is large in volume, velocity and variety.
Extracting valuable information and trends from these data aids
better understanding and decision-making. Multiple analysis
techniques have been deployed for English content; by contrast, Arabic
is one of the languages that produce a large amount of data over social
networks yet remain among the least analyzed. This paper is a
survey of the research efforts to analyze Arabic content on
Twitter, focusing on the tools and methods used to extract
sentiment from that content.
Abstract: A large amount of data is typically stored in relational
databases (DB), which can efficiently handle user queries that
intend to elicit the appropriate information from data sources.
However, direct access to and use of these data require the end users
to have an adequate technical background, while they must also cope
with the internal data structure and the values presented. Consequently,
information retrieval is quite a difficult process even for IT or DB
experts, given the limited support relational databases offer at the
conceptual level. Ontologies enable users to formally describe a
domain of knowledge in terms of concepts and the relations among them,
and hence they can be used to unambiguously specify the information
captured by a relational database. However, accessing information
residing in a database through ontologies is feasible only if the
users are comfortable with semantic web technologies. To enable users
from different disciplines to retrieve the appropriate data, the design
of a Graphical User Interface is necessary. In this work, we present an
interactive, ontology-based, semantically enabled web tool that can be
used for information retrieval. The tool is based entirely on an
ontological representation of the underlying database schema, and it
provides a user-friendly environment through which users can
graphically form and execute their queries.
Abstract: Human motion capture has become one of the major
areas of interest in the field of computer vision. Some of the major
application areas that have been rapidly evolving include
advanced human interfaces, virtual reality and security/surveillance
systems. This study provides a brief overview of the techniques and
applications of markerless human motion capture, which
analyzes human motion in the form of mathematical
formulations. The major contribution of this research is a taxonomy
that classifies computer-vision-based human motion capture techniques
into four systematically different categories: initialization,
tracking, pose estimation and recognition. Detailed descriptions, and
the relationships among them, are given for the tracking and pose
estimation techniques, and the subcategories of each process are
further described. The various hypotheses used by researchers in
this domain are surveyed, and the evolution of these techniques is
explained. The survey concludes that most researchers have focused
on using mathematical body models for markerless motion capture.
Abstract: The web's increased popularity has brought a huge
amount of information, which makes automated web page
classification systems essential to improve search engines'
performance. Web pages have many features, such as HTML or XML
tags, hyperlinks, URLs and text content, which can be considered
during an automated classification process. It is known that web page
classification is enhanced by hyperlinks, as they reflect the linkage
between pages. The aim of this study is to reduce the number of
features used, in order to improve the accuracy of web page
classification. In this paper, a novel feature selection method using
a Particle Swarm Optimization (PSO) improved with principles of
evolution is proposed. The selected features were tested on the WebKB
dataset using a parallel Neural Network to reduce the computational
cost.
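Feature selection with PSO typically uses the binary PSO variant, where each particle is a 0/1 mask over features. The sketch below is that plain baseline, without the paper's evolutionary improvements; the coefficients and names are conventional assumptions.

```python
import math
import random

def binary_pso(fitness, n_features, particles=10, iterations=30, seed=0):
    """Minimal binary PSO for feature selection: each particle is a 0/1
    mask over features; velocities are squashed by a sigmoid into
    probabilities of setting each bit. Personal bests (pbest) and the
    global best (gbest) pull the swarm toward good masks."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(particles)]
    vel = [[0.0] * n_features for _ in range(particles)]
    pbest = [list(p) for p in pos]
    gbest = list(max(pos, key=fitness))
    for _ in range(iterations):
        for i in range(particles):
            for d in range(n_features):
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.4 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))          # sigmoid
                pos[i][d] = 1 if rng.random() < prob else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = list(pos[i])
            if fitness(pos[i]) > fitness(gbest):
                gbest = list(pos[i])
    return gbest
```

In a real classifier pipeline the fitness function would train and score the model on the masked feature subset; on a toy fitness that simply counts selected features, the swarm converges toward the full mask, a common sanity check.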