Abstract: The web’s growing popularity has brought with it a huge
amount of information, making automated web page classification
systems essential to improving search engines’ performance. Web
pages have many features, such as HTML or XML tags, hyperlinks,
URLs and text content, which can be considered during an automated
classification process. It is known that web page classification is
enhanced by hyperlinks, as they reflect the linkages between web
pages. The aim of this study is to reduce the number of features used,
in order to improve the accuracy of web page classification. In
this paper, a novel feature selection method is proposed, using an
improved Particle Swarm Optimization (PSO) based on principles of
evolution. The extracted features were tested on the WebKB dataset
using a parallel Neural Network to reduce the computational cost.
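As a rough illustration of the feature-selection idea, the following is a minimal binary PSO sketch in plain Python, assuming a toy fitness function in place of classifier accuracy; the paper's improved, evolution-inspired PSO variant and the WebKB features are not reproduced here.

```python
import math
import random

random.seed(0)

# Toy fitness: reward selecting the first 3 "informative" features and
# penalize extra features, mimicking a classifier-accuracy objective.
# The informative set and penalty weight are illustrative assumptions.
INFORMATIVE = {0, 1, 2}

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    extras = sum(mask) - hits
    return hits - 0.1 * extras

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso(n_features=10, n_particles=20, iters=50,
               w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Binary PSO: velocity is squashed into a probability
                # of setting the feature bit to 1.
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest

mask = binary_pso()
```

Squashing each velocity component through a sigmoid into a bit-flip probability is the standard adaptation of PSO to a binary feature mask; in the paper's setting the fitness would instead be the classification accuracy of the parallel neural network on the selected features.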
Abstract: Human motion capture has become one of the major
areas of interest in the field of computer vision. Some of the major
application areas that have been rapidly evolving include the
advanced human interfaces, virtual reality and security/surveillance
systems. This study provides a brief overview of the techniques and
applications used for the markerless human motion capture, which
deals with analyzing the human motion in the form of mathematical
formulations. The major contribution of this research is that it
classifies the computer-vision-based techniques of human motion
capture into a taxonomy, breaking them down into four
systematically different categories: tracking, initialization, pose
estimation and recognition. Detailed descriptions of the techniques
for tracking and pose estimation, and of the relationships among
them, are given, and the subcategories of each process are further
described. The various hypotheses used by researchers in this
domain are surveyed, and the evolution of these techniques is
explained. The survey concludes that most researchers have focused
on using mathematical body models for markerless motion capture.
Abstract: With more and more mobile health applications
appearing due to the popularity of smartphones, the possibility arises
that the data they generate can be used to improve the medical diagnostic process,
as well as the overall quality of healthcare, while at the same time
lowering costs. However, as of yet there have been no reports of a
successful combination of patient-generated data from smartphones
with data from clinical routine. In this paper we describe how these
two types of data can be combined in a secure way without
modification to hospital information systems, and how they can
together be used in a medical expert system for automatic nutritional
classification and triage.
Abstract: A large amount of data is typically stored in relational
databases (DB). The latter can efficiently handle user queries which
intend to elicit the appropriate information from data sources.
However, direct access to and use of this data requires end users to
have an adequate technical background and to cope
with the internal data structures and values. Consequently,
information retrieval is quite a difficult process even for IT or DB
experts, given the limited conceptual expressiveness of relational
databases. Ontologies enable users
to formally describe a domain of knowledge in terms of concepts and
relations among them and hence they can be used for unambiguously
specifying the information captured by the relational database.
However, accessing information residing in a database using
ontologies is feasible only if the users are familiar with
semantic web technologies. To enable users from different
disciplines to retrieve the appropriate data, a Graphical
User Interface is necessary. In this work, we present an
interactive, ontology-based, semantically enabled web tool that can be
used for information retrieval purposes. The tool is based entirely on
an ontological representation of the underlying database schema and
provides a user-friendly environment through which users can
graphically form and execute their queries.
Abstract: Every machine plays roles of client and server
simultaneously in a peer-to-peer (P2P) network. Though a P2P
network has many advantages over traditional client-server models
regarding efficiency and fault-tolerance, it also faces additional
security threats. Users/IT administrators should be aware of risks
from malicious code propagation, downloaded content legality, and
P2P software’s vulnerabilities. Security and preventative measures
are a must to protect networks from potential sensitive information
leakage and security breaches. BitTorrent is a popular and scalable
P2P file distribution mechanism which successfully distributes large
files quickly and efficiently without overloading the origin server.
Measurement studies have shown that BitTorrent achieves excellent
upload utilization, but they have also raised questions about its
utilization in settings other than those measured, about fairness, and
about the choice of BitTorrent's mechanisms. This work proposes a
block selection technique using fuzzy Ant Colony Optimization
(ACO), with the optimal rules selected using ACO.
Abstract: This paper describes an analysis of international trends
in yacht simulators and also gives an overview of yachts. The results
are summarized as follows. Attached to the cockpit are sensors that
feed back information on rudder angle, boat heel angle and mainsheet
tension to the computer. The sailor's energy expenditure is measured
indirectly using expired gas analysis for the measurement of VO2 and
VCO2. At-sea course configurations and wind conditions can be preset
to suit any level of sailor, from complete beginner to advanced.
Abstract: This paper presents an efficient fusion algorithm for
iris images that generates stable features for recognition in
unconstrained environments. Recently, iris recognition systems have
focused on real scenarios in daily life, without the subject's
cooperation. Under large variations in the environment, the objective
of this paper is to combine information from multiple images of the
same iris. The result of image fusion is a new image that is more
stable for further iris recognition than each original noisy iris
image. A wavelet-based approach for multi-resolution image fusion is
applied in the fusion process. Detection of the iris is based on the
AdaBoost algorithm, and a local binary pattern (LBP) histogram is
then applied for texture classification with a weighting scheme.
Experiments showed that the features generated by the proposed
fusion algorithm can improve the performance of a verification
system based on iris recognition.
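The basic 8-neighbour LBP operator mentioned above can be sketched as follows in plain Python on an illustrative 4×4 image; the paper's weighting scheme and AdaBoost detector are not shown.

```python
# Minimal sketch of the basic 8-neighbour Local Binary Pattern (LBP)
# operator; assumes a grayscale image given as a list of lists.

def lbp_code(img, y, x):
    """8-bit LBP code for the pixel at (y, x), clockwise from top-left."""
    c = img[y][x]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(neighbours):
        # Each neighbour >= centre contributes one bit to the code.
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

img = [[10, 10, 10, 10],
       [10, 50, 50, 10],
       [10, 50, 50, 10],
       [10, 50, 50, 10]][:4]
img[3] = [10, 10, 10, 10]  # restore a uniform bottom border row
hist = lbp_histogram(img)
```

Such histograms, computed over image regions, are what the classifier compares; the weighting scheme referenced in the abstract would assign different importance to different regions.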
Abstract: It is important to take security measures to protect
your computer information, reduce identity theft, and prevent
malicious cyber-attacks. With cyber-attacks continuously on the rise,
people need to understand and learn ways to prevent these
attacks. Cyber-attacks are an important factor to consider if one is
to protect oneself from malicious activity. Without proper
security measures, most computer technology would hinder home
users more than it would help them. Knowledge of how
cyber-attacks operate, and of the protective steps that can be taken to
reduce the chances of their occurrence, is key to increasing these
security measures. The purpose of this paper is to inform home users
of the importance of identifying and taking preventive steps to avoid
cyber-attacks. Throughout this paper, many aspects of cyber-attacks
are discussed: what a cyber-attack is, the effects of cyber-attacks on
home users, the different types of cyber-attacks, and the methods
home users can adopt to prevent such attacks and fortify the security
of their computers.
Abstract: Load modeling is one of the central functions in
power systems operations. Electricity cannot be stored, so an
electric utility must estimate future demand in order to manage
production and purchasing in an economically reasonable way. A
majority of recently reported approaches are based on neural
networks. The attraction of these methods lies in the
assumption that neural networks are able to learn properties of the
load. However, the development of the methods is not finished, and
the lack of comparative results on different model variations is a
problem. This paper presents a new approach in order to predict the
Tunisia daily peak load. The proposed method employs a
computational intelligence scheme based on the Fuzzy neural
network (FNN) and support vector regression (SVR). Experimental
results indicate that the proposed FNN-SVR technique gives
significantly better prediction accuracy than some classical
techniques.
Abstract: The inherent skin patterns formed at the joints on the
exterior of the finger are referred to as the finger knuckle-print. It
can be exploited to identify a person uniquely because the finger
knuckle-print is greatly rich in texture. In a biometric system, the
region of interest is used by the feature extraction algorithm. In this
paper, local and global features are extracted separately. The Fast
Discrete Orthonormal Stockwell Transform is exploited to extract the
local features. The global feature is attained by extending the size of
the Fast Discrete Orthonormal Stockwell Transform to infinity. The
two features are fused to increase recognition accuracy: a matching
distance is calculated for each feature individually, and the two
distances are then combined to obtain the final matching distance.
The proposed scheme gives better performance in terms of equal
error rate and correct recognition rate.
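The distance-level fusion described above can be sketched as a min-max normalization of each distance over the gallery followed by a weighted sum; the weight `w` and the toy distances below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of score-level fusion of two matching distances.

def minmax(scores):
    """Normalize scores to [0, 1]; assumes they are not all equal."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(local_d, global_d, w=0.5):
    """Weighted sum of normalized local and global distances."""
    ln, gn = minmax(local_d), minmax(global_d)
    return [w * a + (1 - w) * b for a, b in zip(ln, gn)]

local_d = [0.9, 0.2, 0.7]    # local-feature distances to 3 gallery entries
global_d = [12.0, 3.0, 9.0]  # global-feature distances (different scale)
fused = fuse(local_d, global_d)
best = min(range(len(fused)), key=fused.__getitem__)
```

Normalizing before combining matters because the two distances live on different scales; without it, the larger-valued distance would dominate the fusion regardless of the weight.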
Abstract: In this paper, we propose an intelligent system that is
used for monitoring the health conditions of patients. Monitoring the
health condition of patients is a complex problem that involves
different medical units and requires continuous monitoring especially
in rural areas because of the inadequate number of available
specialized physicians. The proposed system will improve patient
care and drive costs down compared to the existing system in Jordan.
It will be a starting point for faster and better communication
between the different units of the health system in Jordan.
Connecting patients and their physicians beyond hospital doors,
regardless of their geographical area, is an important issue in
developing the health system in Jordan. Both the ability to make
medical decisions and the quality of medical care are expected to
improve.
Abstract: Social media (SM) websites are increasingly popular,
built to allow people to express themselves and to interact socially
with others. Most social media tools (SMT) are dominated by youth,
particularly college students. The proliferation of popular social
media tools, which can be accessed from any communication device,
has made them pervasive in the lives of today's students. Connecting
traditional education to social media tools is a relatively new area,
and any collaborative tool could be used for learning activities. This
study focuses on (i) how social media tools are useful for the
learning activities of students of the Faculty of Medicine at King
Khalid University, (ii) whether social media affects collaborative
learning through interaction among students and with the course
instructor, their engagement, perceived ease of use and perceived
usefulness (TAM), and (iii) whether, overall, the students are
satisfied with this collaborative learning through social media.
Abstract: A Graphical User Interface (GUI) is as essential to
programming as any other characteristic or feature, because GUI
components provide the fundamental interaction between the user
and the program. Thus, we must give more attention to the GUI
during the building and development of systems, and greater
attention to the user, who is the cornerstone of any interaction with
the GUI. This paper introduces an approach for designing GUIs
from one of the models of business workflows that describe the
workflow behavior of a system, specifically Activity Diagrams (AD).
Abstract: The 3D body movement signals captured during
human-human conversation include clues not only to the content of
people’s communication but also to their culture and personality.
This paper is concerned with automatic extraction of this information
from body movement signals. For the purposes of this research, we
collected a novel corpus from 27 subjects, arranged into groups
according to their culture. We arranged each group into pairs, and
each pair communicated about different topics.
A state-of-the-art recognition system is applied to the problems of
person, culture, and topic recognition. We borrowed modeling,
classification, and normalization techniques from speech recognition.
We used Gaussian Mixture Modeling (GMM) as the main technique
for building our three systems, obtaining 77.78%, 55.47%, and
39.06% from the person, culture, and topic recognition systems
respectively. In addition, we combined the above GMM systems with
Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and
40.63% accuracy for person, culture, and topic recognition
respectively.
Although direct comparison among these three recognition
systems is difficult, our person recognition system performs best
under both GMM and GMM-SVM, suggesting that inter-subject
differences (i.e. subjects' personality traits) are a major
source of variation. When removing these traits from the culture and
topic recognition systems using the Nuisance Attribute Projection
(NAP) and the Intersession Variability Compensation (ISVC)
techniques, we obtained 73.44% and 46.09% accuracy from culture
and topic recognition systems respectively.
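To illustrate the GMM-based classification idea, here is a deliberately simplified sketch that fits a single diagonal Gaussian per class (a one-component GMM) and classifies by maximum log-likelihood; the actual systems use multi-component GMMs trained on body movement features, and the data below is invented.

```python
import math

def fit_gaussian(vectors):
    """Fit a diagonal Gaussian (mean, variance per dimension)."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    # Variance floor (1e-6) guards against degenerate dimensions.
    var = [max(sum((v[j] - mean[j]) ** 2 for v in vectors) / n, 1e-6)
           for j in range(d)]
    return mean, var

def log_likelihood(x, model):
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * var[j])
                       + (x[j] - mean[j]) ** 2 / var[j])
               for j in range(len(x)))

# Toy 2-D "movement feature" vectors for two hypothetical subjects.
train = {
    "subject_a": [[0.1, 0.2], [0.0, 0.3], [0.2, 0.1]],
    "subject_b": [[1.0, 1.1], [0.9, 1.2], [1.1, 0.9]],
}
models = {label: fit_gaussian(v) for label, v in train.items()}

def classify(x):
    """Pick the class whose model gives the highest log-likelihood."""
    return max(models, key=lambda label: log_likelihood(x, models[label]))
```

A multi-component GMM extends this by making each class likelihood a weighted sum of several such Gaussians, trained with EM; the SVM combination and the NAP/ISVC compensation steps reported in the abstract operate on top of scores or features like these.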
Abstract: Analyzing DNA microarray data sets is a great
challenge that bioinformaticians face, due to the complexity of the
statistical and machine learning techniques involved. The challenge
is doubled when the microarray data sets contain missing data, which
happens regularly, because these techniques cannot deal with
missing values. One of the most important data analysis processes on
microarray data sets is feature selection, which finds the most
important genes affecting a certain disease. In this paper, we
introduce a technique for imputing the missing data in microarray
data sets while performing feature selection.
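A generic baseline for the two steps mentioned (imputation, then feature selection) can be sketched as per-gene mean imputation followed by a simple class-separation score; the paper's joint imputation/selection technique is not shown here, and the toy expression matrix below is invented (with `None` marking missing values).

```python
def impute_gene(values):
    """Replace missing values (None) with the gene's mean over known samples."""
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def separation_score(values, labels):
    """Absolute difference of class means for one gene (two classes: 0/1)."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    return abs(sum(a) / len(a) - sum(b) / len(b))

# rows = genes, columns = samples
expr = [
    [1.0, None, 1.2, 5.0, 5.2, None],  # gene 0: separates the classes
    [2.0, 2.1, None, 2.0, 1.9, 2.1],   # gene 1: roughly constant
]
labels = [0, 0, 0, 1, 1, 1]
imputed = [impute_gene(g) for g in expr]
scores = [separation_score(g, labels) for g in imputed]
```

Ranking genes by such a score and keeping the top ones is the simplest form of filter-based feature selection; the point of performing imputation jointly with selection, as the paper proposes, is that the imputed values themselves influence which genes appear discriminative.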
Abstract: Information about places on Facebook, the largest
social network, is growing rapidly, but such information is not
explicitly organized or ranked. Therefore
users cannot exploit such data to recommend places conveniently and
quickly. This paper proposes a Facebook application and an Android
application that recommend places based on the number of check-ins
of those places, the distance of those places from the current location,
the number of people who like those places' Facebook pages, and
the number of people talking about those places. Related Facebook data is
gathered via Facebook API requests. The experimental results of the
developed applications show that the applications can recommend
places and rank interesting places from the most to the least
interesting. We found that the average satisfaction score for the
proposed Facebook application is 4.8 out of 5. The users' satisfaction
can increase by
adding the app features that support personalization in terms of
interests and preferences.
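The ranking criteria listed above can be sketched as a weighted score combining check-ins, page likes, "talking about" counts, and distance; the weights, the linear distance penalty, and the sample places are illustrative assumptions, not the applications' actual formula.

```python
# Hedged sketch of multi-signal place ranking.

def score(place, w_checkins=1.0, w_likes=0.5, w_talking=0.5, w_dist=2.0):
    """Higher is better; distance from the user lowers the score."""
    return (w_checkins * place["checkins"]
            + w_likes * place["likes"]
            + w_talking * place["talking_about"]
            - w_dist * place["distance_km"])

places = [
    {"name": "Cafe A", "checkins": 120, "likes": 300,
     "talking_about": 40, "distance_km": 1.0},
    {"name": "Park B", "checkins": 500, "likes": 100,
     "talking_about": 10, "distance_km": 8.0},
    {"name": "Mall C", "checkins": 80, "likes": 50,
     "talking_about": 5, "distance_km": 0.5},
]
ranked = sorted(places, key=score, reverse=True)
```

Personalization, as suggested at the end of the abstract, would amount to learning per-user weights instead of the fixed ones assumed here.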
Abstract: Modelling of the earth's surface and evaluation of the
urban environment with 3D models is an important research topic.
New stereo capabilities of high-resolution optical satellite images,
such as the tri-stereo mode of Pleiades, combined with new image
matching algorithms, are now available and can be applied to urban
area analysis. In addition, photogrammetry software packages have
gained new, more efficient matching algorithms, such as semi-global
matching (SGM), as well as improved filters to deal with shadow
areas, and can achieve denser and more precise results.
This paper describes a comparison between 3D data extracted
from tri-stereo and dual stereo satellite images, combined with pixel
based matching and Wallis filter. The aim was to improve the
accuracy of 3D models especially in urban areas, in order to assess if
satellite images are appropriate for a rapid evaluation of urban
environments.
The results showed that the 3D models achieved by Pleiades tri-stereo
outperformed, both in terms of accuracy and detail, the result
obtained from a GeoEye stereo pair. The assessment was made with
reference digital surface models derived from high resolution aerial
photography. This could mean that tri-stereo images can be
successfully used for the proposed urban change analyses.
Abstract: One image is worth more than a thousand words.
Images, if analyzed, can reveal useful information. Low-level image
processing deals with the extraction of specific features from a single
image. The question then arises: what technique should be used to
extract patterns from a very large and detailed image database? The
answer is image mining. Image mining deals with the extraction of
image data relationships, implicit knowledge, and other patterns
from a collection of images or an image database; it is an extension
of data mining. In this paper, we not only scrutinize the current
techniques of image mining but also present a new technique for
mining images using a Genetic Algorithm.
Abstract: The Great East Japan Earthquake occurred at 14:46 on Friday, March 11, 2011. It was the most powerful known earthquake to have hit Japan, and it triggered extremely destructive tsunami waves of up to 40.5 meters in height. We focus on the evacuation of ships from a tsunami, analyzing it using multi-agent simulation in order to prepare for a coming earthquake. We developed a simulation model of ships that set sail from port in order to evacuate from the tsunami, taking into account ships carrying dangerous goods.
Abstract: Verification and Validation of a Simulated Process
Model is the most important phase of the simulator life cycle.
Evaluation of simulated process models based on Verification and
Validation techniques checks the closeness of each component model
(in a simulated network) with the real system/process with respect to
dynamic behaviour under steady state and transient conditions. The
process of Verification and Validation helps in qualifying the process
simulator for the intended purpose whether it is for providing
comprehensive training or design verification. In general, model
verification is carried out by comparison of simulated component
characteristics with the original requirement to ensure that each step
in the model development process completely incorporates all the
design requirements. Validation testing is performed by comparing
the simulated process parameters to the actual plant process
parameters either in standalone mode or integrated mode.
A Full Scope Replica Operator Training Simulator for the PFBR
(Prototype Fast Breeder Reactor) has been developed at IGCAR,
Kalpakkam, INDIA, named KALBR-SIM (Kalpakkam Breeder
Reactor Simulator), wherein the main participants are
engineers/experts from the Modeling, Process Design, and
Instrumentation & Control design teams. This paper discusses the
Verification and Validation process in general, the evaluation
procedure adopted for the PFBR operator training simulator, the
methodology followed for verifying the models, and the reference
documents and standards used. It details the importance of
internal validation by the design experts, subsequent validation by
an external agency consisting of experts from various fields, model
improvement by tuning based on the experts' comments, final
qualification of the simulator for the intended purpose, and the
difficulties faced while coordinating the various activities.