Abstract: Devices in a pervasive computing system (PCS) are characterized by their context-awareness, which permits them to proactively provide adapted services to users and applications. To do so, context must be well understood and modeled in a form that enhances its sharing between devices and provides a high level of abstraction. The most interesting methods for modeling context are those based on ontologies; however, most of the proposed methods fail to provide a generic context ontology, which limits their usability and keeps them specific to a particular domain. The adaptation task must be performed automatically, without explicit intervention by the user. Devices in a PCS must therefore acquire some intelligence that permits them to sense the current context and trigger the appropriate service, or provide a service in a more suitable form. In this paper we propose a generic service ontology for context modeling and a context-aware service adaptation based on a service-oriented definition of context.
Abstract: Software reusability is a primary attribute of software quality. There are metrics for identifying the quality of reusable components, but the function that combines these metrics into a reusability estimate for a software component is still not clear. If applied in the design phase, or even in the coding phase, such metrics can help reduce rework by improving the quality of reuse of the component, and hence improve productivity through a probabilistic increase in the reuse level. In this paper, we devise a framework of metrics that takes the software component's McCabe's Cyclomatic Complexity Measure (for complexity measurement), Regularity Metric, Halstead Software Science Indicator (for volume indication), Reuse Frequency metric, and Coupling Metric values as input attributes and calculates the reusability of the software component. A comparative analysis of fuzzy, neuro-fuzzy, and fuzzy-GA approaches to evaluating the reusability of software components is performed, and the fuzzy-GA results outperform the other approaches. The developed reusability model produces high-precision results, as expected by the human experts.
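To illustrate how such metric values could feed a fuzzy reusability estimate, here is a minimal Sugeno-style sketch in Python. The membership shapes, rule set, and output levels are illustrative assumptions, not the paper's actual model; regularity and volume would enter further rules in a full system.

```python
# Hypothetical sketch: Sugeno-style fuzzy estimate of reusability from the
# metric inputs named in the abstract. All shapes and rules are assumptions.
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def reusability(complexity, regularity, volume, reuse_freq, coupling):
    # Inputs assumed normalized to [0, 1]; regularity/volume would enter
    # additional rules in a fuller model.
    low_cx, high_cx = tri(complexity, -0.5, 0.0, 0.6), tri(complexity, 0.4, 1.0, 1.5)
    low_cp, high_cp = tri(coupling, -0.5, 0.0, 0.6), tri(coupling, 0.4, 1.0, 1.5)
    high_rf = tri(reuse_freq, 0.4, 1.0, 1.5)
    # Two illustrative rules (min as AND), Sugeno constant consequents:
    # R1: low complexity AND low coupling AND high reuse frequency -> 0.9
    # R2: high complexity AND high coupling                        -> 0.2
    w1 = min(low_cx, low_cp, high_rf)
    w2 = min(high_cx, high_cp)
    if w1 + w2 == 0:
        return 0.5  # no rule fires; fall back to a neutral score
    return (0.9 * w1 + 0.2 * w2) / (w1 + w2)

print(reusability(0.2, 0.8, 0.5, 0.9, 0.1))  # high expected reusability
```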
Abstract: Transforms, which are lossy algorithms, are most commonly used for speech data compression. Such algorithms are acceptable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to achieve higher compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We use the LBG, KPE, and FCG VQ algorithms. The results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, the Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance than the others in terms of mean absolute error, AFCSS, and complexity.
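For reference, a minimal numpy sketch of the standard LBG (Linde-Buzo-Gray) codebook training loop follows; frame extraction and the KPE/FCG variants from the paper are not reproduced here.

```python
# Minimal LBG codebook trainer: split each centroid, refine by k-means.
import numpy as np

def lbg(vectors, codebook_size, eps=0.01, iters=20):
    codebook = vectors.mean(axis=0, keepdims=True)   # start with one centroid
    while len(codebook) < codebook_size:
        # Split every centroid into a perturbed pair, then refine.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

frames = np.random.randn(1000, 8)   # stand-in for 8-sample speech frames
cb = lbg(frames, 16)                # codebook_size should be a power of two
indices = ((frames[:, None] - cb) ** 2).sum(-1).argmin(1)  # what gets stored
```

Compression comes from storing the short codebook plus one index per frame instead of the raw samples.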
Abstract: Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules. Software metrics are instruments for measuring all aspects of a software product. They are used throughout a software project to assist in estimation, quality control, productivity assessment, and project control. Object-oriented software metrics focus on measurements applied to classes and other object-oriented characteristics. These measurements give the software engineer insight into the behavior of the software and into how changes can be made that reduce complexity and improve the continuing capability of the software. Object-oriented software metrics can be classified into two types: static and dynamic. Static metrics are concerned with measuring the software by static analysis, while dynamic metrics are concerned with measuring the software at run time. Most earlier work focused on static metrics; some work has also been done on the dynamic aspects of software measurement, but this area still demands further research. In this paper we give a set of dynamic metrics specifically for polymorphism in object-oriented systems.
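As a toy illustration of what a dynamic polymorphism measure looks like (not the paper's metric definitions), the sketch below counts, at run time, how many distinct concrete classes actually receive each polymorphic call.

```python
# Record which receiver classes a call site dispatches to at run time.
from collections import defaultdict

dispatch_log = defaultdict(set)

def dispatch(obj, name, *args):
    dispatch_log[name].add(type(obj).__name__)   # who actually handled it
    return getattr(obj, name)(*args)

class Circle:                                     # two concrete receivers
    def area(self): return 3.14159
class Square:
    def area(self): return 1.0

for shape in [Circle(), Square(), Circle()]:
    dispatch(shape, "area")

# Distinct receiver classes per method: a simple dynamic-polymorphism count.
print({m: len(cs) for m, cs in dispatch_log.items()})  # {'area': 2}
```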
Abstract: This paper presents a novel Rao-Blackwellised particle filter (RBPF) for mobile robot simultaneous localization and mapping (SLAM) using monocular vision. The particle filter is combined with an unscented Kalman filter (UKF) to extend the path posterior by sampling new poses that integrate the current observation, which drastically reduces the uncertainty about the robot pose. Landmark position estimation and update are also implemented through the UKF. Furthermore, the number of resampling steps is determined adaptively, which greatly reduces the particle depletion problem, and evolution strategies (ES) are introduced to avoid particle impoverishment. The 3D natural point landmarks are structured from matching Scale Invariant Feature Transform (SIFT) feature pairs, and the matching of the high-dimensional SIFT features is implemented with a KD-Tree at a time cost of O(log2 N). Experimental results on a real robot in our indoor environment show the advantages of our method over previous approaches.
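A compact sketch of KD-tree matching of 128-D SIFT descriptors, using scipy and Lowe's ratio test as the disambiguation rule; descriptor extraction is assumed to happen elsewhere, and the paper's exact matching thresholds are not reproduced.

```python
# KD-tree nearest-neighbour matching of SIFT descriptors with a ratio test.
import numpy as np
from scipy.spatial import cKDTree

def match_sift(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) where desc_a[i] matches desc_b[j]."""
    tree = cKDTree(desc_b)                      # O(log N) expected per query
    dists, idx = tree.query(desc_a, k=2)        # two nearest neighbours
    good = dists[:, 0] < ratio * dists[:, 1]    # keep unambiguous matches only
    return [(i, idx[i, 0]) for i in np.flatnonzero(good)]

a = np.random.rand(200, 128).astype(np.float32)  # stand-in descriptors
b = np.random.rand(300, 128).astype(np.float32)
pairs = match_sift(a, b)
```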
Abstract: In this paper we propose a novel method for human face segmentation using the elliptical structure of the human head. It makes use of the information present in the edge map of the image. The approach exploits the fact that the eigenvalues of the covariance matrix represent the elliptical structure: the large and small eigenvalues correspond to the major and minor axial lengths of an ellipse. The other elliptical parameters are used to identify the centre and orientation of the face. Since an elliptical Hough transform would require a 5D Hough space, the Circular Hough Transform (CHT) is used to evaluate the elliptical parameters. A sparse matrix technique is used to perform the CHT; because it squeezes out zero elements and stores only the small number of non-zero elements, it requires less storage space and computational time. A neighborhood suppression scheme is used to identify the valid Hough peaks. The accurate positions of the circumference pixels for occluded and distorted ellipses are identified using Bresenham's raster scan algorithm, which uses geometrical symmetry properties. The method does not require the evaluation of tangents for curvature contours, which are very sensitive to noise. It has been evaluated on several images with different face orientations.
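The eigenvalue-to-ellipse relationship the abstract relies on can be shown in a few lines of numpy: the covariance of points on a contour yields the axes and orientation. This is a minimal sketch on synthetic data, not the paper's full pipeline (no edge detection, CHT, or peak suppression).

```python
# Recover ellipse centre, semi-axes and orientation from point covariance.
import numpy as np

def ellipse_from_points(xy):
    """xy: (N, 2) array of edge-pixel coordinates on one contour."""
    centre = xy.mean(axis=0)
    cov = np.cov((xy - centre).T)
    evals, evecs = np.linalg.eigh(cov)            # eigenvalues ascending
    semi_minor, semi_major = np.sqrt(2 * evals)   # boundary points: var = a^2/2
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis orientation
    return centre, semi_major, semi_minor, angle

# Synthetic check: points on an ellipse rotated by 0.5 rad.
t = np.linspace(0, 2 * np.pi, 400)
R = np.array([[np.cos(0.5), -np.sin(0.5)], [np.sin(0.5), np.cos(0.5)]])
pts = np.c_[5 * np.cos(t), 2 * np.sin(t)] @ R.T
print(ellipse_from_points(pts))   # approx centre (0,0), axes 5 and 2, angle 0.5
```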
Abstract: In this work, a new approach is proposed to control the manipulators of a humanoid robot. The kinematics of the manipulators, in terms of the position, velocity, acceleration, and torque of each joint, is computed using the Denavit-Hartenberg (D-H) notation. These variables are used to design the manipulator control system proposed in this work. To support the development of the controller, a simulation of the manipulator is designed for the humanoid robot. The simulation is developed using the Virtual Reality Toolbox and Simulink in Matlab; the Virtual Reality Toolbox provides the interface and controls to an environment built with the Virtual Reality Modeling Language (VRML). Chains of bones are used to represent the robot.
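For readers unfamiliar with D-H notation, here is the standard per-joint homogeneous transform and a chained forward-kinematics sketch in Python; the link parameters below are placeholders, not those of the paper's humanoid.

```python
# Standard Denavit-Hartenberg forward kinematics.
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform for one joint from its D-H parameters."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:   # chain the per-joint transforms
        T = T @ dh_matrix(theta, d, a, alpha)
    return T                             # end-effector pose in the base frame

# Two-link planar arm as a toy example: (theta, d, a, alpha) per joint.
pose = forward_kinematics([(np.pi / 4, 0.0, 0.30, 0.0),
                           (np.pi / 6, 0.0, 0.25, 0.0)])
print(pose[:3, 3])                       # end-effector position
```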
Abstract: This paper summarizes the basic principles and concepts of intelligent control as implemented in humanoid robotics, as well as recent algorithms devised for advanced control of humanoid robots. It then presents a new neuro-fuzzy approach. We include some simulation results from our computational intelligence technique that will be applied to our humanoid robot. Finally, we determine a relationship between joint trajectories and the forces located on the robot's foot through the proposed neuro-fuzzy technique.
Abstract: A learning content management system (LCMS) is an environment that supports web-based learning content development. The primary function of the system is to manage the learning process as well as to generate content customized to meet the unique requirements of each learner. Among the supporting tools offered by several vendors, we propose to enhance LCMS functionality to individualize the presented content with an induction ability. Our induction technique is based on rough set theory. The induced rules are intended as supportive knowledge for guiding content flow planning. They can also be used as decision rules to help content developers manage the content delivered to individual learners.
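A minimal sketch of the rough-set machinery behind such rule induction: lower and upper approximations of a target concept from the equivalence classes induced by condition attributes. The learner records below are invented for illustration; the paper's actual attributes and rules are not reproduced.

```python
# Rough-set lower/upper approximations over attribute equivalence classes.
from collections import defaultdict

records = [  # (learner id, (background, score band), passed?)
    (1, ("novice", "low"),  False),
    (2, ("novice", "low"),  True),   # indiscernible from 1, different outcome
    (3, ("expert", "high"), True),
    (4, ("expert", "high"), True),
]

blocks = defaultdict(set)            # equivalence classes on the attributes
for lid, attrs, _ in records:
    blocks[attrs].add(lid)

target = {lid for lid, _, passed in records if passed}
lower = set().union(*(b for b in blocks.values() if b <= target))
upper = set().union(*(b for b in blocks.values() if b & target))

print(lower)          # {3, 4}: certainly pass -> yields a certain rule
print(upper - lower)  # {1, 2}: boundary region -> only possible rules
```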
Abstract: Time series models have been used to make predictions of academic enrollments, weather, road accident casualties, stock prices, and so on. Based on the concepts of quantile regression models, we have developed a simple time-variant, quantile-based fuzzy time series forecasting method. The proposed method bases the forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we convert the statistical concept into a fuzzy one by using fuzzy quantiles derived from an ensemble of fuzzy membership functions. We give a fuzzy metric that uses the trend forecast to calculate the future value. The proposed model is applied to TAIFEX forecasting, and it is shown to work best among the compared models with respect to both model complexity and forecasting accuracy.
Abstract: The fuzzy C-means clustering algorithm (FCM) is a method frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of determining the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. To obtain an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give the minimum least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely an iterative search approach and genetic algorithms. The proposed approach is tested on data generated from an original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
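The core FCM update step below shows exactly where the weighting exponent m enters; it is a compact numpy sketch of the standard algorithm, not the paper's full subtractive-clustering pipeline.

```python
# Fuzzy C-means: alternate membership and centroid updates until convergence.
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships
    for _ in range(iters):
        Um = U ** m                                     # m controls fuzziness
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        p = 2.0 / (m - 1.0)
        U_new = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

X = np.vstack([np.random.randn(100, 2) + 3, np.random.randn(100, 2) - 3])
centers, U = fcm(X, c=2, m=2.0)      # try other m values to see the effect
```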
Abstract: This paper shows the possibility of extracting social, group, and individual minds from multiple agents' rule bases. Two types of rule bases are considered, namely Mamdani and Takagi-Sugeno fuzzy systems, whose rule bases describe (model) agent behavior. Modification of agent behavior in a time-varying environment is provided by learning fuzzy-neural networks and optimizing their parameters with genetic algorithms in the FUZNET development system. Finally, the social, group, and individual minds are extracted from the multiple agents' rule bases by cognitive analysis and a matching criterion.
Abstract: Over the past several years, there has been a considerable amount of research within the field of Quality of Service (QoS) support for distributed multimedia systems. One of the key issues in providing end-to-end QoS guarantees in packet networks is determining a feasible path that satisfies a number of QoS constraints. The problem of finding a feasible path is NP-complete if the number of constraints is more than two, and it cannot be solved exactly in polynomial time. We propose a Feasible Path Selection Algorithm (FPSA) that addresses the issues pertaining to finding a feasible path subject to delay and cost constraints; it offers a higher success rate in finding feasible paths.
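To make the problem concrete, here is a hedged sketch of one common approach to the delay/cost-constrained path problem: a label-setting search with dominance pruning. It illustrates the problem FPSA targets, not the paper's own algorithm.

```python
# Find any path meeting both a delay bound and a cost bound.
import heapq

def feasible_path(adj, src, dst, max_delay, max_cost):
    """adj: {node: [(nbr, delay, cost), ...]}. Returns a feasible path or None."""
    labels = {src: [(0, 0)]}                 # non-dominated (delay, cost) labels
    heap = [(0, 0, src, [src])]
    while heap:
        delay, cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nbr, d, c in adj.get(node, []):
            nd, nc = delay + d, cost + c
            if nd > max_delay or nc > max_cost:
                continue                     # violates a constraint
            if any(ld <= nd and lc <= nc for ld, lc in labels.get(nbr, [])):
                continue                     # dominated: a better label exists
            labels.setdefault(nbr, []).append((nd, nc))
            heapq.heappush(heap, (nd, nc, nbr, path + [nbr]))
    return None

adj = {"a": [("b", 2, 5), ("c", 4, 1)], "b": [("d", 2, 5)], "c": [("d", 3, 1)]}
print(feasible_path(adj, "a", "d", max_delay=8, max_cost=3))  # ['a', 'c', 'd']
```

Dominance pruning keeps the label sets small in practice, though the worst case remains exponential, consistent with the NP-completeness noted above.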
Abstract: In order to answer the general question "What does a simple agent with a limited lifetime require for constructing a useful representation of the environment?", we propose a robot platform consisting of the simplest probabilistic sensory and motor layers. We then use the platform as a test bed for evaluating the navigational capabilities of the robot with different "brains". We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increase in internal complexity and the reuse of minimal sensory information. We show that the most fundamental element, short-term memory, is essential for obstacle avoidance; however, in the simplest condition with no obstacles, the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for very short lifetimes the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with short lifetimes do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent this may mean that the control blocks of modern robots are too complicated relative to their lifetimes and mechanical abilities.
Abstract: Airbag deployment has been known to be responsible for many deaths, incidental injuries, and broken bones resulting from low crash severity and wrong deployment decisions. Therefore, authorities and industry have been looking for more innovative and intelligent products for future enhancements of vehicle safety systems (VSSs). Although VSS technologies have advanced considerably, they still face challenges such as avoiding unnecessary and untimely airbag deployments, which can be hazardous and fatal. Currently, most existing airbag systems deploy without regard to occupant size and position. This paper therefore focuses on occupant and crash sensing performance in frontal collisions for the new breed of so-called smart airbag systems. It provides a thorough discussion of occupancy detection, occupant size classification, occupant off-position detection to determine a safe-distance zone for airbag deployment, crash severity analysis, and airbag decision algorithms via computer modeling. The proposed system model consists of three main modules: occupant sensing, crash severity analysis, and decision fusion. The occupant sensing module uses a weight sensor to determine occupancy, classify occupant size, and detect an occupant off-position condition to compute the safe distance for airbag deployment. The crash severity analysis module generates the information pertinent to the airbag deployment decision. Outputs from these two modules are fused in the decision module for correct and efficient airbag deployment. The computer modeling work is carried out using the Simulink, Stateflow, SimMechanics, and Virtual Reality toolboxes.
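The shape of the decision-fusion logic can be sketched as a simple rule set. The thresholds, class names, and actions below are made-up placeholders for illustration, not calibrated values from the paper's Simulink/Stateflow model.

```python
# Illustrative decision fusion of occupant sensing and crash severity outputs.
def airbag_decision(weight_kg, distance_cm, crash_severity):
    """Fuse occupant sensing and crash-severity outputs into a deploy action."""
    if weight_kg < 30:
        occupant = "child_or_empty"    # weight-sensor occupant classification
    elif weight_kg < 55:
        occupant = "small_adult"
    else:
        occupant = "adult"
    in_safe_zone = distance_cm >= 25   # occupant off-position check
    if crash_severity == "low" or occupant == "child_or_empty":
        return "suppress"
    if not in_safe_zone:
        return "low_power_deploy"      # occupant too close to the module
    return "full_deploy" if crash_severity == "high" else "low_power_deploy"

print(airbag_decision(70, 40, "high"))   # full_deploy
```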
Abstract: We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. The hybrid architecture is trained on a set of deterministic finite-state automaton strings, and its generalization performance is observed on a new set of strings that were not present in the training data set. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm, and further experiments to find out which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can learn and represent dynamical systems, and that the best training and generalization performance is achieved when the hybrid architecture is initialized with random real weight values in the range -15 to 15.
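A toy genetic-algorithm loop over real-valued weight vectors, initialized in [-15, 15] as in the abstract, is sketched below. The fitness function is a stand-in for evaluating the HMM-inspired RNN on automaton strings, which the paper does not specify in code form.

```python
# Minimal GA over weight vectors: truncation selection, crossover, mutation.
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    return -np.sum((w - 1.0) ** 2)        # placeholder objective

pop = rng.uniform(-15, 15, size=(50, 20))         # 50 individuals, 20 weights
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-25:]]       # keep the best half
    cut = rng.integers(1, pop.shape[1], size=25)  # one-point crossover
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 25][c:]]
                     for i, c in enumerate(cut)])
    kids += rng.normal(0, 0.5, kids.shape) * (rng.random(kids.shape) < 0.1)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(w) for w in pop])]
print(fitness(best))
```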
Abstract: Recently, much research has been conducted to identify pertinent parameters and adequate models for automatic music genre classification. In this paper, two measures based on information theory concepts are investigated for mapping the feature space to the decision space. A Gaussian Mixture Model (GMM) is used as a baseline and reference system. Various strategies are proposed for the training and testing sessions, with matched or mismatched conditions: long training with long testing, and long training with short testing. For all experiments, the file sections used for testing were never used during training. Under matched conditions, all examined measures yield similarly high scores (almost 100%). Under mismatched conditions, the proposed measures yield better scores than the GMM baseline system, especially in the short-testing case. We also observe that the average discrimination information measure is more appropriate for music category classification, whereas the divergence measure is more suitable for music subcategory classification.
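A baseline of the kind described can be sketched as one GaussianMixture per genre with classification by maximum log-likelihood. Feature extraction (e.g., MFCCs) is assumed to happen upstream, and the data below is synthetic; the paper's information-theoretic measures are not reproduced.

```python
# Per-genre GMMs; classify a clip by the model with the highest likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {"rock": rng.normal(0, 1, (500, 13)),   # stand-ins for MFCC frames
         "jazz": rng.normal(2, 1, (500, 13))}

models = {g: GaussianMixture(n_components=4, random_state=0).fit(X)
          for g, X in train.items()}

def classify(frames):
    # Sum of per-frame log-likelihoods under each genre model.
    return max(models, key=lambda g: models[g].score_samples(frames).sum())

print(classify(rng.normal(2, 1, (100, 13))))    # -> 'jazz'
```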
Abstract: The development and advancement of communication networks during the last two decades has been directed toward a single goal: a gradual change from circuit-switched networks to packet-switched ones. Today, many network operators are trying to transform the public telephone network into a multipurpose packet switch. This new achievement is generally called "next generation networks". In effect, next generation networks enable operators to transfer every kind of service (voice, data, and video) over one network. This report first studies the definition, characteristics, and services of next generation networks, and then the role of ad-hoc networks in next generation networks.
Abstract: This paper presents an alternative approach that uses an artificial neural network to simulate the flood level dynamics in a river basin. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility of approach, and evolving graphical features, and it can be adopted for any similar situation to predict flood levels. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. The program enables users to process the flood level data, to train and test the model using various inputs, and to visualize the results. The program code consists of a set of files, which can be modified to match other purposes. The results indicate that the decision support system applied to the flood level reaches encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory: the model is able to forecast the flood level up to 5 hours in advance with reasonable prediction accuracy. Finally, the program may also serve as a tool for real-time flood monitoring and process control.
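The lagged-input formulation behind such a forecaster can be sketched briefly: predict the level several steps ahead from recent gauge readings. MLPRegressor stands in for whatever network the decision support system actually trains, and the gauge series below is synthetic.

```python
# Forecast the flood level `lead` steps ahead from `lags` past readings.
import numpy as np
from sklearn.neural_network import MLPRegressor

levels = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.randn(600)  # fake gauge data

lags, lead = 6, 5                      # 6 past readings, 5-step-ahead target
X = np.array([levels[i:i + lags] for i in range(len(levels) - lags - lead)])
y = levels[lags + lead:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))   # R^2 on held-out readings
```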
Abstract: Logic-based methods for learning from structured data are limited with respect to handling large search spaces, preventing large substructures from being considered by the resulting classifiers. A novel approach to learning from structured data is introduced that employs a structure transformation method, called finger printing, to address these limitations. The method, which generates features corresponding to arbitrarily complex substructures, is implemented in a system called DIFFER. The method is demonstrated to perform comparably to an existing state-of-the-art method on some benchmark data sets without requiring restrictions on the search space. Furthermore, learning from the union of the features generated by finger printing and by the previous method outperforms learning from each individual set of features on all benchmark data sets, demonstrating the benefit of developing complementary, rather than competing, methods for structure classification.