Abstract: Load forecasting plays a paramount role in the operation and management of power systems. Accurate estimation of future power demands for various lead times facilitates the task of generating power reliably and economically. The forecasting of future loads for a relatively large lead time (months to a few years) is studied here (long-term load forecasting). Among the various techniques used in load forecasting, artificial intelligence techniques provide greater accuracy than conventional techniques. Fuzzy Logic, a very robust artificial intelligence technique, is applied in this paper to forecast load on a long-term basis. The paper gives a general algorithm to forecast long-term load. The algorithm is an extension of a short-term load forecasting method to long-term load forecasting and concentrates not only on the forecast values of load but also on the errors incorporated into the forecast. Hence, by correcting the errors in the forecast, forecasts with very high accuracy have been achieved. The algorithm is demonstrated with the help of data collected for the residential sector (LT2(a) type load: domestic consumers). Load is determined for three consecutive years (from April 2006 to March 2009) in order to demonstrate the efficiency of the algorithm and to forecast for the next two years (from April 2009 to March 2011).
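As a rough illustration of the fuzzy-logic machinery the abstract refers to, the following Python sketch applies a tiny Mamdani-style rule base that maps a previous forecast error to a correction factor. The membership functions, rule set and numerical ranges are illustrative assumptions, not the authors' actual forecasting model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def forecast_correction(error_pct):
    """Map last period's forecast error (%) to a correction factor (%)
    using three illustrative Mamdani-style rules and centroid defuzzification."""
    weights = np.array([tri(error_pct, -1, 0, 4),    # error is LOW
                        tri(error_pct, 2, 5, 8),     # error is MEDIUM
                        tri(error_pct, 6, 10, 15)])  # error is HIGH
    centres = np.array([0.0, 5.0, 10.0])             # consequent set centres (%)
    return float(weights @ centres / weights.sum()) if weights.sum() else 0.0

# Example: the previous forecast was 6% below the actual load
print(forecast_correction(6.0))   # correction to apply to the next forecast
```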
Abstract: Finding the shortest path between two positions is a
fundamental problem in transportation, routing, and communications
applications. In robot motion planning, the robot should pass around
the obstacles touching none of them, i.e. the goal is to find a
collision-free path from a starting to a target position. This task has
many specific formulations depending on the shape of obstacles,
allowable directions of movements, knowledge of the scene, etc.
Research on path planning has yielded many fundamentally different
approaches to its solution, mainly based on various decomposition
and roadmap methods. In this paper, we show a possible use of
visibility graphs in point-to-point motion planning in the Euclidean
plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second
application area investigated here focuses on the problem of finding
minimal networks connecting a set of given points in the plane using
either only straight connections between pairs of points (minimum
spanning tree) or allowing the addition of auxiliary points to the set
to obtain shorter spanning networks (minimum Steiner tree).
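As a small illustration of the minimum spanning network idea mentioned above (straight connections only, no auxiliary Steiner points), the following Python sketch builds a Euclidean minimum spanning tree with Prim's algorithm; the point coordinates are arbitrary examples.

```python
import math

def euclidean_mst(points):
    """Prim's algorithm for the Euclidean minimum spanning tree of a
    point set; returns the tree edges as index pairs (illustrative sketch)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(j)
        edges.append((i, j))
    return edges

points = [(0, 0), (4, 0), (2, 3), (5, 4)]
print(euclidean_mst(points))   # [(0, 2), (2, 3), (2, 1)] for this layout
```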
Abstract: The aim of this study was to evaluate the sensitivity
of a range of EEG indices to time-on-task effects and to a workload
manipulation (cueing), during performance of a resource-limited
vigilance task. Effects of task period and cueing on performance and
subjective state response were consistent with previous vigilance
studies and with resource theory. Two EEG indices – the Task Load
Index (TLI) and global lower frequency (LF) alpha power – showed
effects of task period and cueing similar to those seen with correct
detections. Across four successive task periods, the TLI declined and
LF alpha power increased. Cueing increased TLI and decreased LF
alpha. Other indices – the Engagement Index (EI), frontal theta and
upper frequency (UF) alpha – failed to show these effects. However, EI
and frontal theta were sensitive to interactive effects of task period
and cueing, which may correspond to a stronger anxiety response to
the uncued task.
Abstract: Web services are pieces of software that can be invoked via a standardized protocol. They can be combined via formalized taskflow languages. The Open Knowledge system is a fully distributed system using P2P technology that allows users to publish these taskflows, and programmers to register their web services or to publish implementations of them, for the roles described in these workflows. Besides this, the system offers the functionality to select a peer that could coordinate such an interaction model and inform web services when it is their 'turn'. In this paper we describe the architecture and implementation of the Open Knowledge Kernel, which provides the core functionality of the Open Knowledge system.
Abstract: Nowadays, new data backup formats keep appearing, raising concerns about their accessibility and their longevity. XML is one of the most promising formats to guarantee the integrity of data. This article illustrates one of the things that can be done with XML: indeed, XML will help to create a data backup model. The main task consists in defining a JAVA application able to convert the information of a database into XML format and to restore it later.
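The abstract describes a JAVA application; purely as an illustration of the same idea (database rows serialized to XML and later restored), here is a minimal Python sketch using sqlite3 and ElementTree. The database path and table name are hypothetical.

```python
import sqlite3
import xml.etree.ElementTree as ET

def table_to_xml(db_path, table):
    """Dump every row of `table` into an XML tree (illustrative sketch)."""
    con = sqlite3.connect(db_path)
    cur = con.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cur.description]
    root = ET.Element("backup", table=table)
    for row in cur:
        rec = ET.SubElement(root, "record")
        for name, value in zip(columns, row):
            ET.SubElement(rec, name).text = "" if value is None else str(value)
    con.close()
    return ET.ElementTree(root)

def xml_to_rows(tree):
    """Restore the records from the XML tree as a list of dictionaries."""
    return [{field.tag: field.text for field in rec} for rec in tree.getroot()]

# Example with a hypothetical database file and table
tree = table_to_xml("backup_demo.db", "customers")
tree.write("customers_backup.xml", encoding="utf-8", xml_declaration=True)
print(xml_to_rows(ET.parse("customers_backup.xml")))
```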
Abstract: Heart sound is an acoustic signal and many techniques
used nowadays for human recognition tasks borrow speech recognition
techniques. One popular choice for feature extraction of acoustic
signals is the Mel Frequency Cepstral Coefficients (MFCC), which
maps the signal onto a non-linear Mel-Scale that mimics human
hearing. However, the Mel-Scale is almost linear in the frequency
region of heart sounds and should thus produce results similar to
the standard cepstral coefficients (CC). In this paper, MFCC is
investigated to see whether it produces superior results for a
PCG-based human identification system compared to CC. Results show that the
MFCC system is still superior to CC despite linear filter-banks in
the lower frequency range, giving up to 95% correct recognition rate
for MFCC and 90% for CC. Further experiments show that the high
recognition rate is due to the implementation of filter-banks rather
than to Mel-Scaling.
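To make the contrast between mel-scaled and linearly spaced filter banks concrete, the following Python sketch computes cepstral coefficients of a single frame with either spacing. The filter count, FFT size and sampling rate are illustrative assumptions, not the paper's exact front end.

```python
import numpy as np
from scipy.fftpack import dct

def filterbank(n_filters, n_fft, sr, use_mel=True):
    """Triangular filter bank; set use_mel=False for linearly spaced
    centre frequencies (the plain-CC variant discussed in the paper)."""
    f_max = sr / 2
    if use_mel:
        mel = np.linspace(0, 2595 * np.log10(1 + f_max / 700), n_filters + 2)
        freqs = 700 * (10 ** (mel / 2595) - 1)
    else:
        freqs = np.linspace(0, f_max, n_filters + 2)
    bins = np.floor((n_fft + 1) * freqs / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fbank

def cepstral_coeffs(frame, sr, n_coeffs=13, use_mel=True, n_fft=512):
    """Cepstral coefficients of one windowed frame: power spectrum ->
    (mel or linear) filter bank -> log -> DCT."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n_fft)) ** 2
    energies = filterbank(26, n_fft, sr, use_mel) @ spectrum
    return dct(np.log(energies + 1e-10), norm="ortho")[:n_coeffs]

# Example on a synthetic low-frequency frame standing in for a heart sound
sr = 2000
t = np.arange(0, 0.256, 1 / sr)
frame = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
print(cepstral_coeffs(frame, sr, use_mel=True)[:4])
print(cepstral_coeffs(frame, sr, use_mel=False)[:4])
```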
Abstract: The modeling and vibration of a flexible-link manipulator
with two flexible links and rigid joints are investigated; the approach
can include an arbitrary number of flexible links. Hamilton's principle
and a finite element approach are proposed to model the dynamics of
flexible manipulators. The links are assumed to deflect only due to
bending. The association between the elastic displacements of the links
is investigated, taking into account the coupling effects of elastic
motion and rigid motion. Flexible links are treated as Euler-Bernoulli
beams, and shear deformation is thus neglected. The dynamic behavior
due to the flexibility of the links is well demonstrated through
numerical simulation. The rigid-body motion and elastic deformations
are separated by linearizing the equations of motion around the
rigid-body reference path. Simulation results are shown for both
position and force trajectory tracking tasks in the presence of varying
parameters and unknown dynamics, with remarkably good performance. The proposed
method can be used in both dynamic simulation and controller
design.
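For readers unfamiliar with the finite element ingredients, the following Python sketch assembles the standard Euler-Bernoulli beam element stiffness and consistent mass matrices (shear deformation neglected, as in the abstract) and extracts bending frequencies for a single clamped-free element. The material and geometric values are arbitrary, and this one-element model is only an illustration, not the paper's multi-link formulation.

```python
import numpy as np

def beam_element_matrices(E, I, rho, A, L):
    """Stiffness and consistent mass matrices of one Euler-Bernoulli beam
    element (DOFs: w1, theta1, w2, theta2); shear deformation neglected."""
    K = (E * I / L**3) * np.array([
        [ 12,     6*L,    -12,    6*L   ],
        [ 6*L,    4*L**2, -6*L,   2*L**2],
        [-12,    -6*L,     12,   -6*L   ],
        [ 6*L,    2*L**2, -6*L,   4*L**2]])
    M = (rho * A * L / 420) * np.array([
        [ 156,    22*L,    54,   -13*L  ],
        [ 22*L,   4*L**2,  13*L, -3*L**2],
        [ 54,     13*L,    156,  -22*L  ],
        [-13*L,  -3*L**2, -22*L,  4*L**2]])
    return K, M

# Bending natural frequencies of a single clamped-free element
K, M = beam_element_matrices(E=70e9, I=1e-8, rho=2700, A=1e-4, L=0.5)
# Clamp node 1: keep only the DOFs of the free end (rows/cols 2 and 3)
Kf, Mf = K[2:, 2:], M[2:, 2:]
eigvals = np.linalg.eigvals(np.linalg.solve(Mf, Kf))
print(np.sqrt(np.sort(eigvals.real)) / (2 * np.pi))  # frequencies in Hz
```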
Abstract: In this paper three different approaches for person
verification and identification, i.e. by means of fingerprints, face and
voice recognition, are studied. Face recognition uses parts-based
representation methods and a manifold learning approach. The
assessment criterion is recognition accuracy. The techniques under
investigation are: a) Local Non-negative Matrix Factorization
(LNMF); b) Independent Components Analysis (ICA); c) NMF with
sparse constraints (NMFsc); d) Locality Preserving Projections
(Laplacianfaces). Fingerprint detection was approached by classical
minutiae (small graphical patterns) matching through image
segmentation by using a structural approach and a neural network as
decision block. For voice/speaker recognition, mel cepstral
and delta-delta mel cepstral analysis were used as the main methods,
in order to construct a supervised speaker-dependent voice recognition
system. The final decision (e.g. "accept/reject" for a verification
task) is taken by using a majority voting technique applied to the
three biometrics. The preliminary results, obtained for medium
databases of fingerprints, faces and voice recordings, indicate the
feasibility of our study and an overall recognition precision (about
92%) permitting the utilization of our system for a future complex
biometric card.
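A minimal sketch of the majority-voting fusion step described above: the overall decision accepts a claim when at least two of the three biometric verifiers accept. Function and argument names are illustrative.

```python
def fuse_decisions(fingerprint_ok, face_ok, voice_ok):
    """Majority vote over the three biometric verifiers:
    accept if at least two of the three modalities accept."""
    votes = [fingerprint_ok, face_ok, voice_ok]
    return "accept" if sum(votes) >= 2 else "reject"

# Example: face and voice accept, fingerprint rejects -> overall accept
print(fuse_decisions(fingerprint_ok=False, face_ok=True, voice_ok=True))
```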
Abstract: Markov games are a generalization of the Markov
decision process to a multi-agent setting. The two-player zero-sum
Markov game framework offers an effective platform for designing
robust controllers. This paper presents two novel controller design
algorithms that use ideas from game-theory literature to produce
reliable controllers that are able to maintain performance in the presence
of noise and parameter variations. A more widely used approach for
controller design is the H∞ optimal control, which suffers from high
computational demand and may, at times, be infeasible. Our approach
generates an optimal control policy for the agent (controller) via a
simple Linear Program enabling the controller to learn about the
unknown environment. The controller faces an unknown
environment, which in our formulation corresponds to
the behavior rules of the noise, modeled as the opponent. The proposed
controller architectures attempt to improve controller reliability by a
gradual mixing of algorithmic approaches drawn from the game
theory literature and the Minimax-Q Markov game solution
approach, in a reinforcement-learning framework. We test the
proposed algorithms on a simulated Inverted Pendulum Swing-up
task and compare their performance against standard Q-learning.
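The core of the Minimax-Q idea referred to above is solving a zero-sum matrix game with a simple linear program. The following Python sketch computes the controller's maximin mixed strategy with scipy.optimize.linprog; the payoff matrix is a toy example, not the inverted-pendulum task.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Optimal mixed strategy of the row player (the controller) in a
    zero-sum matrix game with payoff matrix A, found via a linear program."""
    m, n = A.shape
    # Decision variables: x_1..x_m (action probabilities) and v (game value)
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximise v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v <= x^T A[:, j] for every column j
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    b_eq = [1.0]                                   # probabilities sum to one
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the optimal policy mixes both actions equally
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
strategy, value = maximin_strategy(A)
print(strategy, value)    # approximately [0.5, 0.5] with game value 0
```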
Abstract: Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance-based have emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance-based methods have already been presented in the literature, not rarely with inconclusive and often contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance-based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance-based methods, namely principal component analysis, linear discriminant analysis and independent component analysis, and compares them on equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance-based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
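As a schematic illustration of the three appearance-based projections compared in the paper, the following Python sketch fits PCA, LDA and ICA subspaces with scikit-learn and sketches a distance-threshold verification rule. The data are random stand-ins for face images (the paper uses the XM2VTS database), and the dimensions and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Random stand-in data: 100 'face images' of 10 x 10 pixels from 10 subjects
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 100))
y = np.repeat(np.arange(10), 10)

# The three appearance-based projections compared in the paper
subspaces = {
    "PCA": PCA(n_components=20).fit(X).transform(X),
    "LDA": LinearDiscriminantAnalysis(n_components=9).fit(X, y).transform(X),
    "ICA": FastICA(n_components=20, random_state=0).fit(X).transform(X),
}

def verify(template, probe, threshold=10.0):
    """Accept the claimed identity if the projected probe lies close
    enough to the claimed subject's gallery template."""
    return np.linalg.norm(template - probe) < threshold

for name, Z in subspaces.items():
    # verify the first image of subject 0 against the second one
    print(name, Z.shape, verify(Z[0], Z[1]))
```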
Abstract: This paper presents a digital engineering library – the
Digital Mechanism and Gear Library, DMG-Lib – providing a multimedia
collection of e-books, pictures, videos and animations in the domain of
mechanisms and machines. The specific characteristic
of DMG-Lib is the enrichment and cross-linking of the different
sources. DMG-Lib e-books not only present pages as pixel images
but also selected figures augmented with interactive animations. The
presentation of animations in e-books increases the clarity of the
information.
To present the multimedia e-books and make them available in the
DMG-Lib internet portal, a special e-book reader called StreamBook
was developed for the optimal presentation of digitized books and to
enable reading the e-books as well as working efficiently and
individually with the enriched information. The objective is to
support different user tasks ranging from information retrieval to
development and design of mechanisms.
Abstract: High Strength Concrete (HSC) is defined as concrete
that meets a special combination of performance and uniformity
requirements that cannot be achieved routinely using conventional
constituents and normal mixing, placing, and curing procedures. It is
a highly complex material, which makes modeling its behavior a very
difficult task. This paper aims to show the possible applicability of
Neural Networks (NN) to predicting the slump in High Strength
Concrete (HSC). Neural Network models are constructed, trained and
tested using the available test data of 349 different concrete mix
designs of High Strength Concrete (HSC) gathered from a particular
Ready Mix Concrete (RMC) batching plant. The most versatile
Neural Network model is selected to predict the slump in concrete.
The data used in the Neural Network models are arranged in a format
of eight input parameters that cover the Cement, Fly Ash, Sand,
Coarse Aggregate (10 mm), Coarse Aggregate (20 mm), Water,
Super-Plasticizer and Water/Binder ratio. Furthermore, to test its
accuracy in predicting slump in concrete, the final selected model is
further applied to the data of 40 different concrete mix designs of
High Strength Concrete (HSC) taken from another batching plant.
The results are compared on the basis of error function (or
performance function).
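As a rough sketch of the modeling setup described above (eight mix-design inputs, one slump output, a held-out set of 40 mixes), the following Python example trains a small neural network with scikit-learn on synthetic data. The data generator, network size and error function are illustrative assumptions, not the RMC plant data or the paper's selected model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic mix-design data: 8 inputs per mix (cement, fly ash, sand,
# 10 mm aggregate, 20 mm aggregate, water, super-plasticizer, w/b ratio)
rng = np.random.default_rng(1)
X = rng.uniform(size=(349, 8))
y = 100 + 80 * X[:, 5] - 30 * X[:, 7] + rng.normal(scale=5, size=349)  # slump in mm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=40, random_state=1)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
model.fit(scaler.transform(X_tr), y_tr)

# Error (performance) function evaluated on the 40 held-out mixes
print(mean_squared_error(y_te, model.predict(scaler.transform(X_te))))
```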
Abstract: Tasks of an application program of an embedded system are managed by the scheduler of a real-time operating system
(RTOS). Most RTOSs adopt just fixed priority scheduling, which is not optimal in all cases. Some applications require earliest deadline
first (EDF) scheduling, which is an optimal scheduling algorithm.
In order to develop an efficient real-time embedded system, the
scheduling algorithm of the RTOS should be selectable. The paper presents a method to customize the scheduler using aspect-oriented
programming. We define aspects to replace the fixed priority scheduling mechanism of an OSEK OS with an EDF scheduling
mechanism. By using the aspects, we can customize the scheduler
without modifying the original source code. We have applied the
aspects to an OSEK OS and obtained a customized operating system with
EDF scheduling. The evaluation results show that the overhead of
aspect-oriented programming is small enough.
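For illustration of the scheduling policy itself (independent of OSEK and of the aspect-oriented mechanism used in the paper), the following Python sketch shows earliest-deadline-first dispatching with a deadline-ordered heap; task names and deadlines are arbitrary.

```python
import heapq

class EDFScheduler:
    """Earliest-deadline-first dispatching: the ready task with the
    nearest absolute deadline always runs next (illustrative sketch,
    not the OSEK/aspect implementation described in the paper)."""
    def __init__(self):
        self._ready = []          # min-heap ordered by absolute deadline

    def release(self, deadline, task_id):
        heapq.heappush(self._ready, (deadline, task_id))

    def dispatch(self):
        return heapq.heappop(self._ready)[1] if self._ready else None

sched = EDFScheduler()
sched.release(deadline=30, task_id="logger")
sched.release(deadline=10, task_id="sensor_read")
sched.release(deadline=20, task_id="actuator_update")
print(sched.dispatch())   # 'sensor_read' (earliest deadline) runs first
```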
Abstract: This paper presents the effect of driving a motor
vehicle on the stress levels of older drivers, indicated by monitoring
their heart rate increase whilst completing various everyday driving
tasks. Results suggest that whilst older female drivers' heart rates varied
more significantly than males', the actual age of a participant did not
result in a significant change in heart rate due to stress, within the age
group tested. The analysis of the results indicates the most stressful
manoeuvres undertaken by the older drivers and highlights the tasks
which were found difficult with a view to implementing technologies
to aid the more senior driver in automotive travel.
Abstract: This article presents the development of efficient
algorithms for the comparison of tablet copies. Image recognition has
specialized uses in digital systems such as medical imaging,
computer vision, defense, communication, etc. Comparison between
two images that look indistinguishable is a formidable task. Two
images taken from different sources might look identical but due to
different digitizing properties they are not. Moreover, small variations
in image information such as cropping, rotation, and slight
photometric alterations are problematic for matching
techniques. In this paper we introduce different matching
algorithms designed to facilitate, for art centers, identifying real
painting images from fake ones. Different vision algorithms for
local image features are implemented using MATLAB. In this
framework a Table Comparison Computer Tool “TCCT" is
designed to facilitate our research. The TCCT is a Graphical User
Interface (GUI) tool used to identify images by their shapes and
objects. The parameters of the vision system are fully accessible to the
user through this graphical user interface. For matching, it then
applies different description techniques that can identify the exact
figures of objects.
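As an illustration of local-feature matching of the kind the TCCT relies on (the paper's implementation is in MATLAB), the following Python sketch compares two images with ORB features and brute-force Hamming matching using OpenCV. The file names and distance threshold are hypothetical.

```python
import cv2

def match_score(path_a, path_b, max_distance=40):
    """Compare two images with ORB local features and brute-force
    Hamming matching; returns the number of good matches (sketch only,
    not the paper's MATLAB implementation)."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    return sum(1 for m in matches if m.distance < max_distance)

# A high score suggests the two reproductions come from the same original
print(match_score("painting_original.png", "painting_copy.png"))
```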
Abstract: Human head representations are usually based on
the morphological and structural components of a real model. Over
time it has become more and more necessary to achieve full virtual
models that comply rigorously with the specifications of human
anatomy. Still, making and using a model perfectly fitted to the
real anatomy is a difficult task, because it requires large hardware
resources and significant processing time. That is why it is
necessary to choose the best compromise solution, which keeps the
right balance between detail perfection and resource consumption,
in order to obtain facial animations with real-time
rendering. We will present here the way in which we achieved such a
3D system that we intend to use as a starting point for creating
facial animations with real-time rendering, used in medicine to find
and identify different types of pathologies.
Abstract: Little research has examined working memory
capacity (WMC) in signed language interpreters and deaf signers.
This paper presents the findings of a study that investigated WMC in
professional Australian Sign Language (Auslan)/English interpreters
and deaf signers. Thirty-one professional Auslan/English interpreters
(14 hearing native signers and 17 hearing non-native signers)
completed an English listening span task and then an Auslan working
memory span task, which tested their English WMC and their Auslan
WMC, respectively. Moreover, 26 deaf signers (6 deaf native signers
and 20 deaf non-native signers) completed the Auslan working
memory span task. The results revealed a non-significant difference
between the hearing native signers and the hearing non-native signers
in their English WMC, and a non-significant difference between the
hearing native signers and the hearing non-native signers in their
Auslan WMC. Moreover, the results yielded a non-significant
difference between the hearing native signers' English WMC and
their Auslan WMC, and a non-significant difference between the
hearing non-native signers' English WMC and their Auslan WMC.
Furthermore, a non-significant difference was found between the deaf
native signers and the deaf non-native signers in their Auslan WMC.
Abstract: Microarray data profiles gene expression on a whole-genome
scale and therefore provides a good way to study associations
between gene expression and the occurrence or progression of cancer.
More and more researchers have realized that microarray data are
helpful for predicting cancer samples. However, the dimension of the
gene expression data is much larger than the sample size, which makes
this task very difficult. Therefore, identifying the significant genes
causing cancer has become an urgent, hot, and hard research
topic. Many feature selection algorithms have been proposed in
the past focusing on improving cancer predictive accuracy at the
expense of ignoring the correlations between the features. In this
work, a novel framework (named SGS) is presented for stable gene
selection and efficient cancer prediction. The proposed framework
first performs a clustering algorithm to find gene groups in which
the genes have high correlation coefficients, then
selects the significant genes in each group with Bayesian Lasso and
the important gene groups with group Lasso, and finally builds a
prediction model on the shrunken gene space with an efficient
classification algorithm (such as SVM, 1NN, or regression). Experimental
results on real-world data show that the proposed framework often
outperforms existing feature selection and prediction methods such
as SAM, IG, and Lasso-type prediction models.
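A schematic sketch of the SGS pipeline described above, with plain Lasso standing in for the Bayesian Lasso and group Lasso steps and synthetic data standing in for real microarray profiles; the cluster count, regularization strength and classifier choice are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso
from sklearn.svm import SVC

# Synthetic microarray-like data: 60 samples, 500 genes, binary labels
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 500))
y = (X[:, :5].sum(axis=1) > 0).astype(int)      # only 5 genes are informative

# Step 1: group correlated genes by clustering their expression profiles
groups = KMeans(n_clusters=20, n_init=10, random_state=2).fit_predict(X.T)

# Step 2: within each group, keep the genes with non-zero Lasso weights
# (plain Lasso here; the paper uses Bayesian Lasso plus group Lasso)
selected = []
for g in range(20):
    idx = np.where(groups == g)[0]
    coef = Lasso(alpha=0.01).fit(X[:, idx], y).coef_
    selected.extend(idx[coef != 0])

# Step 3: train an efficient classifier on the shrunken gene space
clf = SVC(kernel="linear").fit(X[:, selected], y)
print(len(selected), round(clf.score(X[:, selected], y), 3))
```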
Abstract: Developing an accurate classifier for high dimensional microarray datasets is a challenging task due to the small available sample size. Therefore, it is important to determine a set of relevant genes that classify the data well. Traditionally, gene selection methods often select the top-ranked genes according to their discriminatory power. Often these genes are correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a Genetic Algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes that provides maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results in comparison to the results found in the literature, in terms of both classification accuracy and number of genes selected.
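As a sketch of the wrapper idea described above, the following Python example runs a small genetic algorithm whose fitness rewards cross-validated multiclass SVM accuracy and penalizes the number of selected genes. The data are synthetic, and the GA operators, population size and penalty weight are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic data: 80 samples, 200 genes, 4 tumour classes
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 200))
y = rng.integers(0, 4, size=80)

def fitness(mask):
    """Reward cross-validated multiclass SVM accuracy and penalise the
    number of selected genes (the weighting here is an illustrative choice)."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=3).mean()
    return acc - 0.001 * mask.sum()

def genetic_selection(pop_size=20, generations=10, p_mut=0.01):
    pop = rng.random((pop_size, X.shape[1])) < 0.05        # sparse initial gene masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]  # keep the fittest half
        cut = X.shape[1] // 2
        children = np.concatenate([parents[:, :cut],
                                   parents[::-1, cut:]], axis=1)  # one-point crossover
        children ^= rng.random(children.shape) < p_mut       # bit-flip mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return np.where(best)[0]

print(genetic_selection())    # indices of the selected genes
```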