Abstract: Finding the shortest path between two positions is a
fundamental problem in transportation, routing, and communications
applications. In robot motion planning, the robot should pass around
the obstacles touching none of them, i.e. the goal is to find a
collision-free path from a starting to a target position. This task has
many specific formulations depending on the shape of obstacles,
allowable directions of movements, knowledge of the scene, etc.
Research of path planning has yielded many fundamentally different
approaches to its solution, mainly based on various decomposition
and roadmap methods. In this paper, we show a possible use of
visibility graphs in point-to-point motion planning in the Euclidean
plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second
application area, investigated here, is focused on problems of finding
minimal networks connecting a set of given points in the plane using
either only straight connections between pairs of points (minimum
spanning tree) or allowing the addition of auxiliary points to the set
to obtain shorter spanning networks (minimum Steiner tree).
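As a rough illustration of one of the minimal-network constructions mentioned above, the minimum spanning tree over straight connections can be built with Prim's algorithm. This is a sketch with illustrative points (not data from the paper), assuming Euclidean edge weights:

```python
import math

def euclidean_mst(points):
    """Prim's algorithm: grow a minimum spanning tree over points in the
    plane, with edge weights given by Euclidean distance."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known edge linking each point to the tree
    parent = [None] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        # pick the cheapest point not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] is not None:
            edges.append((parent[u], u))
        # relax edges from u to points still outside the tree
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    return edges

# Four corners of a unit square: the MST consists of 3 unit-length edges.
tree = euclidean_mst([(0, 0), (1, 0), (1, 1), (0, 1)])
```

A minimum Steiner tree on the same four points would be shorter still, but requires adding auxiliary points and is NP-hard in general.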
Abstract: A protein residue contact map is a compact
representation of the secondary structure of a protein. Because of the
information held in the contact map, it has drawn the attention of
researchers in related fields, and a great deal of work has been done
throughout the past decade. Artificial intelligence approaches have
been widely adopted in related work, such as neural networks,
genetic programming, and Hidden Markov models, as well as support
vector machines. However, prediction performance has not
generalized well; it probably depends on the data used to train and
generate the prediction model. This situation shows the importance
of the features or information used in determining prediction
performance. In this research, a support vector machine was used to
predict protein residue contact maps on different combinations of
features in order to show and analyze the effectiveness of the
features.
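For context, a residue contact map is typically defined by thresholding pairwise residue distances. A minimal sketch follows; the 8 Å threshold and the toy coordinates are illustrative assumptions, not values from the paper:

```python
import math

def contact_map(coords, threshold=8.0):
    """Binary contact map: residues i and j are 'in contact' when the
    distance between their coordinates (e.g. C-alpha atoms, in angstroms)
    is below the threshold."""
    n = len(coords)
    return [[1 if i != j and math.dist(coords[i], coords[j]) < threshold else 0
             for j in range(n)] for i in range(n)]

# Toy chain of 4 residues spaced 3.8 A apart along the x-axis.
cmap = contact_map([(3.8 * i, 0.0, 0.0) for i in range(4)])
```

A predictor such as the SVM discussed above is then trained to output each entry of this binary matrix from sequence-derived features.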
Abstract: In applications such as telecommunications, hands-free communication, and recording, which need at least one microphone, the signal is usually corrupted by noise and echo. An important application is speech enhancement, which removes the noise and echo picked up by a microphone alongside the desired speech. Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving speech by recovering the desired speech signal from the noisy observations, especially in mobile communication. In this paper we reconstruct the speech signal, observed in additive background noise, using the Kalman filter technique to estimate the parameters of the autoregressive (AR) process in the state-space model; the output speech signal is obtained with MATLAB. Accurate estimation of the speech by the Kalman filter enhances it and reduces the noise; the actual and estimated values, which produce the reconstructed signals, are then compared and discussed.
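The Kalman recursion underlying such speech enhancement can be sketched for a first-order AR state-space model. The AR coefficient, noise variances and samples below are illustrative only (the paper works in MATLAB with estimated AR parameters):

```python
def kalman_ar1(y, a, q, r):
    """Scalar Kalman filter for an AR(1) state-space model:
       x[t] = a * x[t-1] + w,  w ~ N(0, q)   (speech state)
       y[t] = x[t] + v,        v ~ N(0, r)   (noisy microphone observation)
    Returns the filtered (enhanced) estimates of x."""
    x_hat, p = 0.0, 1.0
    out = []
    for yt in y:
        # predict step: propagate the state and its variance through the AR model
        x_pred = a * x_hat
        p_pred = a * a * p + q
        # update step: blend the prediction with the new noisy sample
        k = p_pred / (p_pred + r)          # Kalman gain
        x_hat = x_pred + k * (yt - x_pred)
        p = (1 - k) * p_pred
        out.append(x_hat)
    return out

# Noisy observations of a roughly constant level: the filtered output is
# smoother than the raw samples.
est = kalman_ar1([1.2, 0.8, 1.1, 0.9, 1.05], a=0.95, q=0.01, r=0.5)
```

In the full method, the AR order is higher and the coefficients themselves are estimated from the observed speech rather than fixed in advance.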
Abstract: The article presents a complete model of IS/IT
architecture exception governance. First, the assumptions of the
presented model are set out. Next, a generic
governance model is defined that serves as a basis for the architecture exception
governance. The architecture exception definition and its attributes
follow. The model respects well-known approaches to the area that
are described in the text, but it adopts a higher granularity of
description and expands the process view with the other necessary
governance components, such as roles, principles and policies, and tools, to
enable the implementation of the model in organizations. The
architecture exception process is decomposed into a set of processes
related to the architecture exception lifecycle, consisting of a set of
phases and architecture exception states. Finally, information
about my future research related to this area is provided.
Abstract: The effect of the discontinuity of the rail ends and the
presence of lower-modulus insulation material at the gap on the
variation of stresses in the insulated rail joint (IRJ) is presented. A
three-dimensional wheel-rail contact model in the finite element
framework is used for the analysis. It is shown that the maximum stress
occurs in the subsurface of the railhead when the wheel contact occurs
far away from the rail end and migrates to the railhead surface as the
wheel approaches the rail end; under this condition, the interface
between the rail ends and the insulation material suffers
significantly increased levels of stress concentration. The ratio of the
elastic moduli of the railhead and insulation material is found to alter
the levels of stress concentration. Numerical results indicate that a
higher-elastic-modulus insulating material can reduce the stress
concentration in the railhead but will generate higher stresses in the
insulation material, leading to earlier failure of the insulation material.
Abstract: In this paper three different approaches for person
verification and identification, i.e. by means of fingerprints, face and
voice recognition, are studied. Face recognition uses parts-based
representation methods and a manifold learning approach. The
assessment criterion is recognition accuracy. The techniques under
investigation are: a) Local Non-negative Matrix Factorization
(LNMF); b) Independent Components Analysis (ICA); c) NMF with
sparse constraints (NMFsc); d) Locality Preserving Projections
(Laplacianfaces). Fingerprint detection was approached by classical
minutiae (small graphical patterns) matching through image
segmentation, using a structural approach and a neural network as the
decision block. As to voice/speaker recognition, mel-cepstral
and delta-delta mel-cepstral analysis were used as the main methods
to construct a supervised, speaker-dependent voice recognition
system. The final decision (e.g. "accept-reject" for a verification
task) is taken by using a majority voting technique applied to the
three biometrics. The preliminary results, obtained for medium
databases of fingerprints, faces and voice recordings, indicate the
feasibility of our study and an overall recognition precision (about
92%) permitting the utilization of our system for a future complex
biometric card.
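The majority-voting fusion step described above can be sketched as follows (the three per-modality decisions are illustrative):

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse the per-modality decisions (fingerprint, face, voice) for a
    verification task by simple majority voting."""
    return Counter(decisions).most_common(1)[0][0]

# Two of the three biometric modalities accept, so the fused decision accepts.
final = majority_vote(["accept", "reject", "accept"])
```

With three binary decisions a tie is impossible, which is one reason an odd number of modalities is convenient for this fusion rule.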
Abstract: Markov games are a generalization of Markov
decision processes to a multi-agent setting. The two-player zero-sum
Markov game framework offers an effective platform for designing
robust controllers. This paper presents two novel controller design
algorithms that use ideas from game-theory literature to produce
reliable controllers that are able to maintain performance in the presence
of noise and parameter variations. A more widely used approach for
controller design is the H∞ optimal control, which suffers from high
computational demand and, at times, may be infeasible. Our approach
generates an optimal control policy for the agent (controller) via a
simple Linear Program enabling the controller to learn about the
unknown environment. In our formulation, the unknown environment
facing the controller corresponds to the behavior of the noise, which
is modeled as the opponent. The proposed
controller architectures attempt to improve controller reliability by a
gradual mixing of algorithmic approaches drawn from the game
theory literature and the Minimax-Q Markov game solution
approach, in a reinforcement-learning framework. We test the
proposed algorithms on a simulated Inverted Pendulum Swing-up
task and compare their performance against standard Q-learning.
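The "optimal control policy via a simple Linear Program" can be illustrated for a one-shot zero-sum matrix game. The matching-pennies payoff matrix below is a textbook example, not from the paper; this sketch assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy for the row player of a zero-sum matrix game,
    via the standard linear program: maximize the game value v subject to
    the strategy guaranteeing at least v against every opponent column."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # variables: p_1..p_m (strategy) and v (game value); linprog minimizes -v
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - p^T A[:, j] <= 0 for each j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                           # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]   # p >= 0, v unrestricted
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the optimal policy mixes 50/50 and the game value is 0.
strategy, value = solve_zero_sum([[1, -1], [-1, 1]])
```

In Minimax-Q, a game of this form is solved at every state using the learned Q-values as the payoff matrix, which is where the per-step linear program enters the algorithm.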
Abstract: This article presents the results of a study conducted to identify operational risks for information systems (IS) with service-oriented architecture (SOA). Analysis of current approaches to risk and system error classification revealed that system error classes have never been used for SOA risk estimation. Additionally, system error classes are not normally supported experimentally with real enterprise error data. In the study, several categories from various existing error classification systems are applied, and three new error categories with sub-categories are identified. As a part of operational risks, a new error classification scheme is proposed for SOA applications. It is based on errors of real information systems that are service providers for applications with service-oriented architecture. The proposed classification approach has been used to classify SOA system errors for two different enterprises (oil and gas industry, metal and mining industry). In addition, we have conducted research to identify possible losses from operational risks.
Abstract: Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging a collection of models when predicting a network is preferable to relying on a single high-scoring model. In this paper, two kinds of model searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which is better for DBN models. Different types of experiments have been carried out to provide a benchmark test of these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from comparing these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through synthetic data experiments as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.
Abstract: With the advance of information technology in the
new era, the use of the Internet to access data resources has
steadily increased, and huge amounts of data have become accessible
in various forms. Naturally, network providers and agencies
seek to prevent electronic attacks that may be harmful or may
be related to terrorist activities. This has led the
authorities to undertake a variety of methods to protect special
regions from harmful data. One of the most important approaches is
to use firewalls in the network facilities. The main objective of
firewalls is to stop the transfer of suspicious packets in several
ways. However, because of their blind packet stopping, high
processing power requirements and high cost, some providers are
reluctant to use firewalls. In this paper we propose a method to
find a discriminant function to distinguish between usual packets and
harmful ones by statistical processing of the network router logs.
By discriminating these data, an administrator may take appropriate
action against the user. This method is very fast and can be used
simply alongside Internet routers.
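One classical way to obtain such a discriminant function from logged packet statistics is Fisher's linear discriminant. This is a hedged sketch: the two log features and all sample values are invented for illustration, and the abstract does not specify the exact statistical procedure used:

```python
import numpy as np

def fisher_discriminant(usual, harmful):
    """Fisher's linear discriminant: a direction w maximizing between-class
    separation over within-class scatter, with a midpoint threshold."""
    X0, X1 = np.asarray(usual, float), np.asarray(harmful, float)
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2
    return w, threshold

def is_harmful(packet_features, w, threshold):
    """Classify a packet's log features; scores above the threshold fall on
    the 'harmful' side of the discriminant."""
    return bool(np.dot(w, packet_features) > threshold)

# Toy router-log features: (packets per second, mean payload size ratio).
usual = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2), (1.1, 2.1)]
harmful = [(5.0, 0.5), (5.2, 0.4), (4.8, 0.6), (5.1, 0.45)]
w, t = fisher_discriminant(usual, harmful)
```

Evaluating such a linear score per packet is cheap, which is consistent with the speed requirement stated above.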
Abstract: Blind signatures enable users to obtain valid signatures for a message without revealing its content to the signer. This paper presents a new blind signature scheme, i.e. an identity-based blind signature scheme with message recovery. Due to the message recovery property, the new scheme requires less bandwidth than identity-based blind signatures with similar constructions. The scheme is based on modified Weil/Tate pairings over elliptic curves, and thus requires smaller key sizes for the same level of security compared to previous approaches not utilizing bilinear pairings. Security and efficiency analyses for the scheme are provided in this paper.
Abstract: The volume of XML data exchange is increasing explosively, and the need for efficient mechanisms of XML data management is vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database systems. Benchmarking is the best way to highlight the pros and cons of different approaches. In this study, we use a common benchmarking scheme, known as XMark, to compare the most cited and newly proposed DTD-independent methods in terms of logical reads, physical I/O, CPU time and duration. We show the effect of the label path, of extracting values and storing them in another table, and of the type of join needed for each method's query answering.
Abstract: The academia-industry relationship is not that of
technology donor and acceptor, but is of an interactive and collaborative
nature, acknowledging and ensuring mutual respect for each other's
role and contributions with an eye to attaining the true purpose of
such relationships, namely, bringing about research-outcome
synergy. Indeed, academia-industry interactions are a system that
requires active and collaborative participations of all the
stakeholders.
This paper examines various issues associated with academic
institutions and industry collaboration with special attention to the
nature of resources and potentialities of stakeholders in the context of
knowledge management. This paper also explores the barriers of
academia-industry interaction. It identifies potential areas where
industry's participation with academia would be most effective for
synergism. Lastly, this paper proposes an integrated model of several
new collaborative approaches that are possible, mainly in the Indian
scenario to strengthen academia-industry interface.
Abstract: Current OCR technology does not allow small text
images, such as those found in web images, to be recognized
accurately. Our goal is to investigate new approaches to
recognizing very low resolution text images containing antialiased
character shapes.
This paper presents a preliminary study on the variability of
such characters and the feasibility to discriminate them by
using geometrical features. In the first stage we analyze the
distribution of these features. In the second stage we present a
study of their discriminative power for recognizing isolated
characters, using various rendering methods and font
properties. Finally, we present the results of our evaluation
tests, leading to our conclusions and future focus.
Abstract: The impact of noise upon quality of life has become an
important aspect of both urban and environmental policy throughout
Europe and in Turkey. Concern over the quality of urban
environments, including noise levels and the declining quality of green
space, has grown over the past decade, with increasing emphasis on
designing livable and sustainable communities. According to the World
Health Organization, noise pollution is the third most hazardous
environmental type of pollution, preceded only by air (gas
emission) and water pollution. The research was carried out in two
phases: in the first stage, noise and the plant types providing
absorption of noise were evaluated through a literature study, and in
the second stage, specific species (Juniperus horizontalis L., Spirea
vanhouetti Briot., Cotoneaster dammerii C.K., Berberis thunbergii
D.C., Pyracantha coccinea M. etc.) were selected for the city of
Konya. Trials were conducted on the highway of Konya. The largest
noise reduction values, compared to the control, were 6.3 dB(A),
4.9 dB(A) and 6.2 dB(A) for the groups formed by the bushes at
distances of 7 m, 11 m and 20 m from the source and plant widths of
5 m, 9 m and 20 m, respectively. In this paper, definitions
regarding noise and its sources are given, and the precautions
against noise are discussed together with its adverse
effects. Plantation design approaches, and suggestions
concerning the plant diversity to be used, which are peculiar to the
roadside, were developed to discuss the role and function of plant
material in reducing traffic noise.
Abstract: With increasing complexity in electronic systems,
there is a need for system-level anomaly detection and fault isolation.
Anomaly detection based on vector similarity to a training set is used
in this paper through two approaches: one that preserves the original
information, Mahalanobis Distance (MD), and the other that
compresses the data into its principal components, Projection Pursuit
Analysis. These methods have been used to detect deviations in
system performance from normal operation and for critical parameter
isolation in multivariate environments. The study evaluates the
detection capability of each approach on a set of test data with known
faults against a baseline set of data representative of such "healthy"
systems.
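The Mahalanobis-distance branch of the approach above can be sketched as follows (the two-parameter "healthy" baseline and the fault reading are illustrative, not the paper's test data):

```python
import numpy as np

def mahalanobis_detector(train):
    """Fit mean and covariance on 'healthy' training data; return a function
    giving each new observation's Mahalanobis distance to that set."""
    X = np.asarray(train, float)
    mean = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X.T))
    def distance(x):
        d = np.asarray(x, float) - mean
        return float(np.sqrt(d @ cov_inv @ d))
    return distance

# Healthy baseline clustered around (1, 10); a far-off reading scores much
# higher, flagging a deviation from normal operation.
rng = np.random.default_rng(0)
healthy = rng.normal([1.0, 10.0], [0.1, 0.5], size=(200, 2))
md = mahalanobis_detector(healthy)
normal_score, fault_score = md([1.0, 10.0]), md([2.0, 5.0])
```

Because the distance is scaled by the covariance, it accounts for correlated multivariate parameters, which is what distinguishes it from a plain Euclidean threshold.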
Abstract: Information and Communication Technologies (ICT) in mathematical education form a very active field of research and innovation, where learning is understood as meaningful engagement with multiple linked representations rather than rote memorization. A great amount of literature, offering a wide range of theories, learning approaches, methodologies and interpretations, generally stresses the potential of ICT for teaching and learning. Despite the use of new learning approaches with ICT, students still experience difficulties in learning concepts relevant to understanding mathematics, and much remains unclear about the relationship between the computer environment, the activities it might support, and the knowledge that might emerge from such activities. Many questions arise in this regard: to what extent does the use of ICT help students in the process of understanding and solving tasks or problems? Is it possible to identify which aspects or features of students' mathematical learning can be enhanced by the use of technology? This paper highlights the value of integrating information and communication technologies (ICT) into the teaching and learning of mathematics (quadratic functions); it aims to investigate the effect of four instructional methods on students' mathematical understanding and problem solving. Quantitative and qualitative methods are used to report on 43 students in middle school. Results showed that mathematical thinking and problem solving evolve as students engage with ICT activities and learn cooperatively.
Abstract: Recently, a model multi-agent e-commerce system based on mobile buyer agents and transfer of strategy modules was proposed. In this paper a different approach to code mobility is introduced, where agent mobility is replaced by local agent creation supplemented by similar code mobility as in the original proposal. UML diagrams of agents involved in the new approach to mobility and the augmented system activity diagram are presented and discussed.
Abstract: The feature extraction method(s) used to recognize
hand-printed characters play an important role in ICR applications.
In order to achieve a high recognition rate for a recognition system,
the choice of a feature suited to the given script is certainly an
important task. Even if a new feature needs to be designed for a
given script, it is essential to know the recognition ability of the
existing features for that script. The Devanagari script is used in
various Indian languages besides Hindi, the mother tongue of the majority
of Indians. This research examines a variety of feature extraction
approaches, which have been used in various ICR/OCR applications,
in context to Devanagari hand-printed script. The study is conducted
theoretically and experimentally on more than 10 feature extraction
methods. The various feature extraction methods have been evaluated
on Devanagari hand-printed database comprising more than 25000
characters belonging to 43 alphabets. The recognition ability of the
features has been evaluated using three classifiers, i.e. k-NN, MLP
and SVM.
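A k-NN classification step of the kind used in the evaluation can be sketched as follows (the 2-D features and the two class labels are toy values, not the Devanagari data):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority label among its k nearest
    training samples (Euclidean distance)."""
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Toy 2-D feature vectors for two character classes.
train = [((0.1, 0.2), "ka"), ((0.2, 0.1), "ka"), ((0.15, 0.15), "ka"),
         ((0.9, 0.8), "kha"), ((0.8, 0.9), "kha"), ((0.85, 0.85), "kha")]
label = knn_predict(train, (0.12, 0.18))
```

Real feature vectors from the extraction methods under study are much higher-dimensional, but the classification rule is the same.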
Abstract: One of the long-standing challenges in mobile robotics is the ability to navigate autonomously, avoiding modeled and unmodeled obstacles, especially in crowded and unpredictably changing environments. A successful way of structuring the navigation task in order to deal with this problem is to use behavior-based navigation approaches. In this study, issues of individual behavior design and action coordination of the behaviors are addressed using fuzzy logic. A layered approach is employed in this work, in which a supervision layer makes a context-based decision as to which behavior(s) to process (activate), rather than processing all behaviors and then blending the appropriate ones; as a result, time and computational resources are saved.