Abstract: This paper presents a sensing system for 3D sensing and mapping by a tracked mobile robot with an arm-type sensor-movable unit and a laser range finder (LRF). The arm-type unit is mounted on the robot and the LRF is installed at the end of the unit. This mechanism enables the sensor to change its position and orientation so as to avoid occlusions caused by the terrain. The sensing system can also change the height of the LRF while keeping its orientation level, which allows efficient sensing. In this kind of mapping, it may be difficult for a moving robot to apply mapping algorithms such as the iterative closest point (ICP), because the sets of 2D scan data taken at different sensor heights may be far apart on a common surface. The authors therefore applied interpolation to generate plausible model data for ICP. The results of several experiments demonstrate the validity of the proposed sensing and mapping approach.
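A minimal sketch of the interpolation idea described above: generating plausible model points between two 2D scan lines taken at different LRF heights. Linear interpolation per bearing index is an illustrative assumption; the paper's exact interpolation scheme may differ.

```python
# Sketch: interpolate intermediate scan lines between two LRF scans taken at
# different heights, to densify the model data fed to ICP (illustrative only).

def interpolate_scans(scan_low, scan_high, n_levels):
    """scan_low/scan_high: lists of (x, y, z) points, one per bearing index.
    Returns n_levels interpolated scan lines at intermediate heights."""
    levels = []
    for k in range(1, n_levels + 1):
        t = k / (n_levels + 1)          # interpolation fraction in (0, 1)
        line = [tuple(a + t * (b - a) for a, b in zip(p, q))
                for p, q in zip(scan_low, scan_high)]
        levels.append(line)
    return levels

low = [(1.0, 0.0, 0.0), (1.2, 0.5, 0.0)]   # scan at the lower height
high = [(1.0, 0.0, 1.0), (1.4, 0.5, 1.0)]  # scan at the upper height
mid = interpolate_scans(low, high, 1)[0]    # one scan line halfway up
```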
Abstract: In recent years, Radio Frequency Identification (RFID) has attracted considerable research interest, especially for indoor positioning, since the innate properties of RFID lend themselves to this task. Many algorithms and schemes have been proposed for RFID-based positioning systems, but most of them lack environmental consideration, which leads to inaccuracy in practice. In this research, a number of RFID indoor-positioning algorithms and schemes are discussed to assess their effectiveness in practical application, and some rules for achieving accurate positioning are summarized. In addition, a new term, "Noise Factor", is introduced to describe the signal loss between the target and an obstacle. As a result, performance data can be obtained experimentally rather than by simulation alone, and the performance of the positioning system can be characterized more realistically.
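The abstract does not give the exact definition of its "Noise Factor", but the idea of an obstacle-induced signal loss can be sketched with the common log-distance path-loss model plus an additive attenuation term; the model form and parameter values below are assumptions, not the authors' formulation.

```python
import math

# Sketch: log-distance path-loss RSSI model with an additive obstacle loss
# term, loosely in the spirit of the paper's "Noise Factor" (assumed form).

def rssi(d, rssi_at_1m=-40.0, n=2.0, noise_factor=0.0):
    """Predicted RSSI (dBm) at distance d metres. `noise_factor` models the
    extra signal loss (dB) caused by an obstacle between tag and reader."""
    return rssi_at_1m - 10.0 * n * math.log10(d) - noise_factor

free = rssi(4.0)                          # line-of-sight reading
blocked = rssi(4.0, noise_factor=6.0)     # obstacle adds 6 dB of loss
```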
Abstract: Due to the coexistence of different Radio Access Technologies (RATs), Next Generation Wireless Networks (NGWN) are predicted to be heterogeneous in nature. The coexistence of different RATs creates a need for Common Radio Resource Management (CRRM) to support the provision of Quality of Service (QoS) and the efficient utilization of radio resources. RAT selection algorithms are part of the CRRM algorithms. Simply put, their role is to decide whether an incoming call can be accommodated by the heterogeneous wireless network and, if so, which of the available RATs is most suitable to serve the call and admit it. The goal of a RAT selection algorithm is to guarantee the QoS requirements of all accepted calls while providing the most efficient utilization of the available radio resources. Conventional call admission control algorithms are designed for homogeneous wireless networks and do not fit the heterogeneous wireless networks that characterize the NGWN. Therefore, there is a need to develop RAT selection algorithms for heterogeneous wireless networks. In this paper, we propose an approach for RAT selection that includes receiving different criteria, assessing them and making decisions, and then selecting the most suitable RAT for incoming calls. A comprehensive survey of different RAT selection algorithms for heterogeneous wireless networks is also presented.
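The receive-criteria / assess / select flow described above can be sketched as a simple multi-criteria weighted scoring over candidate RATs; the criteria names, weights, and admission flag below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: admission check followed by weighted-score RAT selection
# (criteria and weights are assumed for illustration).

def select_rat(rats, weights):
    """rats: {name: {criterion: normalized score in [0, 1]}}.
    Returns the admissible RAT with the highest weighted score, or None."""
    best, best_score = None, float('-inf')
    for name, scores in rats.items():
        if not scores.get('can_admit', True):
            continue                      # call admission check first
        s = sum(weights[c] * scores.get(c, 0.0) for c in weights)
        if s > best_score:
            best, best_score = name, s
    return best

rats = {
    'UMTS':  {'bandwidth': 0.3, 'load': 0.8, 'cost': 0.9},
    'WLAN':  {'bandwidth': 0.9, 'load': 0.4, 'cost': 0.7},
    'WiMAX': {'bandwidth': 0.7, 'load': 0.6, 'cost': 0.5, 'can_admit': False},
}
weights = {'bandwidth': 0.5, 'load': 0.3, 'cost': 0.2}
chosen = select_rat(rats, weights)
```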
Abstract: A small-signal model for a pseudomorphic high electron mobility transistor (PHEMT) is presented. Both extrinsic and intrinsic circuit elements of the small-signal model are determined using a genetic algorithm (GA) as a stochastic global search and optimization tool. Parameter extraction of the small-signal model is performed on a 200-μm gate-width AlGaAs/InGaAs PHEMT. The equivalent-circuit elements of the proposed 18-element model are determined directly from the measured S-parameters. The GA is used to extract the parameters of the proposed small-signal model from 0.5 GHz up to 18 GHz.
Abstract: Researchers have been applying artificial/computational intelligence (AI/CI) methods to computer games. In this research field, further research is required to compare AI/CI methods with respect to each game application. In this paper, we report our experimental results on the comparison of evolution strategies (ES), genetic algorithms (GA) and their hybrids, applied to evolving controller agents for Mario AI. GA revealed its advantage in our experiment, whereas the expected ability of ES to exploit (fine-tune) solutions was not clearly observed. The blend crossover operator and the mutation operator of GA may have contributed to exploring the vast search space.
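The two GA operators the abstract credits can be sketched as follows for real-valued controller genomes; the parameter values (alpha, mutation rate, noise scale) are illustrative assumptions, not the paper's settings.

```python
import random

# Sketch of blend crossover (BLX-alpha) and Gaussian mutation for
# real-valued genomes (parameter values are illustrative).

def blend_crossover(p1, p2, alpha=0.5, rng=random):
    """For each gene, sample uniformly from the interval spanned by the two
    parent genes, extended by alpha times the span on both sides."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child

def mutate(genome, rate=0.05, sigma=0.1, rng=random):
    """Perturb each gene with probability `rate` by Gaussian noise."""
    return [g + rng.gauss(0.0, sigma) if rng.random() < rate else g
            for g in genome]

child = blend_crossover([0.0, 1.0], [1.0, 3.0])
```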
Abstract: In this paper, we introduce an effective strategy for
subgoal division and ordering based upon recursive subgoals and
combine this strategy with a genetic-based planning approach. This
strategy can be applied to domains with conjunctive goals. The main
idea is to recursively decompose a goal into a set of serializable
subgoals and to specify a strict ordering among the subgoals.
Empirical results show that the recursive subgoal strategy reduces the
size of the search space and improves the quality of solutions to
planning problems.
Abstract: Markov games can be effectively used to design controllers for nonlinear systems. This paper presents two novel controller-design algorithms that incorporate ideas from the game-theory literature to address safety and consistency issues of the 'learned' control strategy. A more widely used approach for controller design is H∞ optimal control, which suffers from high computational demand and may at times be infeasible. We generate an optimal control policy for the agent (controller) via a simple Linear Program, enabling the controller to learn about the unknown environment. In our formulation, this unknown environment corresponds to the behavior rules of the noise, modeled as the opponent. The proposed approaches aim to achieve 'safe-consistent' and 'safe-universally consistent' controller behavior by hybridizing the 'min-max', 'fictitious play' and 'cautious fictitious play' approaches drawn from game theory. We empirically evaluate the approaches on a simulated inverted-pendulum swing-up task and compare their performance against standard Q-learning.
Abstract: This work presents a methodology for the design and manufacture of propellers, oriented to the experimental verification of theoretical results based on the combined model. The design process begins with Matlab algorithms whose output data contain the coordinates of the points that define the blade airfoils; in this case the NACA 6512 airfoil was used. The propeller blade was then modeled in NX 7 from the files exported by Matlab, with the help of surfaces. Later, the hub and the clamps were also modeled. Finally, NX 7 also made it possible to create post-processed files for the required machine: blocks of G & M codes whose form depends on the type of driver on the machine, with the file extension .ptp. These files made it possible to manufacture the blade and the hub of the propeller.
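The first design step, generating airfoil coordinates, can be sketched with the standard NACA 4-digit equations (for NACA 6512: 6% camber at 50% chord, 12% thickness). This is a minimal stand-in for the paper's Matlab routines, not the authors' code.

```python
import math

# Sketch: upper/lower surface coordinates of a NACA 4-digit airfoil on a
# unit chord, using the standard thickness and camber-line equations.

def naca4(m, p, t, n=50):
    """m: max camber, p: camber position, t: thickness (all as chord fractions).
    Returns lists of (x, y) points for the upper and lower surfaces."""
    upper, lower = [], []
    for i in range(n + 1):
        x = i / n
        # half-thickness distribution (open trailing edge coefficients)
        yt = 5 * t * (0.2969 * math.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                      + 0.2843 * x**3 - 0.1015 * x**4)
        if x < p:                                     # forward camber line
            yc = m / p**2 * (2 * p * x - x**2)
            dyc = 2 * m / p**2 * (p - x)
        else:                                         # aft camber line
            yc = m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2)
            dyc = 2 * m / (1 - p)**2 * (p - x)
        th = math.atan(dyc)
        upper.append((x - yt * math.sin(th), yc + yt * math.cos(th)))
        lower.append((x + yt * math.sin(th), yc - yt * math.cos(th)))
    return upper, lower

up, low = naca4(0.06, 0.5, 0.12)   # NACA 6512
```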
Abstract: Genetic algorithm (GA) based solution techniques are well suited to optimization because of their ability to perform a simultaneous multidimensional search. Many GA variants have been tried in the past to solve optimal power flow (OPF), a nonlinear problem of electric power systems. Issues such as the convergence speed and accuracy of the optimal solution obtained after a number of generations, and the handling of system constraints in OPF, remain subjects of discussion. The results obtained for GA-Fuzzy OPF on various power systems have shown faster convergence and lower generation costs compared to other approaches. This paper presents an enhanced GA-Fuzzy OPF (EGA-OPF) that uses penalty factors to handle line-flow constraints and load-bus voltage limits, both for the normal network and for a contingency case with congestion. In addition to a crossover- and mutation-rate adaptation scheme, which adapts the crossover and mutation probabilities for each generation based on the fitness values of previous generations, a block-swap operator is also incorporated in the proposed EGA-OPF. The line-flow limits and load-bus voltage-magnitude limits are handled by incorporating line-overflow and load-voltage penalty factors, respectively, in the fitness function of each chromosome. The effects of different penalty-factor settings are also analyzed under the contingent state.
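The penalty-factor handling of limits described above can be sketched as a penalized fitness function; the quadratic penalty form and the numeric values are illustrative assumptions, not the paper's settings.

```python
# Sketch: chromosome fitness = generation cost plus penalties for line
# overflows and load-bus voltages outside limits (lower is better).

def fitness(gen_cost, line_flows, flow_limits, voltages,
            v_min=0.95, v_max=1.05, k_flow=1000.0, k_volt=1000.0):
    """Quadratic penalties for each line-flow and voltage-limit violation."""
    penalty = 0.0
    for f, fmax in zip(line_flows, flow_limits):
        if abs(f) > fmax:                      # line overflow penalty
            penalty += k_flow * (abs(f) - fmax) ** 2
    for v in voltages:                         # load-bus voltage penalty
        if v < v_min:
            penalty += k_volt * (v_min - v) ** 2
        elif v > v_max:
            penalty += k_volt * (v - v_max) ** 2
    return gen_cost + penalty

ok = fitness(800.0, [0.9], [1.0], [1.00])     # feasible: no penalty
bad = fitness(790.0, [1.1], [1.0], [0.90])    # cheaper but infeasible
```

A feasible but slightly costlier chromosome thus outranks a cheaper one that violates a limit, which is the point of the penalty approach.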
Abstract: In the past few years, the use of wireless sensor networks (WSNs) potentially increased in applications such as intrusion detection, forest fire detection, disaster management and battle field. Sensor nodes are generally battery operated low cost devices. The key challenge in the design and operation of WSNs is to prolong the network life time by reducing the energy consumption among sensor nodes. Node clustering is one of the most promising techniques for energy conservation. This paper presents a novel clustering algorithm which maximizes the network lifetime by reducing the number of communication among sensor nodes. This approach also includes new distributed cluster formation technique that enables self-organization of large number of nodes, algorithm for maintaining constant number of clusters by prior selection of cluster head and rotating the role of cluster head to evenly distribute the energy load among all sensor nodes.
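The cluster-head rotation idea can be sketched as follows: each round, the node with the most residual energy in a cluster takes the head role. The energy costs and the fixed-cluster assumption are illustrative; this is not the paper's algorithm.

```python
# Sketch: rotate the cluster-head role to the highest-energy node each
# round, so the energy load is spread across the cluster (illustrative).

def rotate_heads(clusters, energy, head_cost=5.0, member_cost=1.0):
    """clusters: list of lists of node ids; energy: {node: residual energy}.
    Picks each cluster's highest-energy node as head, deducts the round's
    energy cost, and returns the chosen heads."""
    heads = []
    for members in clusters:
        head = max(members, key=lambda n: energy[n])
        for n in members:
            energy[n] -= head_cost if n == head else member_cost
        heads.append(head)
    return heads

energy = {1: 10.0, 2: 9.0, 3: 8.0}
clusters = [[1, 2, 3]]
first = rotate_heads(clusters, energy)    # node 1 has the most energy
second = rotate_heads(clusters, energy)   # the role rotates away from node 1
```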
Abstract: This paper presents the design of source-encoding calculator software that applies two famous algorithms from the field of information theory: the Shannon-Fano and Huffman schemes. The design makes it easy to apply the algorithms without resorting to the cumbersome, tedious and error-prone manual process of encoding signals for transmission. The work describes the design of the software, how it works, a comparison with related works, its efficiency, its usefulness in the field of information-technology studies, and the future prospects of the software for engineers, students, technicians and the like. The designed "Encodia" software has been developed, tested and found to meet the intended requirements. It is expected that this application will help students and teaching staff in their daily information-theory tasks. Work is ongoing to modify the tool so that it can also be more useful in research activities on source coding.
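The Huffman scheme the software implements can be sketched as follows (an illustrative implementation, not Encodia's actual code): build a prefix code by repeatedly merging the two lightest subtrees on a min-heap.

```python
import heapq
from collections import Counter

# Sketch: Huffman coding via a min-heap of partial codebooks.

def huffman_code(text):
    """Return {symbol: bitstring} for the given text."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): '0'}
    # heap entries: (weight, tie-breaker, {symbol: code suffix so far})
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # the two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
bits = ''.join(code[s] for s in "abracadabra")   # 23 bits for this text
```

Because the code is prefix-free, the bit stream decodes unambiguously by walking it left to right.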
Abstract: An accurate optimal design of laminated composite
structures may present considerable difficulties due to the complexity
and multi-modality of the functional design space. The Big Bang
– Big Crunch (BB-BC) optimization method is a relatively new
technique and has already proved to be a valuable tool for structural
optimization. In the present study the exceptional efficiency of the
method is demonstrated by an example of the lay-up optimization
of multilayered anisotropic cylinders based on a three-dimensional
elasticity solution. It is shown that, due to its simplicity and speed, BB-BC is much more efficient for this class of problems than genetic algorithms.
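The BB-BC loop itself is simple enough to sketch on a toy problem: scatter candidates (Big Bang), contract them to their fitness-weighted center of mass (Big Crunch), and shrink the scatter radius each iteration. The test function and parameter choices are illustrative, not the paper's lay-up problem.

```python
import random

# Sketch: Big Bang - Big Crunch minimization of a simple sphere function.

def bbbc_minimize(f, dim, bounds=(-5.0, 5.0), pop=40, iters=60, rng=None):
    rng = rng or random.Random(0)
    lo, hi = bounds
    pts = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(pts, key=f)
    for k in range(1, iters + 1):
        # Big Crunch: center of mass weighted by inverse fitness
        ws = [1.0 / (f(p) + 1e-12) for p in pts]
        tot = sum(ws)
        center = [sum(w * p[d] for w, p in zip(ws, pts)) / tot
                  for d in range(dim)]
        # Big Bang: scatter new points around the center, radius ~ 1/k
        r = (hi - lo) / k
        pts = [[center[d] + rng.gauss(0.0, 1.0) * r / 2 for d in range(dim)]
               for _ in range(pop)]
        cand = min(pts, key=f)
        if f(cand) < f(best):
            best = cand
    return best

sphere = lambda x: sum(v * v for v in x)
best = bbbc_minimize(sphere, dim=3)
```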
Abstract: In this paper we present a Feed-Forward Neural Network Autoregressive (FFNN-AR) model with genetic-algorithm training optimization for predicting the gross-domestic-product growth of six countries. Specifically, we propose a kind of weighted regression, which can be used for econometric purposes, where the initial inputs are multiplied by the final optimum input-to-hidden-layer weights of the trained neural network. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasts significantly outperform those of the autoregressive model. Moreover, this technique can be used in Autoregressive-Moving-Average models, with and without exogenous inputs, and the genetic-algorithm training can be replaced by the error back-propagation algorithm.
Abstract: Many real-world data sets have a very high-dimensional feature space. Most clustering techniques use the distance or similarity between objects as the measure for building clusters, but in high-dimensional spaces distances between points become relatively uniform. In such cases, density-based approaches may give better results. Subspace-clustering algorithms automatically identify lower-dimensional subspaces of the higher-dimensional feature space in which clusters exist. In this paper we propose a new clustering algorithm, ISC (Intelligent Subspace Clustering), which tries to overcome three major limitations of the existing state-of-the-art techniques. First, ISC determines input parameters such as the ε-distance at the various levels of subspace clustering, which helps in finding meaningful clusters; a uniform-parameter approach is not suitable for different kinds of databases. Second, ISC implements dynamic and adaptive determination of meaningful clustering parameters based on a hierarchical filtering approach. The third and most important feature of ISC is its ability to learn incrementally and to dynamically include and exclude subspaces, which leads to better cluster formation.
Abstract: Mining frequent tree patterns has many useful applications in XML mining, bioinformatics, network routing, etc.
Most of the frequent subtree mining algorithms (i.e. FREQT,
TreeMiner and CMTreeMiner) use anti-monotone property in the
phase of candidate subtree generation. However, none of these
algorithms have verified the correctness of this property in tree
structured data. In this research it is shown that anti-monotonicity does not generally hold when using weighted support in tree-pattern discovery. As a result, tree mining algorithms that are based on this
property would probably miss some of the valid frequent subtree
patterns in a collection of trees. In this paper, we investigate the
correctness of anti-monotone property for the problem of weighted
frequent subtree mining. In addition we propose W3-Miner, a new
algorithm for full extraction of frequent subtrees. The experimental
results confirm that W3-Miner finds some frequent subtrees that the
previously proposed algorithms are not able to discover.
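The failure of anti-monotonicity under occurrence-based weighted support can be seen on a tiny example (my own illustration, not taken from the paper): in a tree with one 'a' node and two 'b' children, the one-node pattern {a} occurs once, while its superpattern a->b occurs twice.

```python
# Sketch: counterexample to anti-monotonicity of occurrence-count support.
# Trees are (label, children) tuples.

def count_label(tree, label):
    """Occurrences of the single-node pattern `label` in the tree."""
    lab, children = tree
    return (lab == label) + sum(count_label(c, label) for c in children)

def count_edge(tree, parent, child):
    """Occurrences of the two-node pattern parent->child in the tree."""
    lab, children = tree
    here = sum(lab == parent and c[0] == child for c in children)
    return here + sum(count_edge(c, parent, child) for c in children)

t = ('a', [('b', []), ('b', [])])   # one 'a' node with two 'b' children
occ_a = count_label(t, 'a')         # the subpattern occurs once
occ_ab = count_edge(t, 'a', 'b')    # its superpattern occurs twice
```

Since the superpattern's weighted support exceeds the subpattern's, pruning by the anti-monotone property would wrongly discard valid candidates.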
Abstract: Checkpointing is one of the commonly used techniques to provide fault tolerance in distributed systems so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, though not orthogonal, approaches to checkpointing mobile computing systems, namely time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol has been developed that uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control messages. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes fewer checkpoints and does not need to compute dependency relationships. A brief account of important and relevant works in both fields, time-based and index-based, has also been included in the presentation.
Abstract: This paper describes a new parallel sorting algorithm, based on Odd-Even Mergesort, called division and concurrent merges. The main idea of the algorithm is for each processor to sort a part of the vector using a sequential algorithm, and then for the processors to work in pairs, merging two of these sorted sections into a larger section that is also sorted; after several iterations the vector is completely sorted. The paper describes the implementation of the new algorithm in a message-passing environment (such as MPI). It also compares the experimental results obtained with the sequential quicksort algorithm and with parallel implementations (also on MPI) of quicksort and bitonic sort. The comparison was carried out on an 8-processor cluster under GNU/Linux running on a single PC processor.
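The sort-then-pairwise-merge scheme described above can be sketched as a sequential simulation: each "processor" sorts its chunk, then neighbouring pairs repeatedly merge their chunks and split the result (alternating even and odd pairings), which is the block form of odd-even transposition sort. This is an illustration of the idea, not the paper's MPI code.

```python
# Sketch: sequential simulation of chunked odd-even merge-split sorting.

def odd_even_merge_sort(vector, p):
    """Simulate p processors: local sort, then p merge-split phases."""
    n = len(vector)
    size = (n + p - 1) // p
    chunks = [sorted(vector[i * size:(i + 1) * size]) for i in range(p)]
    for phase in range(p):                 # p phases suffice for p chunks
        start = phase % 2                  # even phases pair (0,1),(2,3)...
        for i in range(start, p - 1, 2):   # odd phases pair (1,2),(3,4)...
            merged = sorted(chunks[i] + chunks[i + 1])
            cut = len(chunks[i])
            chunks[i], chunks[i + 1] = merged[:cut], merged[cut:]
    return [x for c in chunks for x in c]

data = [9, 3, 7, 1, 8, 2, 6, 4, 5, 0]
result = odd_even_merge_sort(data, p=4)
```

In the MPI version each merge-split is a message exchange between the paired processes, with the lower-ranked process keeping the smaller half.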
Abstract: This paper summarizes the results of experiments on finding effective features for the disambiguation of Turkish verbs. Word-sense disambiguation is an active area of investigation in which verbs play the dominant role: on average, verbs have more senses than other word types, so identifying effective features for verbs may lead to improvements for other word types as well. In this paper we consider only the syntactic features that can be obtained from the corpus, tested using several well-known machine learning algorithms.
Abstract: Sudoku is a kind of logic puzzle. Each puzzle consists of a board of 9×9 cells, divided into nine 3×3 subblocks, and a set of numbers from 1 to 9. The aim of the puzzle is to fill in every cell of the board with a number from 1 to 9 such that every row, every column, and every subblock contains each number exactly once. Sudoku puzzles are a combinatorial problem (NP-complete). They can be solved using a variety of techniques/algorithms such as genetic algorithms, heuristics, integer programming, and so on. In this paper, we propose a new approach to solving Sudoku by modelling the puzzles as block-world problems. In a block-world problem, there are a number of boxes on a table in a particular order or arrangement, and the objective is to change this arrangement into a target arrangement with the help of two types of robots. We present three models for Sudoku, modelling it as a parameterized multi-agent system: a multi-agent system consisting of several uniform/similar agents, where the number of agents is stated as a parameter of the system. We use the Temporal Logic of Actions (TLA) to formalize our models.
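For reference, one of the conventional techniques the abstract mentions can be sketched as a standard backtracking solver; this is not the paper's block-world/TLA model, and the example grid construction is my own illustration.

```python
# Sketch: standard backtracking Sudoku solver (0 marks an empty cell).

def solve(grid):
    """Solve the 9x9 grid in place; return True if a solution is found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    # v must be absent from the row, column and 3x3 subblock
                    if all(grid[r][j] != v for j in range(9)) and \
                       all(grid[i][c] != v for i in range(9)) and \
                       all(grid[i][j] != v
                           for i in range(r // 3 * 3, r // 3 * 3 + 3)
                           for j in range(c // 3 * 3, c // 3 * 3 + 3)):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False            # no value fits here: backtrack
    return True                        # no empty cell left: solved

# a valid completed grid via the classic shift pattern, then blank one row
full = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
puzzle = [row[:] for row in full]
puzzle[0] = [0] * 9
solved = solve(puzzle)
```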
Abstract: Artifacts are among the most important factors degrading CT image quality and play an important role in diagnostic accuracy. In this paper, some artifacts that typically appear in spiral CT are introduced. The different factors that cause the artifacts, such as the patient, the equipment and the interpolation algorithm, are discussed, and new developments and image-processing algorithms to prevent or reduce them are presented.