Abstract: This paper aims to present a survey of object
recognition/classification methods based on image moments. We
review various types of moments (geometric moments, complex
moments) and moment-based invariants with respect to various
image degradations and distortions (rotation, scaling, affine
transform, image blurring, etc.) which can be used as shape
descriptors for classification. We explain a general theory of how to
construct these invariants and also show a few of them in explicit
form. We review efficient numerical algorithms that can be used
for moment computation and demonstrate practical examples of
using moment invariants in real applications.
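The geometric moments and rotation invariants mentioned above can be illustrated with a short sketch (a minimal implementation of my own, not the paper's algorithms; Hu's first two invariants serve as the explicit examples):

```python
# Minimal sketch: geometric moments of a 2D image array and two of
# Hu's rotation invariants built from normalized central moments.
import numpy as np

def raw_moment(img, p, q):
    """Geometric moment m_pq = sum_x sum_y x^p y^q f(x, y)."""
    y, x = np.indices(img.shape)
    return np.sum((x ** p) * (y ** q) * img)

def central_moment(img, p, q):
    """Central moment mu_pq (translation-invariant)."""
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00
    yc = raw_moment(img, 0, 1) / m00
    y, x = np.indices(img.shape)
    return np.sum(((x - xc) ** p) * ((y - yc) ** q) * img)

def hu_first_two(img):
    """First two Hu invariants (rotation- and scale-invariant)."""
    m00 = raw_moment(img, 0, 0)
    def eta(p, q):  # scale-normalized central moment
        return central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Rotating a shape by 90 degrees leaves both invariants unchanged, which is the shape-descriptor property the survey exploits.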
Abstract: Among neural models the Support Vector Machine
(SVM) solutions are attracting increasing attention, mostly because
they eliminate certain crucial questions involved in neural network
construction. The main drawback of the standard SVM is its high
computational complexity; therefore a new technique, the
Least Squares SVM (LS–SVM), has recently been introduced. In this paper we
present an extended view of Least Squares Support Vector
Regression (LS–SVR), which enables us to develop new
formulations and algorithms for this regression technique. By
manipulating the linear equation set, which embodies all information
about the regression in the learning process, some new methods are
introduced to simplify the formulations, speed up the calculations,
and/or provide better results.
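The linear equation set referred to above can be sketched in the standard Suykens-style LS-SVR formulation (a minimal illustration of my own; the RBF kernel and parameter values are assumptions, not the paper's choices):

```python
# Minimal LS-SVR sketch: training reduces to one linear system
#   [ 0   1^T          ] [b    ]   [0]
#   [ 1   K + I/gamma  ] [alpha] = [y]
# where K is the kernel (Gram) matrix and gamma the regularization weight.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)       # the whole learning step
    return sol[1:], sol[0]              # alpha, b

def lssvr_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Because all the information sits in this one equation set, reformulations that restructure or approximate the solve directly translate into faster or simpler training, which is the lever the abstract describes.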
Abstract: In this paper, two models based on functional networks are
employed to solve classification problems. Functional networks
are generalized neural networks that permit the specification of
their initial topology using knowledge about the problem at hand.
After analyzing the available data and their relations, we
systematically discuss a numerical analysis method for
functional networks and apply two functional network models to
the XOR problem. The XOR problem, which cannot be solved by a
two-layer neural network, can be solved by a two-layer functional
network, revealing the computational power of functional
networks. The performance of the proposed models is validated
on classification problems.
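As a toy illustration of the XOR point (my own example, not the paper's two models): a functional network whose first layer applies the fixed basis functions {1, x, y, xy} reduces XOR to a linear least-squares problem in the basis coefficients, which is exactly solvable:

```python
# Illustrative sketch: XOR via a separable functional network with
# basis {1, x, y, xy}; learning the coefficients is linear least squares.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

# Design matrix of the network's basis functions
Phi = np.column_stack([np.ones(4), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(Phi, t, rcond=None)
pred = Phi @ coef  # reproduces XOR exactly: x + y - 2xy
```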
Abstract: To increase the precision and reliability of automatic control systems, we have to take into account the random factors affecting the control system. Thus, an operational matrix technique is used for the statistical analysis of a first-order plus time delay system with a uniform random parameter. Examples with deterministic and stochastic disturbances are considered to demonstrate the validity of the method. A comparison with the Monte Carlo method is made to show the computational effectiveness of the method.
Abstract: Due to heavy energy constraints in WSNs, clustering is
an efficient way to manage the energy of sensors. Many
methods have already been proposed in the area of clustering, and research is
ongoing to make clustering more energy efficient. In this paper
we propose a minimum spanning tree (MST) based clustering using a
divide and conquer approach. MST-based clustering was first
proposed in the 1970s for large databases. Here we take the divide and
conquer approach and implement it for wireless sensor networks
under the constraints attached to sensor networks. The divide and
conquer approach is implemented in such a way that we do not have to
construct the whole MST before clustering; instead, we find the edge
that will be part of the MST of the corresponding graph and, if that
edge can be removed according to certain constraints, divide the graph
into clusters immediately, thereby saving a great deal of computation.
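The classic MST-clustering idea the abstract builds on can be sketched as follows (a generic Zahn-style version using Kruskal's algorithm and a cut of the k-1 longest edges; the paper's WSN-specific early-cut variant and its constraints are not reproduced here):

```python
# Sketch of classic MST clustering: build the MST, cut the k-1 longest
# edges, and read clusters off the remaining connected components.
import math

def kruskal_mst(points):
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n))
    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    return mst

def mst_clusters(points, k):
    mst = sorted(kruskal_mst(points))
    keep = mst[: len(mst) - (k - 1)]   # drop the k-1 longest edges
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for _, i, j in keep:
        parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]
```

The divide and conquer refinement in the paper amounts to detecting such a long (removable) edge early and splitting there, rather than materializing the full MST first.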
Abstract: Markov games are a generalization of the Markov
decision process to a multi-agent setting. The two-player zero-sum
Markov game framework offers an effective platform for designing
robust controllers. This paper presents two novel controller design
algorithms that use ideas from the game-theory literature to produce
reliable controllers that are able to maintain performance in the presence
of noise and parameter variations. A more widely used approach for
controller design is H∞ optimal control, which suffers from high
computational demand and, at times, may be infeasible. Our approach
generates an optimal control policy for the agent (controller) via a
simple linear program, enabling the controller to learn about the
unknown environment. In our formulation, this unknown environment
corresponds to the behavior rules of the noise, modeled as the
opponent. The proposed controller architectures attempt to improve
controller reliability by a gradual mixing of algorithmic approaches
drawn from the game-theory literature and the Minimax-Q Markov game
solution approach, in a reinforcement-learning framework. We test the
proposed algorithms on a simulated inverted pendulum swing-up
task and compare their performance against standard Q-learning.
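The "optimal control policy via a simple linear program" step corresponds to the standard LP solution of a two-player zero-sum matrix game, as used in Minimax-Q value backups. A minimal sketch (my own, using scipy's linprog, not the paper's code):

```python
# Sketch: solve a zero-sum matrix game by LP. The row player picks a
# mixed strategy x maximizing the guaranteed value v against any
# opponent (column) action.
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Return (optimal mixed strategy for the row player, game value)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (strategy), v (value). Minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For each opponent action j: v - sum_i x_i A[i, j] <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one; v is free, x is nonnegative.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:m], res.x[-1]
```

In a Minimax-Q backup, A would hold the Q-values of (controller action, disturbance action) pairs at one state, and the LP returns the robust policy and state value.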
Abstract: We introduce a logic-based framework for database
updating under constraints. In our framework, the constraints are
represented as an instantiated extended logic program. When performing
an update, database consistency may be violated. We provide
an approach to maintaining database consistency, and study the
conditions under which the maintenance process is deterministic. We
show that the complexity of the computations and decision problems
presented in our framework is in each case polynomial time.
Abstract: A class of singularly perturbed boundary value problems is considered. To obtain its approximation, a simple upwind difference discretization is applied. We use a moving mesh iterative algorithm based on equidistribution of the arc-length function of the currently computed piecewise linear solution. First, a maximum norm a posteriori error estimate on an arbitrary mesh is derived using a method different from the one carried out by Chen [Advances in Computational Mathematics, 24(1-4) (2006), 197-212]. Then, based on the properties of the discrete Green's function and the presented a posteriori error estimate, we theoretically prove that the discrete solutions computed by the algorithm are first-order uniformly convergent with respect to the perturbation parameter ε.
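One step of the arc-length equidistribution underlying such a moving-mesh iteration can be sketched as follows (a generic de Boor-style remeshing step of my own, not the paper's full algorithm or its error estimator):

```python
# Sketch: redistribute mesh nodes so that each cell carries an equal
# share of the arc-length of the current piecewise linear solution.
import numpy as np

def equidistribute_arclength(x, u):
    """Return a new mesh on [x[0], x[-1]] equidistributing arc-length."""
    seg = np.sqrt(np.diff(x) ** 2 + np.diff(u) ** 2)  # arc-length per cell
    s = np.concatenate(([0.0], np.cumsum(seg)))       # cumulative arc-length
    # Invert the map: place nodes at equal increments of arc-length.
    targets = np.linspace(0.0, s[-1], len(x))
    return np.interp(targets, s, x)
```

Applied iteratively, this concentrates mesh points inside the boundary layer, where the solution's arc-length accumulates fastest.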
Abstract: This paper focuses on the development of bond graph
dynamic model of the mechanical dynamics of an excavating mechanism
previously designed to be used with small tractors, which are
fabricated in the Engineering Workshops of Jomo Kenyatta University
of Agriculture and Technology. To develop a mechanical dynamics
model of the manipulator, forward recursive equations similar to
those applied in iterative Newton-Euler method were used to obtain
kinematic relationships between the time rates of joint variables
and the generalized cartesian velocities for the centroids of the
links. Representing the obtained kinematic relationships in
bond-graphic form, while considering the link weights and momenta as
the elements, led to a detailed bond graph model of the manipulator.
The bond graph method was found to significantly reduce the number
of recursive computations required to arrive at a mechanical dynamic
model of a 3-DOF manipulator, indicating that the bond graph
method is more computationally efficient than the Newton-Euler
method in developing dynamic models of 3-DOF planar manipulators.
The model was verified by comparing the joint torque expressions
of a two link planar manipulator to those obtained using Newton-
Euler and Lagrangian methods as analyzed in robotic textbooks. The
expressions were found to agree indicating that the model captures
the aspects of rigid body dynamics of the manipulator. Based on
the model developed, actuator sizing and valve sizing methodologies
were developed and used to obtain the optimal sizes of the pistons
and spool valve ports, respectively. It was found that, using a pump
with the sized flow rate capacity, the engine of the tractor is able to
power the excavating mechanism in digging a sandy-loam soil.
Abstract: Vector quantization is a powerful tool for speech
coding applications. This paper deals with LPC coding of speech
signals using a new technique called Multi Switched Split
Vector Quantization. This is a hybrid of two product code vector
quantization techniques, namely multistage vector quantization
and switched split vector quantization. The Multi
Switched Split Vector Quantization technique quantizes the linear
predictive coefficients in terms of line spectral frequencies. The
results show that Multi Switched Split Vector Quantization
provides a better trade-off between bitrate, spectral distortion
performance, computational complexity, and memory requirements
when compared to switched split vector quantization, multistage
vector quantization, and split vector quantization. By
employing the switching technique at each stage of the vector
quantizer, the spectral distortion, computational complexity, and
memory requirements were greatly reduced. Spectral distortion was
measured in dB, computational complexity in floating point
operations (flops), and memory requirements in floats.
Abstract: Droplet size distributions in the cold spray of a fuel
are important to the observed combustion behavior. Specification of
droplet size and velocity distributions in the immediate downstream
of injectors is also essential as boundary conditions for advanced
computational fluid dynamics (CFD) and two-phase spray transport
calculations. This paper describes the development of a new model to
be incorporated into maximum entropy principle (MEP) formalism
for the prediction of the droplet size distribution in the droplet formation region.
The MEP approach can predict the most likely droplet size and
velocity distributions under a set of constraints expressing the
available information related to the distribution.
In this article, by considering the mechanisms of turbulence
generation inside the nozzle and wave growth on jet surface, it is
attempted to provide a logical framework coupling the flow inside the
nozzle to the resulting atomization process. The purpose of this paper
is to describe the formulation of this new model and to incorporate it
into the maximum entropy principle (MEP) by coupling sub-models
together using source terms of momentum and energy. Comparisons
between the model predictions and experimental data for a gas turbine
swirling nozzle and an annular spray indicate good agreement
between model and experiment.
Abstract: In this paper, a novel scheme is proposed for ownership identification and authentication using color images, deploying cryptography and digital watermarking as the underlying technologies. The former is used to compute the content-based hash and the latter to embed the watermark. The host image that will claim the rightful owner is first transformed from RGB to the YST color space, exclusively designed for watermarking-based applications. Geometrically YS ⊥ T, and the T channel corresponds to the chrominance component of the color image, making it suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks. The block size is important for enhanced localization, security, and low computation. Each block, along with the ownership information, is then processed by SHA-160, a one-way hash function, to compute the content-based hash, which is always unique and resistant to the birthday attack, instead of using MD5, which may give rise to the collision condition H(m)=H(m'). The watermark payload varies from block to block and is computed from the variance factor α. The quality of the watermarked images is quite high both subjectively and objectively. Our scheme is blind, computationally fast, and exactly locates the tampered region.
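The per-block content hashing can be sketched as follows (illustrative only: SHA-1 stands in here as the 160-bit one-way hash, and the channel array, block size, and ownership string are stand-ins, not the paper's exact scheme):

```python
# Sketch: divide one image channel into 4x4 non-overlapping blocks and
# compute a 160-bit content hash per block, salted with ownership info.
import hashlib
import numpy as np

def block_hashes(channel, owner, block=4):
    """Return a dict mapping block coordinates to 160-bit hex digests."""
    h, w = channel.shape
    out = {}
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            blk = channel[r:r + block, c:c + block]
            data = blk.astype(np.uint8).tobytes() + owner.encode()
            out[(r, c)] = hashlib.sha1(data).hexdigest()
    return out
```

Because each digest depends only on its own block plus the ownership string, a tampered pixel changes exactly one block's hash, which is what enables precise localization of the tampered region.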
Abstract: Recent developments in Soft computing techniques,
power electronic switches and low-cost computational hardware have
made it possible to design and implement sophisticated control
strategies for sensorless speed control of AC motor drives. Such an
attempt has been made in this work for sensorless speed control of
an Induction Motor (IM) by means of Direct Torque Fuzzy Control
(DTFC), a PI-type fuzzy speed regulator, and an MRAS speed estimator
strategy, which is highly nonlinear in nature. Direct torque
control is known to produce a quick and robust response in AC drive
systems. However, during steady state, torque, flux, and current ripples
occur. Thus, the performance of conventional DTC with a PI speed
regulator can be improved by implementing fuzzy logic techniques.
Certain important design issues, including the space vector
modulated (SVM) three-phase voltage source inverter, the DTFC design,
the generation of the reference torque using the PI-type fuzzy speed
regulator, and the sensorless speed estimator, have been resolved. The
proposed scheme is validated through extensive numerical simulations
in MATLAB. The simulation results indicate that sensorless speed
control of the IM with DTFC and a PI-type fuzzy speed regulator provides
satisfactory high dynamic and static performance compared to
conventional DTC with a PI speed regulator.
Abstract: The compression-absorption heat pump (C-A HP), one
of the promising heat recovery technologies that produce process hot
water using the low-temperature heat of wastewater, was evaluated by
computer simulation. A simulation program was developed based on
continuity and the first and second laws of thermodynamics. Both
the absorber and the desorber were modeled using the UA-LMTD method. In
order to prevent an unfeasible temperature profile and to reduce
calculation errors arising from the curved temperature profile of a
mixture, the heat loads were divided into many segments. A single-stage
compressor was considered. A compressor cooling load was also
taken into account. An isentropic efficiency was computed from the
map data. Simulation conditions were given based on the system
consisting of ordinarily designed components. The simulation results
show that most of the total entropy generation occurs during the
compression and cooling process, thus suggesting the possibility that
system performance can be enhanced if a rectifier is introduced.
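The UA-LMTD relation used for the absorber and desorber segments is the standard one (a generic formula sketch, not the paper's segmented solver):

```python
# Sketch: each heat-exchanger segment transfers Q = UA * LMTD, where
# LMTD is the log-mean of the terminal temperature differences.
import math

def lmtd(dt1, dt2):
    """Log-mean temperature difference of the two terminal differences."""
    if abs(dt1 - dt2) < 1e-12:      # limit case: equal differences
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def segment_heat_load(ua, dt1, dt2):
    """Heat transferred by one segment with conductance UA [W/K]."""
    return ua * lmtd(dt1, dt2)
```

Splitting the exchanger into many such segments, as the abstract describes, keeps each segment's temperature profile close to linear, so the LMTD assumption holds locally even when the mixture's overall profile is curved.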
Abstract: The present microfluidic study investigates the flow behavior within a Y-shaped micro-bifurcation in two similar flow configurations. We report a numerical and experimental investigation of the evolution of velocity profiles and of the secondary flows manifested at different Reynolds numbers (Re) and for two different boundary conditions. The experiments are performed using a specially designed setup based on optical microscopic devices. With this setup, direct visualizations and quantitative measurements of the path-lines are obtained. A micro-PIV measurement system is used to obtain the spatial evolution of velocity profile distributions in the main flow domains. The experimental data are compared with numerical simulations performed with the commercial computational code FLUENT in a 3D geometry with the same dimensions as the experimental one. The numerical flow patterns are found to be in good agreement with the experimental observations.
Abstract: In the proposed method for Web page-ranking, a
novel theoretic model is introduced and tested by examples of order
relationships among IP addresses. Ranking is induced using a
convexity feature, which is learned according to these examples
using a self-organizing procedure. We consider the problem of selforganizing
learning from IP data to be represented by a semi-random
convex polygon procedure, in which the vertices correspond to IP
addresses. Based on recent developments in our regularization
theory for convex polygons and corresponding Euclidean distance
based methods for classification, we develop an algorithmic
framework for learning ranking functions based on a Computational
Geometric Theory. We show that our algorithm is generic, and
present experimental results explaining the potential of our approach.
In addition, we explain the generality of our approach by showing its
possible use as a visualization tool for data obtained from diverse
domains, such as Public Administration and Education.
Abstract: Traditionally, VLSI implementations of spiking
neural nets have featured large neuron counts for fixed computations
or small exploratory, configurable nets. This paper presents the
system architecture of a large configurable neural net system
employing a dedicated mapping algorithm for projecting the targeted
biology-analog nets and dynamics onto the hardware with its
attendant constraints.
Abstract: This paper proposes a re-modification of the minimum
moment approach to resource leveling, that is, a further modification
of the modified minimum moment approach based on the traditional
method by Harris. The method is based on the critical path method.
The new approach differs from the earlier methods in the selection
criterion for the activity to be shifted when leveling the resource
histogram. In the traditional method, the improvement factor is
found first to select the activity for each possible day of shifting. In
the modified method, the maximum value of the product of resource rate
and free float is found first, and the improvement factor is then
calculated for the activity to be shifted. In the proposed
method, the activity to be shifted is selected first based on the
largest value of resource rate. The process is repeated for all the
remaining activities with possible shifts to obtain the updated
histogram. The proposed method significantly reduces the number of
iterations and is easier for manual computation.
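The difference between the two selection criteria can be sketched on toy data (illustrative activity names and numbers of my own, not from the paper):

```python
# Sketch of the two activity-selection criteria for resource leveling.
# Each activity is (name, resource_rate, free_float); only activities
# with positive free float can be shifted.
def select_modified(activities):
    """Modified minimum moment: max product of rate and free float."""
    movable = [a for a in activities if a[2] > 0]
    return max(movable, key=lambda a: a[1] * a[2])[0]

def select_proposed(activities):
    """Proposed method: largest resource rate among movable activities."""
    movable = [a for a in activities if a[2] > 0]
    return max(movable, key=lambda a: a[1])[0]
```

On the same network the two criteria can pick different activities first, which is what changes the iteration count between the methods.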
Abstract: Water vapour transport properties of gypsum block
are studied as functions of relative humidity using inverse analysis
based on a genetic algorithm. The computational inverse analysis is
performed for the relative humidity profiles measured along the
longitudinal axis of a rod sample. Within the performed transient
experiment, the studied sample is exposed to two environments with
different relative humidity, whereas the temperature is kept constant.
For the basic gypsum characterisation and for the assessment of the
input material parameters necessary for the computational application
of the genetic algorithm, the basic material properties of gypsum are
measured, as well as its thermal and water vapour storage parameters.
On the basis of the application of the genetic algorithm, the relative
humidity dependent water vapour diffusion coefficient and water
vapour diffusion resistance factor are calculated.
Abstract: Wireless channels are characterized by severe
bursty and location-dependent errors. Many packet scheduling
algorithms have been proposed for wireless networks to guarantee
fairness and delay bounds. However, most existing schemes do not
consider the differences in traffic nature among packet flows, which
causes the delay-weight coupling problem. In particular, serious
queuing delays may be incurred for real-time flows. In this paper, we
propose a scheduling algorithm that takes the traffic types of flows
into consideration when scheduling packets and that also provides
scheduling flexibility by trading off video quality to meet the
playback deadline.