Abstract: Support vector machines (SVMs) are considered to be
the best machine learning algorithms for minimizing the predictive
probability of misclassification. However, their drawback is that for
large data sets, computing the optimal decision boundary is
time-consuming, with cost growing with the size of the training set. Hence several
methods have been proposed to speed up the SVM algorithm. Here
three methods used to speed up the computation of the SVM
classifiers are compared experimentally using a musical genre
classification problem. The simplest method pre-selects a random
sample of the data before the application of the SVM algorithm. Two
additional methods use proximity graphs to pre-select data that are
near the decision boundary. One uses k-Nearest Neighbor graphs and
the other Relative Neighborhood Graphs to accomplish the task.
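The kNN-graph pre-selection idea can be sketched as follows (a minimal illustration on assumed toy data, not the paper's implementation): keep only training points whose k nearest neighbors contain an opposite-label point, then train the SVM on this reduced set.

```python
import numpy as np

def knn_boundary_preselect(X, y, k):
    """Keep only points whose k nearest neighbors contain an
    opposite-label point, i.e. points likely near the decision boundary."""
    X, y = np.asarray(X, float), np.asarray(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                         # exclude self
    nn = np.argsort(d2, axis=1)[:, :k]                   # k nearest neighbors
    keep = np.array([(y[nn[i]] != y[i]).any() for i in range(len(y))])
    return X[keep], y[keep], keep

# Two 1-D clusters: only the points facing the gap survive.
X = np.array([[0.0], [0.1], [0.2], [0.3], [0.4],
              [0.6], [0.7], [0.8], [0.9], [1.0]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
Xs, ys, keep = knn_boundary_preselect(X, y, k=3)
```

The surviving points are exactly the two facing the class gap, so the SVM is trained on a much smaller set that still determines the boundary.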
Abstract: Developing aid systems for medical diagnosis is difficult
because of the inhomogeneities present in MRI, the variability of the
data from one sequence to another, and various other sources of
distortion that accentuate the difficulty. A new automatic, contextual,
adaptive, and robust segmentation procedure based on MRI brain tissue
classification is described in this article. A first phase estimates
the probability density of the data with the Parzen-Rosenblatt method.
The classification procedure is fully automatic and makes no
assumption about either the number of clusters or their prototypes,
since both are detected automatically by a mathematical morphology
operator, the skeleton by influence zones (SKIZ). The initialization
of the prototypes, as well as of their number, is thus recast as an
optimization problem. Moreover, the procedure is adaptive, since it
takes into account the contextual information present at every voxel
through an adaptive and robust nonparametric Markov field (MF) model.
The number of misclassifications is reduced by using the Maximum
Posterior Marginal (MPM) criterion.
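The first phase, Parzen-Rosenblatt density estimation, can be sketched as follows (a generic 1-D illustration with a Gaussian kernel and an assumed bandwidth, not the paper's volumetric implementation):

```python
import numpy as np

def parzen_rosenblatt(samples, x, h):
    """Parzen-Rosenblatt density estimate with a Gaussian kernel:
    p(x) = (1 / (n*h)) * sum_i K((x - x_i) / h)."""
    samples = np.asarray(samples, float)
    u = (x[:, None] - samples[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel
    return K.sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 2000)        # data drawn from N(0, 1)
grid = np.linspace(-4.0, 4.0, 801)
p = parzen_rosenblatt(samples, grid, h=0.3)
area = p.sum() * (grid[1] - grid[0])        # should be close to 1
```

The estimate integrates to approximately one and peaks near the true mean, without assuming any parametric form for the density.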
Abstract: Power Spectral Density (PSD) computed by taking the Fourier transform of auto-correlation functions (Wiener-Khintchine theorem) gives better results for noisy data than the Periodogram approach. However, the computational complexity of the Wiener-Khintchine approach is higher than that of the Periodogram approach. For the computation of the short-time Fourier transform (STFT), this problem becomes even more prominent, since the PSD must be recomputed after every shift of the analysis window. In this paper, a recursive version of the Wiener-Khintchine theorem is derived using the sliding-DFT approach for computing the STFT. The computational complexity of the proposed recursive Wiener-Khintchine algorithm, for a window size of N, is O(N).
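The sliding-DFT recursion that underlies such algorithms updates each DFT bin in O(1), giving O(N) per window shift; a sketch (with an assumed window size, independent of the paper's derivation):

```python
import numpy as np

def sliding_dft_update(X, x_old, x_new, N):
    """One sliding-DFT step: given the DFT X of the window x[n..n+N-1],
    return the DFT of x[n+1..n+N] in O(N) operations:
    X'_k = (X_k - x_old + x_new) * exp(2j*pi*k/N)."""
    k = np.arange(N)
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

rng = np.random.default_rng(1)
x = rng.normal(size=64)
N = 16
X = np.fft.fft(x[:N])                    # one full DFT for the first window
for n in range(1, 10):                   # then O(N) work per shift
    X = sliding_dft_update(X, x[n - 1], x[n - 1 + N], N)
# X is now the DFT of x[9:9+N]; the PSD follows as |X|**2 / N
```

After nine recursive updates the result matches a full DFT of the shifted window, while each shift costs O(N) instead of O(N log N).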
Abstract: A subjectively influenced router for vehicles in a
four-junction traffic system is presented. The router is based on a
3-layer Backpropagation Neural Network (BPNN) and a greedy routing
procedure. The BPNN determines vehicle priorities from subjective
criteria. Both the subjective criteria and the routing procedure
depend on a user-defined routing plan for the vehicles. The routing
procedure selects vehicles from their junctions based on their
priorities and routes them concurrently through the traffic system.
That is, when the router is provided with desired vehicle-selection
criteria and a routing procedure, it routes vehicles with a reasonable
junction clearing time. The cost evaluation of the router determines
its efficiency. In the case of a routing conflict, the router routes
the vehicles in consecutive order and quarantines faulty vehicles.
The simulations presented indicate that this approach is an effective
strategy for structuring a subjective vehicle router.
Abstract: This paper focuses on a novel method for semantic
searching and retrieval of information about learning materials.
Metametadata encapsulate metadata instances by using the properties
and attributes provided by ontologies rather than describing learning
objects. A novel metametadata taxonomy has been developed which
provides the basis for a semantic search engine to extract, match and
map queries to retrieve relevant results. The use of ontological views
is a foundation for viewing the pedagogical content of metadata
extracted from learning objects by using the pedagogical attributes
from the metametadata taxonomy. Using the ontological approach
and metametadata (based on the metametadata taxonomy), we present
a novel semantic searching mechanism. These three strands, the
taxonomy, the ontological views, and the search algorithm, are
incorporated into a novel architecture (OMESCOD) which has been
implemented.
Abstract: In the MPEG and H.26x standards, motion estimation is
used to eliminate temporal redundancy. Since the motion estimation
stage is computationally very demanding, a hardware implementation
on a re-configurable circuit is crucial to meet the requirements of
real-time multimedia applications. In this paper, we present a
hardware architecture for motion estimation based on the "Full Search
Block Matching" (FSBM) algorithm. The architecture achieves minimum
latency, maximum throughput, and full utilization of hardware
resources such as embedded memory blocks by combining pipelining
and parallel processing techniques. Our design is described in VHDL,
verified by simulation, and implemented on a Stratix II
EP2S130F1020C4 FPGA. Experimental results show that the optimum
operating clock frequency of the proposed design is 89 MHz, which
achieves a throughput of 160M pixels/sec.
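The FSBM algorithm itself can be sketched in software as a reference model (the paper's contribution is the hardware architecture; block size and search range here are assumed for illustration):

```python
import numpy as np

def fsbm(ref_block, search_area, p):
    """Full Search Block Matching: exhaustively test every displacement
    in [-p, p]^2 and return the one minimizing the sum of absolute
    differences (SAD)."""
    B = ref_block.shape[0]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            cand = search_area[p + dy:p + dy + B, p + dx:p + dx + B]
            sad = np.abs(ref_block.astype(int) - cand.astype(int)).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (40, 40))
block = frame[16:24, 16:24]                    # 8x8 block at (16, 16)
# a second frame whose content is shifted by (2, -3) pixels
prev = np.roll(np.roll(frame, 2, axis=0), -3, axis=1)
search = prev[16 - 4:24 + 4, 16 - 4:24 + 4]    # +/-4 search window
mv, sad = fsbm(block, search, p=4)
```

The exhaustive search recovers the true displacement with zero SAD; its regular, data-independent loop structure is what makes it attractive for the pipelined, parallel hardware described above.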
Abstract: In this note, the robust static output feedback
stabilisation of an induction machine is addressed. The machine is
described by a nonhomogeneous bilinear model with structural
uncertainties, and the feedback gain is computed via an iterative LMI
(ILMI) algorithm.
Abstract: Quick training algorithms and accurate solution
procedures for incremental learning aim at improving the efficiency
of SVR training, but each has drawbacks: the former fail to converge
on a changeable training set, and the latter are inefficient on
massive datasets. To address these problems, a new training
algorithm for a changeable training set, named the Approximation
Incremental Training Algorithm (AITA), is proposed. This paper
explores the reason for the nonconvergence theoretically, discusses
the realization of AITA, and finally demonstrates the benefits of
AITA in both precision and efficiency.
Abstract: Although much research in cluster analysis has considered
feature weights, little effort has been made on sample weights.
Recently, Yu et al. (2011) considered a probability distribution over
a data set to represent its sample weights and then proposed
sample-weighted clustering algorithms. In this paper, we give a
sample-weighted version of generalized fuzzy clustering
regularization (GFCR), called sample-weighted GFCR (SW-GFCR).
Experimental results and comparisons demonstrate that the proposed
SW-GFCR is more effective than most clustering algorithms.
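To illustrate how sample weights enter such algorithms, here is a minimal sample-weighted fuzzy c-means (an illustrative sketch only; the paper's SW-GFCR adds a regularization term not shown here):

```python
import numpy as np

def sw_fcm(X, w, c, m=2.0, iters=50):
    """Sample-weighted fuzzy c-means: sample weights w_i scale each
    point's contribution to the cluster-center update."""
    n = len(X)
    V = X[np.linspace(0, n - 1, c).astype(int)].astype(float)  # initial centers
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))          # fuzzy memberships
        U = U / U.sum(axis=1, keepdims=True)
        g = w[:, None] * U ** m               # weighted fuzzy memberships
        V = (g[:, :, None] * X[:, None, :]).sum(0) / g.sum(0)[:, None]
    return V, U

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),    # cluster near (0, 0)
               rng.normal(3.0, 0.1, (50, 2))])   # cluster near (3, 3)
w = np.ones(len(X))                              # uniform sample weights
V, U = sw_fcm(X, w, c=2)
```

With uniform weights this reduces to standard fuzzy c-means; non-uniform weights bias the centers toward high-weight samples.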
Abstract: Today, transport and logistics systems are often tightly
integrated into production. Lean production and just-in-time delivery create multiple constraints that have to be fulfilled. As transport networks have often evolved over time, they are very
expensive to change. This paper describes a discrete-event simulation
system which simulates transportation models using real-time
resource routing and collision avoidance. It allows the specification
of custom control algorithms and the validation of new strategies.
The simulation is integrated into a virtual reality (VR) environment
and can be displayed in 3-D to show its progress. Simulation elements
can be selected through VR metaphors. All data gathered during the simulation can be presented in a detailed summary afterwards. The included cost-benefit calculation can help to optimize the financial outcome. The operation of this approach is shown by the example of a timber harvest simulation.
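The core of any discrete-event simulation of this kind is a time-ordered event queue; a minimal sketch (a generic illustration with a single shared resource standing in for collision avoidance, not the described system; all names are illustrative):

```python
import heapq

def simulate(events):
    """Minimal discrete-event loop: events are (time, name, duration)
    transport jobs; one shared resource delays a job until the
    resource is free, standing in for collision avoidance."""
    queue = list(events)
    heapq.heapify(queue)                 # pending events ordered by time
    free_at = 0.0                        # when the shared resource is free
    log = []
    while queue:
        t, name, dur = heapq.heappop(queue)
        start = max(t, free_at)          # wait if the resource is busy
        free_at = start + dur
        log.append((name, start, free_at))
    return log

# Three transport jobs; the second must wait for the first to finish.
log = simulate([(0.0, "truckA", 5.0), (2.0, "truckB", 3.0),
                (9.0, "truckC", 1.0)])
```

The resulting log, with start and end times per job, is the raw material for the detailed post-simulation summary and cost-benefit calculation mentioned above.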
Abstract: Salient points are frequently used to represent local
properties of an image in content-based image retrieval. In this
paper, we present a reduction algorithm that extracts the locally
most salient points such that they not only give a satisfying
representation of an image but also make the image retrieval process
efficient. The algorithm recursively reduces the continuous point set
by its corresponding saliency values in a top-down approach. The
resulting salient points are evaluated with an image retrieval system
using the Hausdorff distance. The experiments show that our method
is robust and that the extracted salient points provide better
retrieval performance compared with other point detectors.
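The Hausdorff distance used in such evaluations can be computed directly for small point sets (a straightforward sketch on toy points, not the paper's retrieval system):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between point sets A and B:
    the maximum of the two directed distances max_a min_b ||a - b||."""
    D = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
d = hausdorff(A, B)    # dominated by (0, 3), which is 2 away from A
```

Because the measure is driven by the worst-matched point, reducing an image to its most salient points changes the distance little while making each comparison much cheaper.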
Abstract: Projection methods, usually viewed as methods for
computing eigenvalues, can also be used to estimate pseudospectra.
This paper proposes a class of projection methods for computing
the pseudospectra of large-scale matrices, comprising an orthogonal
projection method and an oblique projection method. This possibility
may be of practical importance in applications involving large-scale,
highly nonnormal matrices. Numerical algorithms are given, and
numerical experiments illustrate the efficiency of the new
algorithms.
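For reference, the quantity being approximated can be evaluated densely: the epsilon-pseudospectrum of A is the region where sigma_min(zI - A) is at most epsilon. A brute-force grid baseline (which projection methods are designed to avoid for large matrices):

```python
import numpy as np

def pseudospectrum(A, xs, ys):
    """Evaluate sigma_min(zI - A) on a grid of points z = x + iy;
    the eps-pseudospectrum is the sublevel set where this is <= eps."""
    n = A.shape[0]
    S = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = x + 1j * y
            # smallest singular value of the shifted matrix
            S[i, j] = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
    return S

A = np.diag([1.0, 2.0])          # normal matrix: sigma_min(zI - A) equals
xs = np.array([1.5])             # the distance from z to the nearest
ys = np.array([0.0])             # eigenvalue; here |1.5 - 1| = 0.5
S = pseudospectrum(A, xs, ys)
```

Each grid point costs a full SVD, which is exactly why projecting A onto a small subspace first matters for large, highly nonnormal matrices.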
Abstract: This paper is concerned with the forgetting factor of the recursive least squares (RLS) algorithm. A new dynamic forgetting factor (DFF) for the RLS algorithm is presented. The proposed DFF-RLS is compared to other methods and achieves better performance in convergence and in tracking a noisy chirp sinusoid. The control of the forgetting factor in DFF-RLS is based on the gradient of the inverse correlation matrix. Compared with the gradient of the mean-square-error algorithm, the proposed approach provides faster tracking and a smaller mean square error. At low signal-to-noise ratios, the performance of the proposed method is superior to other approaches.
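The baseline that a dynamic forgetting factor modifies is exponentially weighted RLS with a fixed lambda; a sketch (lambda and the initialization are assumed values, and the test system is illustrative):

```python
import numpy as np

def rls(xs, ds, order, lam=0.98, delta=100.0):
    """Exponentially weighted RLS with a fixed forgetting factor lam."""
    w = np.zeros(order)
    P = delta * np.eye(order)                # inverse correlation matrix
    for n in range(order - 1, len(xs)):
        u = xs[n - order + 1:n + 1][::-1]    # regressor [x[n], ..., x[n-order+1]]
        k = P @ u / (lam + u @ P @ u)        # gain vector
        e = ds[n] - w @ u                    # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam   # update inverse correlation
    return w

rng = np.random.default_rng(4)
x = rng.normal(size=500)
h = np.array([0.5, -0.3, 0.2])               # unknown FIR system to identify
d = np.convolve(x, h)[:len(x)]               # noiseless desired signal
w = rls(x, d, order=3)                       # w converges to h
```

A dynamic forgetting factor replaces the constant lam with a value adjusted at each step; here, the gradient of the inverse correlation matrix P drives that adjustment.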
Abstract: Voltage collapse is an instability of heavily loaded
electric power systems that leads to declining voltages and blackout.
Power systems are predicted to become more heavily loaded in the
coming decade as the demand for electric power rises while economic
and environmental concerns limit the construction of new transmission
and generation capacity. Heavily loaded power systems are closer to
their stability limits, and voltage collapse blackouts will occur if
suitable monitoring and control measures are not taken. FACTS devices
can be used to control transmission lines.
In this paper the Harmony Search Algorithm (HSA) and the Genetic
Algorithm (GA) are applied to determine the optimal location of
FACTS devices in a power system so as to improve power system
stability. Three types of FACTS devices (TCPAT, UPFS, and SVC) are
introduced. Bus undervoltage is resolved by controlling the reactive
power of a shunt compensator. A combined series-shunt compensator
is also used to control transmission power flow and bus voltage
simultaneously.
Different scenarios are considered. First, TCPAT, UPFS, and SVC are
placed individually in transmission lines and the indices are
calculated. Then two of the above controller types attempt to improve
the parameters randomly. The last scenario tries to improve the
voltage stability index and losses by deploying all three controller
types simultaneously. These scenarios are executed on a typical
34-bus test system and yield improvements in the voltage profile and
reductions in power losses; they may also permit an increase in power
transfer capacity, maximum loading, and voltage stability margin.
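The GA placement search can be sketched generically (a toy illustration only: the fitness here is a hypothetical stand-in loss over line indices, not the paper's power-flow indices, and all names and parameters are assumptions):

```python
import random

def ga_place(lines, loss, n_devices=2, pop=30, gens=60, seed=5):
    """Toy genetic algorithm for device placement: individuals are
    tuples of line indices; selection keeps the best half, crossover
    mixes parent genes, and mutation swaps in a random line."""
    rng = random.Random(seed)
    P = [tuple(rng.choice(lines) for _ in range(n_devices)) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=loss)
        elite = P[: pop // 2]                    # selection: keep best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [a[i] if rng.random() < 0.5 else b[i]
                     for i in range(n_devices)]  # uniform crossover
            if rng.random() < 0.3:               # mutation
                child[rng.randrange(n_devices)] = rng.choice(lines)
            children.append(tuple(child))
        P = elite + children                     # elitism preserves the best
    return min(P, key=loss)

lines = list(range(20))
# Hypothetical stand-in loss whose optima are placements on lines 3 and 7.
loss = lambda ind: sum(min(abs(g - t) for t in (3, 7)) for g in ind)
best = ga_place(lines, loss)
```

In the paper's setting the loss would be replaced by the voltage stability index and loss calculations from a power-flow solution of the 34-bus system.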
Abstract: In this paper, quantitative evaluation of ultrasonic
C-scan images through estimation of their Fractal Dimension (FD) is
discussed. The algorithm needed to evaluate the FD of any 2-D
digitized image is implemented in a computer code. For evaluation
purposes, several C-scan images of a Kevlar composite impacted by a
high-speed bullet and of a glass fibre composite with a flaw in the
form of an inclusion are used. The analysis automatically
differentiates a C-scan image showing a distinct damage zone from an
image that contains no such damage.
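The usual way to estimate the FD of a digitized 2-D image is box counting; a sketch (the paper's code may differ in details such as the box sizes and the fitting procedure):

```python
import numpy as np

def box_counting_fd(img):
    """Box-counting fractal dimension of a binary 2-D image: count the
    occupied s-by-s boxes for dyadic s and fit the slope of
    log N(s) against log(1/s)."""
    L = img.shape[0]
    sizes, counts = [], []
    s = L // 2
    while s >= 1:
        S = img[:L - L % s, :L - L % s]          # crop to a multiple of s
        boxes = S.reshape(S.shape[0] // s, s, S.shape[1] // s, s)
        N = (boxes.sum(axis=(1, 3)) > 0).sum()   # boxes with foreground
        sizes.append(s)
        counts.append(int(N))
        s //= 2
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

img = np.ones((64, 64), dtype=int)   # a filled square has dimension 2
fd = box_counting_fd(img)
```

A C-scan image with a distinct damage zone produces a different slope than an undamaged one, which is what allows the automatic differentiation described above.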
Abstract: The scalar wave equation for a potential in a curved space-time, i.e., the Laplace-Beltrami equation, is studied in this work. An action principle is used to derive a finite element algorithm for determining the modes of propagation inside a waveguide of arbitrary shape. Generalizing this idea, the Maxwell theory in a curved space-time determines a set of linear partial differential equations for the four electromagnetic potentials given the metric of space-time. As in Einstein's formulation of the field equations of gravitation, these equations are also derived from an action principle. In this paper, expressions for the action functional of the electromagnetic field in the presence of a gravitational field are derived.
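The action functional referred to here takes the standard form for the electromagnetic field on a curved background (shown schematically; variation with respect to the potentials A_mu yields the field equations):

```latex
S[A] = -\frac{1}{4}\int \sqrt{-g}\,
       g^{\mu\alpha} g^{\nu\beta} F_{\mu\nu} F_{\alpha\beta}\, d^{4}x,
\qquad
F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu},
```

where g_{mu nu} is the space-time metric and g its determinant; the metric enters only through the index-raising factors and the volume element, which is how gravitation influences the electromagnetic field equations.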
Abstract: This paper presents a new adaptive impedance control
strategy, based on Function Approximation Technique (FAT) to
compensate for unknown non-flat environment shape or time-varying
environment location. The target impedance in the force controllable
direction is modified by incorporating adaptive compensators and the
uncertainties are represented by FAT, allowing the update law to be
derived easily. The force error feedback is utilized in the estimation
and accurate knowledge of the environment parameters is not
required by the algorithm. It is shown mathematically that the
stability of the controller is guaranteed by Lyapunov theory.
Simulation results are presented to demonstrate the validity of the
proposed controller.
Abstract: DNA microarrays allow the measurement of expression levels for a large number of genes, perhaps all genes of an organism, across a number of different experimental samples. Extracting biologically meaningful information from this huge amount of expression data is important for understanding the current state of the cell, because most cellular processes are regulated by changes in gene expression. Association rule mining techniques are helpful for finding association relationships between genes, and numerous association rule mining algorithms have been developed to analyze this expression data. This paper focuses on some of the popular association rule mining algorithms developed to analyze gene expression data.
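A minimal Apriori-style miner illustrates the family of algorithms surveyed (a sketch on toy data, not any specific algorithm from the paper):

```python
def frequent_itemsets(transactions, min_support):
    """Apriori-style frequent itemset mining: grow candidate itemsets
    level by level, pruning any set whose support is below threshold."""
    items = sorted({g for t in transactions for g in t})
    n = len(transactions)
    def support(s):
        return sum(1 for t in transactions if s <= t) / n
    frequent = {}
    level = [frozenset([i]) for i in items]
    while level:
        level = [s for s in level if support(s) >= min_support]
        frequent.update({s: support(s) for s in level})
        # next level: unions of surviving sets that are one item larger
        level = list({a | b for a in level for b in level
                      if len(a | b) == len(a) + 1})
    return frequent

# Toy data: each "transaction" is the set of up-regulated genes in a sample.
T = [frozenset(s) for s in (["g1", "g2"], ["g1", "g2", "g3"],
                            ["g1", "g3"], ["g2"])]
F = frequent_itemsets(T, min_support=0.5)
```

Frequent itemsets such as {g1, g2} are then turned into rules like g1 => g2, whose confidence is the ratio of the two supports.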
Abstract: In this paper, we introduce a novel algorithm for object tracking in video sequences. To represent the object to be tracked, we propose a spatial color histogram model which encodes both the color distribution and spatial information. The object is tracked from frame to frame via a center voting and back projection method. The center voting method has every pixel in the new frame cast a vote on the whereabouts of the object center. The back projection method segments the object from the background. The segmented foreground provides information on object size and orientation, removing the need to estimate them separately. We make no assumption on camera motion; the proposed algorithm works equally well for object tracking in both static and moving camera videos.
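Back projection itself is simple to sketch (a grayscale, single-channel simplification of the paper's spatial color histogram model, on synthetic data):

```python
import numpy as np

def back_projection(frame, hist, bins):
    """Histogram back projection: replace each pixel by the probability
    of its quantized intensity under the object's histogram."""
    q = frame * bins // 256          # quantize intensities into `bins` bins
    return hist[q]

rng = np.random.default_rng(6)
frame = rng.integers(0, 128, (20, 20))               # dark background
frame[5:10, 5:10] = rng.integers(128, 256, (5, 5))   # bright object region
bins = 8
# object model: normalized histogram of the object region
counts = np.bincount((frame[5:10, 5:10] * bins // 256).ravel(),
                     minlength=bins)
hist = counts / counts.sum()
P = back_projection(frame, hist, bins)               # high inside the object
```

Thresholding the back-projected map segments the object from the background; the segmented foreground then yields size and orientation directly, as described above.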
Abstract: Hybrid algorithms are a hot topic in Computational
Intelligence (CI) research. Starting from an in-depth discussion of
the Simulation Mechanism Based (SMB) classification method and
composite patterns, this paper presents the Mamdani-model-based
Adaptive Neural Fuzzy Inference System (M-ANFIS) and a weight
updating formula that takes into account the qualitative
representation of the inference consequent parts in fuzzy neural
networks. The M-ANFIS model adopts the Mamdani fuzzy inference
system, which has advantages in the consequent part. Experimental
results of applying M-ANFIS to evaluate traffic Level of Service
show that M-ANFIS, as a new hybrid algorithm in computational
intelligence, has clear advantages in non-linear modeling,
membership functions in consequent parts, scale of training data,
and number of adjusted parameters.