Abstract: In the present work, a numerical method is presented for estimating the gradient magnetic fields required to optimally drive magnetic particles into a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM) and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that aims to eliminate the deviation of the nanoparticles from a desired path: the gradient magnetic field is continuously adjusted so that the particles follow the desired trajectory as closely as possible. The results show that the particle diameter is a crucial parameter for efficient navigation; increasing the diameter decreases the deviation from the desired path. Moreover, the proposed method navigates nanoparticles into the desired areas with an efficiency of approximately 99%.
Abstract: Frequency diverse array (FDA) beamforming is a recently developed technology whose antenna pattern has a unique angle-distance-dependent characteristic. However, to form point-to-point interference in a concentrated region, the beam is required to be strongly focused, with high resolution and a low sidelobe level. In order to eliminate the angle-distance coupling of the traditional FDA and to concentrate the beam energy, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving both the array structure and the frequency offset of the traditional FDA. Simulation results show that the resulting beam pattern forms a dot-shaped beam with more concentrated energy and improved resolution and sidelobe level. However, the signal covariance matrix in traditional adaptive beamforming is estimated from finite snapshot data. When the number of snapshots is limited, the covariance matrix is underestimated; the resulting estimation error distorts the beam, so the output pattern cannot form a dot-shaped beam, the main lobe deviates and the sidelobe level rises. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the multi-carrier FDA beamformer is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace and the diagonal matrix of the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving beamforming performance. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level and improves the robustness of beamforming.
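The eigenvalue-correction step can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the correction exponent `alpha` and the split between interference and noise subspaces are assumptions, and the correction here pulls the small eigenvalues toward their geometric mean to reduce their spread.

```python
import numpy as np

def exponential_eigenvalue_correction(R, n_interference, alpha=0.5):
    """Correct the small (noise-subspace) eigenvalues of a sample
    covariance matrix R, reducing their divergence under limited
    snapshots.  n_interference: assumed interference-subspace size;
    alpha: hypothetical correction exponent in (0, 1)."""
    w, V = np.linalg.eigh(R)              # ascending eigenvalues
    w, V = w[::-1], V[:, ::-1]            # reorder to descending
    noise = w[n_interference:]            # small eigenvalues
    gm = np.exp(np.mean(np.log(noise)))   # geometric mean
    corrected = gm * (noise / gm) ** alpha  # compress log-spread
    w_new = np.concatenate([w[:n_interference], corrected])
    return V @ np.diag(w_new) @ V.conj().T

# toy sample covariance estimated from a limited number of snapshots
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 20))          # 6 sensors, 20 snapshots
R = X @ X.conj().T / 20
R_c = exponential_eigenvalue_correction(R, n_interference=2)

w_old = np.sort(np.linalg.eigvalsh(R))[:4]    # small eigenvalues before
w_new = np.sort(np.linalg.eigvalsh(R_c))[:4]  # and after correction
print(np.std(np.log(w_new)) < np.std(np.log(w_old)))  # True: spread reduced
```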
Abstract: A method for object tracking in motion-blurred images is proposed in this article. The mean shift algorithm is used as the main tracker, but mean shift alone cannot track the selected object accurately in blurred scenes. To improve tracking accuracy, the wavelet transform is therefore incorporated. We use a feature called blur extent, computed with the Haar wavelet, which answers two related questions: whether an image is blurred at all, and to what extent. This feature modifies the covariance matrix of the mean shift algorithm and thereby improves tracking performance. The method concentrates mainly on the motion blur parameter. The results demonstrate the ability of our method to track more accurately.
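A simplified sketch of a Haar-wavelet blur score is shown below. The paper's blur-extent feature is based on edge-type analysis; the energy-ratio proxy here is an assumption, but it illustrates the underlying idea that blur suppresses detail-band energy in the Haar decomposition.

```python
import numpy as np

def haar2d_level1(img):
    """One-level 2-D Haar wavelet decomposition (no external deps).
    Returns approximation LL and detail bands LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def blur_extent(img):
    """Simplified blur score: fraction of energy NOT in the detail
    bands.  Sharper images put more energy in details, giving a
    lower score (proxy for the cited edge-type analysis)."""
    LL, LH, HL, HH = haar2d_level1(img.astype(float))
    detail = np.sum(LH**2) + np.sum(HL**2) + np.sum(HH**2)
    return 1.0 - detail / (detail + np.sum(LL**2))

rng = np.random.default_rng(1)
sharp = rng.standard_normal((64, 64))         # high-frequency content
k = np.ones(9) / 9.0                          # crude horizontal motion blur
blurred = np.vstack([np.convolve(row, k, mode="same") for row in sharp])
print(blur_extent(blurred) > blur_extent(sharp))  # True: blur raises the score
```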
Abstract: This paper highlights a new approach to online principal component analysis (OPCA). Given a data matrix X ∈ R^{m×n}, we characterise the online updates of its covariance as a matrix perturbation problem. Up to the principal components, it turns out that online updates of the batch PCA can be captured by a symmetric matrix perturbation of the batch covariance matrix. We show that as n → n0 ≫ 1, the batch covariance and its update become almost identical. Finally, we utilize our new setup of online updates to bound the angular distance between the principal components of X and those of its update.
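The online covariance update as a rank-one (symmetric) perturbation, and the angle between leading principal components, can be sketched as follows; an uncentred covariance is assumed for brevity (centring adds a second rank-one mean-correction term).

```python
import numpy as np

def cov_update(C, n, x):
    """Rank-one perturbation update of a batch covariance C built
    from n samples when a new sample x arrives; returns the
    covariance over n + 1 samples."""
    return (n * C + np.outer(x, x)) / (n + 1)

def principal_angle(C1, C2):
    """Angle (radians) between the leading principal components."""
    v1 = np.linalg.eigh(C1)[1][:, -1]     # eigh: ascending order
    v2 = np.linalg.eigh(C2)[1][:, -1]
    return np.arccos(np.clip(abs(v1 @ v2), -1.0, 1.0))

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 5)) @ np.diag([3, 1, 1, 1, 1])
C = X.T @ X / len(X)                      # batch covariance, n = 500
x_new = rng.standard_normal(5)
C_up = cov_update(C, len(X), x_new)

# for large n the perturbation is O(1/n): the spectra nearly coincide
print(principal_angle(C, C_up) < 0.1)     # True
```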
Abstract: This paper presents a method for steering velocity-bounded mobile robots in environments with partially known stationary obstacles. The exact locations of the obstacles are unknown; only a probability distribution over each obstacle's location is known. The kinematic model of a two-wheeled differential drive robot is used as the model of the mobile robot. The presented control strategy uses the Artificial Potential Field (APF) method to devise a desired direction of movement for the robot at each instant, while Constrained Directions Control (CDC) uses the generated direction to produce the control signals required for steering the robot. The location of each obstacle is taken to be the mean of its 2D probability distribution and, correspondingly, the magnitude of the electric charge in the APF is set to the trace of the covariance matrix of that location distribution. The method not only captures the challenge of path planning under probabilistic knowledge of obstacle locations, but also addresses output saturation, an important issue from the control perspective. Moreover, the velocity of the robot can be controlled during steering; for example, it can be reduced in the close vicinity of obstacles and the target to ensure safety. Finally, the control strategy is simulated for different scenarios to show how the method can be put into practice.
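The charge rule above can be sketched as follows. The inverse-square repulsion, the linear attraction and the gain `k` are illustrative assumptions, not the paper's exact APF formulation; only the choice charge = trace of the location covariance is taken from the abstract.

```python
import numpy as np

def repulsive_force(robot_pos, obstacle_mean, obstacle_cov, k=1.0):
    """APF repulsion from an uncertain obstacle.  Per the abstract,
    the obstacle 'charge' is the trace of its location covariance,
    so less certain obstacles repel more strongly (k: toy gain)."""
    q = np.trace(obstacle_cov)            # charge = location uncertainty
    d = robot_pos - obstacle_mean         # points away from the obstacle
    r = np.linalg.norm(d)
    return k * q * d / r**3               # radial inverse-square law

def attractive_force(robot_pos, target, k=1.0):
    """Toy linear pull toward the target."""
    return -k * (robot_pos - target)

robot = np.array([0.0, 0.0])
target = np.array([5.0, 0.0])
obst = np.array([2.0, 1.0])
certain = np.eye(2) * 0.1                 # well-localised obstacle
uncertain = np.eye(2) * 2.0               # poorly localised obstacle
f_c = repulsive_force(robot, obst, certain)
f_u = repulsive_force(robot, obst, uncertain)
print(np.linalg.norm(f_u) > np.linalg.norm(f_c))  # True: uncertainty repels more
```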
Abstract: In this paper, we propose a variational EM inference algorithm for the multi-class Gaussian process classification model, which can be used in the field of human behavior recognition. The algorithm simultaneously derives a posterior distribution of the latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. It is based on the Laplace approximation (LA) technique and the variational EM framework, and proceeds in two steps, expectation and maximization. First, in the expectation step, using Bayes' formula and the LA technique, we derive an approximate posterior distribution of the latent function indicating the probability that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator of the hyper-parameters of the covariance matrix that defines the prior distribution over the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely the KTH human action data set. Experimental results show that the proposed algorithm performs well on this data set.
Abstract: Subspace channel estimation methods have been studied widely; the subspace of the covariance matrix is decomposed to separate the signal subspace from the noise subspace. The decomposition is normally done using either the eigenvalue decomposition (EVD) or the singular value decomposition (SVD) of the auto-correlation matrix (ACM). However, the subspace decomposition process is computationally expensive. This paper considers the estimation of a multipath slow frequency hopping (FH) channel using a noise-subspace-based method. In particular, an efficient method is proposed that estimates the multipath time delays by applying the multiple signal classification (MUSIC) algorithm to the null space extracted by the rank-revealing LU (RRLU) factorization. The RRLU provides precise information about the numerical null space and the rank, both important tools in linear algebra. Simulation results demonstrate the effectiveness of the proposed method, which reduces the computational complexity to approximately half that of RRQR-based methods while maintaining the same performance.
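A sketch of MUSIC-based multipath delay estimation from the noise subspace of the ACM. For simplicity the noise subspace below is taken from an EVD, where the paper substitutes the cheaper RRLU null-space extraction; the signal model and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
K, L, snaps = 16, 2, 200                  # frequency bins, paths, FH dwells
f = np.arange(K)                          # normalised frequency grid
tau_true = np.array([0.12, 0.31])         # true multipath delays (normalised)
A = np.exp(-2j * np.pi * np.outer(f, tau_true))   # K x L delay steering matrix

# channel snapshots: random complex path gains per hop, plus noise
G = rng.standard_normal((L, snaps)) + 1j * rng.standard_normal((L, snaps))
N = rng.standard_normal((K, snaps)) + 1j * rng.standard_normal((K, snaps))
X = A @ G + 0.05 * N
R = X @ X.conj().T / snaps                # auto-correlation matrix (ACM)

# noise subspace via EVD here; the paper extracts the numerical null
# space with a rank-revealing LU (RRLU) factorization at lower cost
w, V = np.linalg.eigh(R)                  # ascending eigenvalues
En = V[:, : K - L]                        # small-eigenvalue eigenvectors

taus = np.linspace(0.0, 0.5, 501)
a = np.exp(-2j * np.pi * np.outer(f, taus))
p_music = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

# the two strongest local maxima of the pseudospectrum give the delays
peaks = [i for i in range(1, len(taus) - 1)
         if p_music[i] > p_music[i - 1] and p_music[i] > p_music[i + 1]]
peaks = sorted(peaks, key=lambda i: p_music[i])[-2:]
est = np.sort(taus[peaks])
print(est)                                # close to tau_true
```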
Abstract: In this paper, two approaches to joint signal detection, time of arrival (ToA) and angle of arrival (AoA) estimation with a multi-element antenna array are investigated. Two scenarios are considered: in the first, the waveform of the useful signal is known a priori; in the second, the waveform of the desired signal is unknown. For the first scenario, the antenna array signal processing is based on multi-element matched filtering (MF) followed by a non-coherent detection scheme and maximum likelihood (ML) parameter estimation blocks. For the second scenario, the signal processing is based on estimating the covariance matrix of the antenna array elements, followed by eigenvector analysis and ML parameter estimation blocks. The performance characteristics of both signal processing schemes are thoroughly investigated and compared for different useful signal and noise parameters.
Abstract: This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs-sampling MCMC. A conditional autoregressive (CAR) model is assumed to represent the spatial effects, while the temporal correlation is represented through the covariance matrix of the random effects. The quarterly malaria data were extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The results show that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct malaria maps. The five highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model performs better than the GLMM with spatial effects but without temporal terms.
Abstract: The traditional second-order statistics approach of using only the Hermitian covariance of non-circular signals does not exploit the information contained in their complementary covariance. Radar systems often use non-circular signals such as Binary Phase Shift Keying (BPSK) signals. Their non-circular property can be exploited, together with the dual centrosymmetry of the bistatic MIMO radar system, to improve angle estimation performance. We construct an augmented matrix from the received data vectors using both the positive definite Hermitian covariance matrix and the complementary covariance matrix. The Unitary ESPRIT technique is then applied to the signal subspace of the augmented covariance matrix to obtain automatically paired direction-of-arrival (DOA) and direction-of-departure (DOD) angle estimates. The number of targets that can be detected is twice that obtainable with the conventional ESPRIT approach. Simulation results show the effectiveness of this method in terms of increased resolution and number of detectable targets.
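Building the augmented covariance from the Hermitian and complementary covariances can be sketched as follows, on toy data rather than the bistatic MIMO radar model; real-valued BPSK streams stand in for the non-circular sources.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 4, 300                             # array elements, snapshots
# BPSK sources are real-valued, hence non-circular: E[s s^T] != 0
s = np.sign(rng.standard_normal((2, N)))  # two +/-1 BPSK streams
A = rng.standard_normal((M, 2)) + 1j * rng.standard_normal((M, 2))
noise = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x = A @ s + 0.1 * noise

R = x @ x.conj().T / N                    # Hermitian covariance
C = x @ x.T / N                           # complementary covariance
# the augmented covariance stacks both second-order statistics; for
# circular signals C would vanish and the off-diagonal blocks be ~0
R_aug = np.block([[R, C], [C.conj(), R.conj()]])
print(R_aug.shape)                        # (8, 8): doubled aperture
```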
Abstract: Based on the sources' smoothed rank profile (SRP) and the modified minimum description length (MMDL) principle, a method is proposed for estimating the source coherency structure (SCS) and the number of wideband sources. Instead of focusing, we first use a spatial smoothing technique to pre-process the array covariance matrix at each frequency in order to de-correlate the sources, and then use the smoothed rank profile to determine the SCS and the number of wideband sources. Numerical simulations demonstrate the effectiveness of the method.
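The spatial-smoothing pre-processing step can be sketched as follows, using a standard forward-smoothing formulation; the subarray size and scenario are assumptions, chosen to show how smoothing restores the rank lost to source coherence.

```python
import numpy as np

def spatial_smoothing(R, m):
    """Forward spatial smoothing: average the m x m principal
    submatrices of an M x M array covariance R over all M - m + 1
    subarray shifts, de-correlating coherent sources."""
    M = R.shape[0]
    K = M - m + 1
    return sum(R[k:k + m, k:k + m] for k in range(K)) / K

# two fully coherent sources (same waveform) -> rank-1 signal covariance
M, N = 8, 400
rng = np.random.default_rng(5)
angles = np.deg2rad([10.0, 40.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
X = A @ np.vstack([s, s]) + 0.01 * noise  # both sources carry the same s
R = X @ X.conj().T / N

Rs = spatial_smoothing(R, m=5)
# effective signal rank: eigenvalues well above the noise floor
rank = lambda C: int(np.sum(np.linalg.eigvalsh(C) > 1.0))
print(rank(R), rank(Rs))                  # 1 2: coherence broken by smoothing
```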
Abstract: In this paper we propose a novel method for human face segmentation that exploits the elliptical structure of the human head. It makes use of the information present in the edge map of the image, together with the fact that the eigenvalues of a covariance matrix represent elliptical structure: the large and small eigenvalues are associated with the major and minor axial lengths of an ellipse. The remaining elliptical parameters are used to identify the centre and orientation of the face. Since an elliptical Hough transform would require a 5D Hough space, the Circular Hough Transform (CHT) is used to evaluate the elliptical parameters. A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and store only a small number of non-zero elements, they save both storage space and computational time. A neighborhood suppression scheme is used to identify the valid Hough peaks. The accurate positions of the circumference pixels of occluded and distorted ellipses are identified using Bresenham's raster scan algorithm, which uses geometrical symmetry properties. The method does not require the evaluation of tangents of curvature contours, which are very sensitive to noise. It has been evaluated on several images with different face orientations.
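The link between covariance eigenvalues and the axial lengths can be illustrated on synthetic edge pixels; the factor of 2 below assumes points distributed uniformly (in parameter) on the ellipse boundary.

```python
import numpy as np

rng = np.random.default_rng(6)
# synthetic edge map: points on an ellipse with semi-axes a > b,
# rotated by phi and centred at c
a, b, phi = 5.0, 2.0, np.deg2rad(30)
c = np.array([10.0, 7.0])
t = rng.uniform(0, 2 * np.pi, 2000)
Rm = np.array([[np.cos(phi), -np.sin(phi)],
               [np.sin(phi),  np.cos(phi)]])
pts = (Rm @ np.vstack([a * np.cos(t), b * np.sin(t)])).T + c

# the covariance of the edge pixels encodes the elliptical structure
C = np.cov(pts.T)
w, V = np.linalg.eigh(C)                  # ascending eigenvalues
# for points on the boundary, the variance along each principal axis
# is (semi-axis)^2 / 2, so the axial lengths follow directly
b_est = np.sqrt(2 * w[0])                 # small eigenvalue -> minor axis
a_est = np.sqrt(2 * w[1])                 # large eigenvalue -> major axis
phi_est = np.arctan2(V[1, 1], V[0, 1])    # orientation of the major axis
centre_est = pts.mean(axis=0)
print(round(a_est, 2), round(b_est, 2))   # near 5 and 2
```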
Abstract: Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining; the objects within a group share similar characteristics. This paper discusses a robust clustering process for image data with two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard way to handle high-dimensional data is dimension reduction, which transforms the data into a lower-dimensional space with limited loss of information; PCA is one of its most common forms. 2DPCA is often called a variant of PCA in which the image matrices are treated directly as 2D matrices: they need not be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The decomposed classical covariance matrix, however, is very sensitive to outlying observations. The objective of this paper is to compare the performance of robust minimizing vector variance (MVV) in the two-dimensional projection (2DPCA) and in PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulation aspects of robustness and an illustration of image clustering are discussed at the end of the paper.
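The direct construction of the 2DPCA image covariance can be sketched as follows; random matrices stand in for images, and the robust MVV variant is not reproduced.

```python
import numpy as np

def image_covariance_2dpca(images):
    """2DPCA image covariance, built directly from the 2-D image
    matrices (no vectorisation): G = mean_i (A_i - Abar)^T (A_i - Abar)."""
    mean = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean
        G += D.T @ D
    return G / len(images)

rng = np.random.default_rng(7)
imgs = rng.standard_normal((30, 16, 12))  # 30 "images" of 16 x 12 pixels
G = image_covariance_2dpca(imgs)
w, V = np.linalg.eigh(G)                  # ascending eigenvalues
P = V[:, -3:]                             # top-3 projection axes
Y = imgs @ P                              # each image -> 16 x 3 feature matrix
print(G.shape, Y.shape)                   # (12, 12) (30, 16, 3)
```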
Abstract: This paper presents a unified theory for local (Savitzky-Golay) and global polynomial smoothing. The algebraic framework can represent any polynomial approximation and passes seamlessly from low-degree local to high-degree global approximations. Representing the smoothing operator as a projection onto orthonormal basis functions enables the computation of: the covariance matrix for noise propagation through the filter; the noise gain; and the frequency response of the polynomial filters. A virtually perfect Gram polynomial basis is synthesized, whereby polynomials of degree d = 1000 can be synthesized without significant errors. The perfect basis ensures that the filters are strictly polynomial preserving. Given n points and a support length ls = 2m + 1, the smoothing operator is strictly linear phase for the points xi, i = m+1 ... n-m. The method is demonstrated on geometric surface data lying on an invariant 2D lattice.
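The projection form of the smoothing operator, the resulting noise gain and the polynomial-preserving property can be sketched as follows; a QR-based orthonormal basis stands in for the paper's Gram polynomial synthesis, and the window/degree values are illustrative.

```python
import numpy as np

def polynomial_projection(ls, degree):
    """Smoothing operator for one support window: projection onto an
    orthonormal polynomial basis (obtained here by QR of the
    Vandermonde matrix; stable enough for low degrees)."""
    x = np.arange(ls) - (ls - 1) / 2.0    # centred abscissae
    Vm = np.vander(x, degree + 1, increasing=True)
    Q, _ = np.linalg.qr(Vm)               # orthonormal basis columns
    return Q @ Q.T                        # projection P = Q Q^T

ls, d = 11, 3                             # support length, degree
P = polynomial_projection(ls, d)

# noise propagation: for unit-variance white input noise the output
# covariance is P P^T = P (P is a projection); the noise gain at the
# central point is the central diagonal entry
cov_out = P @ P.T
noise_gain = cov_out[ls // 2, ls // 2]

# strictly polynomial preserving: a cubic passes through unchanged
x = np.arange(ls) - ls // 2
y = 2 - x + 0.5 * x**2 - 0.1 * x**3
print(np.allclose(P @ y, y), noise_gain < 1.0)   # True True
```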
Abstract: In this paper, a Gaussian multiple-input multiple-output multiple-eavesdropper (MIMOME) channel is considered, in which a transmitter communicates with a receiver in the presence of an eavesdropper. We present a technique for determining the secrecy capacity of the multiple-input multiple-output (MIMO) channel under Gaussian noise. We transform the degraded MIMOME channel into multiple single-input multiple-output (SIMO) Gaussian wire-tap channels and then use a scalar approach to convert them into two equivalent multiple-input single-output (MISO) channels. The secrecy capacity model is then developed for the case where the channel state information (CSI) of the main channel only is known to the transmitter. The results show that secret communication is possible when the eavesdropper's channel noise is greater than a cutoff noise level. The outage probability of the secrecy capacity and the effect of fading are also analyzed.
Abstract: In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method composed of offline and online phases; our contribution concerns both. In the offline phase, after gathering the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoid overlapping. For the online phase, we have developed an adapted search algorithm that considerably improves the performance of similarity search. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as its clustering algorithm. The principle of PDDP is to divide the data recursively into two sub-clusters; the division is done by the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster being divided. The data of each of the two resulting sub-clusters are enclosed in a minimum bounding rectangle (MBR), and the two MBRs are oriented along the principal direction; consequently, non-overlapping between the two forms is assured. Experiments use databases of image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest neighbors.
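One PDDP division step can be sketched as follows; this is a minimal sketch of the split by the hyperplane orthogonal to the principal direction, without the full NOHIS index or the oriented-MBR construction.

```python
import numpy as np

def pddp_split(X):
    """One PDDP division: split the data by the hyperplane orthogonal
    to the principal direction of the covariance and passing through
    the centroid; returns the two sub-clusters."""
    centroid = X.mean(axis=0)
    C = np.cov((X - centroid).T)
    w, V = np.linalg.eigh(C)
    u = V[:, -1]                          # principal direction
    side = (X - centroid) @ u             # signed distance to the plane
    return X[side <= 0], X[side > 0]

rng = np.random.default_rng(8)
# two well-separated blobs along one axis
X = np.vstack([rng.standard_normal((100, 2)) + [5, 0],
               rng.standard_normal((100, 2)) + [-5, 0]])
left, right = pddp_split(X)
print(len(left), len(right))              # each blob lands on one side
# each sub-cluster could then be boxed by a minimum bounding rectangle
lo, hi = left.min(axis=0), left.max(axis=0)
```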
Abstract: Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for online extraction of the kernel principal components (KPC). The contribution of this paper is twofold. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of KPCA for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in both computational speed and approximation accuracy.
Abstract: In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the circular Hough transform (CHT) and Bresenham's raster scan algorithm. We use the fact that the large and small eigenvalues of a covariance matrix are associated with the major and minor axial lengths of an ellipse, while the centre location of the ellipse is identified using the CHT. A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using the raster scan algorithm, which exploits geometrical symmetry. The method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noise. It has the advantages of small storage, high speed and accuracy in identifying the feature, and has been tested on both synthetic and real images. Several experiments have been conducted on images with considerable background noise to demonstrate its efficacy and robustness. Experimental results on the accuracy of the proposed method and comparisons with the Hough transform, its variants and other tangent-based methods are reported.
Abstract: In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are exploited for circular object detection. A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks, and the accurate positions of the circumference pixels are identified using a raster scan algorithm that exploits geometrical symmetry. After finding the circular objects, the proposed method uses textons, the fundamental micro-structures of generic natural images, extracted from the texture on the coin surface as unique properties of the coins. The method has been tested on several real-world images, including coin and non-coin images, and its performance is also evaluated in terms of noise-withstanding capability.
Abstract: We study spatial design of experiments: selecting a most informative subset of prespecified size from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics and statistical geology, where observations can be collected at different locations and possibly at different times. In spatial design, when the design region and the set of interest are discrete, the covariance matrix completely describes any objective function, and our goal is to choose a feasible design that minimizes the resulting uncertainty. The problem is recast as maximizing the determinant of the covariance matrix of the chosen subset, which is NP-hard. For computer experiments, the design space is often very large and the exact optimal solution cannot be calculated. Heuristic optimization methods can discover efficient experimental designs in situations where traditional designs cannot be applied, exchange methods are ineffective and an exact solution is not possible. We developed a genetic algorithm (GA) to take advantage of its exploratory power, and demonstrate its successful application in a large design space on a real design-of-experiments problem.
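A toy genetic algorithm for the determinant-maximization subset problem can be sketched as follows; the operators (tournament selection, point mutation, elitism) and all parameters are illustrative, not the paper's GA, and an exponential covariance on a line of candidate sites serves as a stand-in for the spatial model.

```python
import numpy as np

def logdet_subset(C, idx):
    """Objective: log-determinant of the covariance of the chosen
    subset (the D-optimality criterion for the spatial design)."""
    sign, ld = np.linalg.slogdet(C[np.ix_(idx, idx)])
    return ld if sign > 0 else -np.inf

def ga_design(C, k, pop=40, gens=60, seed=0):
    """Toy GA: individuals are k-subsets of the N candidate sites;
    tournament selection, point mutation and elitism."""
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    P = [rng.choice(N, k, replace=False) for _ in range(pop)]
    for _ in range(gens):
        fit = np.array([logdet_subset(C, ind) for ind in P])
        new = [P[int(np.argmax(fit))]]                # elitism
        while len(new) < pop:
            i, j = rng.choice(pop, 2, replace=False)  # tournament of two
            parent = P[i] if fit[i] >= fit[j] else P[j]
            child = set(parent)
            child.discard(int(rng.choice(list(child))))  # mutate: swap
            while len(child) < k:                        # one site for
                child.add(int(rng.integers(N)))          # a random one
            new.append(np.array(sorted(child)))
        P = new
    fit = np.array([logdet_subset(C, ind) for ind in P])
    return P[int(np.argmax(fit))], float(fit.max())

# exponential covariance on a line of candidate sites: nearby sites
# are highly correlated, so a D-optimal design spreads points apart
sites = np.linspace(0, 1, 20)
C = np.exp(-5 * np.abs(sites[:, None] - sites[None, :]))
best, val = ga_design(C, k=4)
rand_idx = np.random.default_rng(9).choice(20, 4, replace=False)
print(val >= logdet_subset(C, rand_idx))  # GA at least matches a random design
```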