Abstract: Rough set theory handles uncertainty and incomplete information by applying two crisp sets, the lower approximation and the upper approximation. In this paper, rough clustering algorithms are improved by adopting similarity-, dissimilarity–similarity- and entropy-based initial centroid selection methods: three clustering algorithms, namely Entropy based Rough K-Means (ERKM), Similarity based Rough K-Means (SRKM) and Dissimilarity-Similarity based Rough K-Means (DSRKM), were developed and executed on the yeast dataset. The rough clustering algorithms are validated by cluster validity indexes, namely the Rand and Adjusted Rand indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are objects very different from the rest of the objects in the clusters. The Entropy based Rough Outlier Factor (EROF) method is well suited to detecting outliers effectively in the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 detects outliers in the boundary region, and the RKM algorithm delivers better results when the value of epsilon (ε) is chosen within this range. Experimental results show that the EROF method on the clustering algorithm performed very well and is suitable for detecting outliers effectively in all datasets. Further, the experimental readings show that the ERKM clustering method outperformed the other methods.
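For orientation, here is a minimal sketch of the rough assignment and centroid update step that these RKM variants build on, in the style of Lingras-West rough K-Means; the ratio-threshold convention and the lower/boundary weights w_low and w_up are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def rough_kmeans_step(X, centroids, eps=1.0, w_low=0.7, w_up=0.3):
    """One assignment/update step of a rough K-Means variant.

    A point joins the lower approximation of its closest cluster unless
    other centroids are nearly as close (distance ratio within eps), in
    which case it joins only the upper approximations (boundary region).
    eps, w_low, w_up are illustrative defaults, not the paper's values.
    """
    k = len(centroids)
    lower = [[] for _ in range(k)]
    upper = [[] for _ in range(k)]
    for x in X:
        d = np.linalg.norm(centroids - x, axis=1)
        j = int(np.argmin(d))
        near = [t for t in range(k) if t != j and d[t] <= eps * d[j]]
        if near:                          # ambiguous: boundary region
            for t in [j] + near:
                upper[t].append(x)
        else:                             # unambiguous: lower approximation
            lower[j].append(x)
            upper[j].append(x)
    new_centroids = []
    for t in range(k):
        low, up = np.array(lower[t]), np.array(upper[t])
        n_bnd = len(up) - len(low)        # boundary objects of this cluster
        if len(low) and n_bnd > 0:
            bnd_mean = (up.sum(axis=0) - low.sum(axis=0)) / n_bnd
            new_centroids.append(w_low * low.mean(axis=0) + w_up * bnd_mean)
        elif len(low):
            new_centroids.append(low.mean(axis=0))
        elif len(up):
            new_centroids.append(up.mean(axis=0))
        else:
            new_centroids.append(centroids[t])  # empty cluster: keep as-is
    return lower, upper, np.array(new_centroids)
```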
Abstract: This paper considers the diagnosis of people's driving skills under real driving conditions. In that sense, this research presents an approach that uses GPS signals, which have a direct correlation with driving maneuvers. In addition, a novel expert-driving-criteria approximation using fuzzy logic is presented, which analyzes GPS signals in order to issue an intelligent driving diagnosis.
Based on the above, this work first presents the intelligent driving diagnosis system approach in terms of its characteristic properties, explaining in detail significant considerations about how an expert-driving-criteria approximation must be developed. The next section explains the implementation of our system based on the proposed fuzzy logic approach. Here, a proposed set of rules corresponding to a quantitative abstraction of some traffic laws and safe driving techniques, which seeks to approximate expert driving criteria, is presented.
Experimental testing has been performed under real driving conditions. The test results show that the intelligent driving diagnosis system rates a driver's performance quantitatively with a high degree of reliability.
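To make the fuzzy-logic idea concrete, the following is a minimal sketch of the kind of rule such a system might apply to GPS-derived signals; the membership breakpoints, speed limit and the two rules are illustrative assumptions, not the authors' calibrated rule base.

```python
def ramp(x, a, b):
    """Shoulder membership: 0 below a, rising linearly to 1 at b."""
    return min(max((x - a) / (b - a), 0.0), 1.0)

def driving_score(speed_kmh, accel_ms2, limit_kmh=60.0):
    """Toy fuzzy diagnosis from two GPS-derived signals (illustrative).

    Rule 1: IF speed is over the limit THEN driving quality is low.
    Rule 2: IF acceleration is harsh   THEN driving quality is low.
    The score is 100 scaled down by the strongest rule activation.
    """
    over = ramp(speed_kmh, limit_kmh, limit_kmh + 30.0)   # degree of speeding
    harsh = ramp(abs(accel_ms2), 2.0, 5.0)                # degree of harshness
    penalty = max(over, harsh)                            # max aggregation
    return 100.0 * (1.0 - penalty)

print(driving_score(75.0, 1.0))   # speeding but smooth -> penalized
print(driving_score(55.0, 4.5))   # within limit but braking hard -> penalized
```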
Abstract: Vertex enumeration algorithms explore the methods and procedures of generating the vertices of general polyhedra formed by systems of equations or inequalities. These problems of enumerating the extreme points (vertices) of general polyhedra are shown to be NP-hard. This leads to exploring how to count the vertices of general polyhedra without listing them, which is in turn shown to be #P-complete. Some fully polynomial randomized approximation schemes (fpras) for counting the vertices of some special classes of polyhedra, associated with down-sets, independent sets, 2-knapsack problems and 2 × n transportation problems, are presented together with some open problems they uncover.
Abstract: High double excitation of two-electron atoms has been investigated using hyperspherical coordinates within a modified adiabatic expansion technique. This modification creates a novel fictitious force leading to a spontaneous exchange symmetry breaking at high double excitation. The Pauli principle must therefore be regarded as an approximation valid only at low excitation energy. Threshold electron scattering from high Rydberg states shows an unexpected time-reversal symmetry breaking. At the threshold for double escape, we discover a broad (few eV) Cooper pair.
Abstract: In this paper, we attempt to increase the autonomous performance of a small unmanned helicopter. For this purpose, a small unmanned helicopter was manufactured at Erciyes University, Faculty of Aeronautics and Astronautics; it is called ZANKA-Heli-I. For performance maximization, the autopilot parameters are determined by minimizing a cost function consisting of flight performance parameters such as settling time, rise time and overshoot during trajectory tracking. To this end, a stochastic optimization method named simultaneous perturbation stochastic approximation (SPSA) is employed. Using this approach, a considerable autonomous performance increase (around 23%) is obtained.
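A minimal sketch of the SPSA update on a generic cost function follows; the gain sequences and the quadratic test cost stand in for the paper's flight-performance cost (settling time, rise time, overshoot) and are assumptions for illustration.

```python
import numpy as np

def spsa(cost, theta0, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
         iters=200, seed=0):
    """Simultaneous Perturbation Stochastic Approximation (Spall).

    Estimates the full gradient from only two cost evaluations per
    iteration by perturbing all parameters at once with a random
    Rademacher (+/-1) direction.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**alpha                 # step-size gain sequence
        ck = c / k**gamma                 # perturbation gain sequence
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g_hat = (cost(theta + ck * delta)
                 - cost(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

# Toy stand-in for the flight-performance cost (assumed for illustration):
cost = lambda p: (p[0] - 2.0) ** 2 + 3.0 * (p[1] + 1.0) ** 2
print(spsa(cost, [0.0, 0.0]))   # should approach [2, -1]
```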
Abstract: Model updating is an inverse eigenvalue problem concerned with the modification of an existing but inaccurate model using measured modal data. In this paper, an efficient gradient based iterative method for updating the mass, damping and stiffness matrices simultaneously, using a few complex measured modal data, is developed. Convergence analysis indicates that the iterative solutions always converge to the unique minimum-Frobenius-norm symmetric solution of the model updating problem when a special kind of initial matrices is chosen.
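For orientation, the updating problem can be stated generically as the following constrained minimization; this is a standard form consistent with the abstract, and is an assumption rather than the paper's exact formulation.

```latex
\min_{\substack{\Delta M,\,\Delta C,\,\Delta K\ \text{symmetric}}}
  \|\Delta M\|_F^2 + \|\Delta C\|_F^2 + \|\Delta K\|_F^2
\quad\text{s.t.}\quad
\bigl[\lambda_i^2 (M+\Delta M) + \lambda_i (C+\Delta C) + (K+\Delta K)\bigr] x_i = 0,
\quad i = 1,\dots,p,
```

where $(\lambda_i, x_i)$, $i = 1,\dots,p$, are the few measured complex eigenpairs.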
Abstract: Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Fewer than 4 × 4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are then encoded and used to generate chirps at a target rate of about two chirps per 4 × 4 pixel block, and finally submitted to a transmission multiplexing operation in the time-frequency domain.
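As a concrete instance of one of the evaluated transforms, the following sketch applies the SVD approximation to a single 4 × 4 block and reports the resulting PSNR; the rank-1 truncation and the sample block values are assumptions for illustration.

```python
import numpy as np

def svd_block_approx(block, rank=1):
    """Approximate a pixel block by a truncated SVD.

    For a 4x4 block, a rank-1 approximation needs 4 + 4 + 1 = 9 numbers
    instead of 16; higher ranks trade compression for fidelity.
    """
    U, s, Vt = np.linalg.svd(block.astype(float))
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def psnr(original, approx, peak=255.0):
    """Peak signal-to-noise ratio between a block and its approximation."""
    mse = np.mean((original.astype(float) - approx) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

# Sample gray-level block (values arbitrary, for illustration only):
block = np.array([[52, 55, 61, 66],
                  [70, 61, 64, 73],
                  [63, 59, 55, 90],
                  [67, 85, 69, 72]])
print(psnr(block, svd_block_approx(block, rank=1)))
```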
Abstract: The aim of this work is to study the numerical implementation of the Hilbert Uniqueness Method for the exact boundary controllability of the Euler-Bernoulli beam equation. This implementation may be difficult, depending on the problem under consideration (geometry, control and dimension) and the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation, and this idea is developed in this study. As a first step, we characterize the solution by a minimization principle; secondly, we propose a method for its resolution that approximates the control steering the considered system to rest at time T.
Abstract: Bezier curves have useful properties for the path generation problem; for instance, they can generate a reference trajectory for vehicles that satisfies the path constraints. Such algorithms join cubic Bezier curve segments smoothly to generate the path. One of the useful properties of a Bezier curve is its curvature. In mathematics, curvature is the amount by which a geometric object deviates from being flat, or straight in the case of a line. An extrinsic example of curvature is a circle, where the curvature is equal to the reciprocal of its radius at any point on the circle. The smaller the radius, the higher the curvature, and thus the more sharply the vehicle needs to turn. In this study, we use Bezier curves to fit a highway-like curve. We use a different approach to find the best approximation of the curve so that it resembles the highway-like curve. We compute the curvature value by analytical differentiation of the Bezier curve, and then compute the maximum driving speed using the curvature information obtained. Our research rests on some assumptions: first, that the Bezier curve estimates the real shape of the curve, which can be verified visually. Even though the fitting process of the Bezier curve does not interpolate exactly on the curve of interest, we believe that the speed estimates are acceptable. We verified our results against a manual calculation of the curvature from the map.
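The curvature computation described here can be sketched as follows; the control points and the lateral-acceleration bound a_max are assumptions for illustration, not values from the study.

```python
import numpy as np

def cubic_bezier_curvature(P, t):
    """Curvature of a cubic Bezier with control points P (4x2) at parameter t.

    Uses the analytic first and second derivatives of the curve and
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
    """
    P = np.asarray(P, dtype=float)
    d1 = 3 * ((1 - t)**2 * (P[1] - P[0])
              + 2 * (1 - t) * t * (P[2] - P[1])
              + t**2 * (P[3] - P[2]))
    d2 = 6 * ((1 - t) * (P[2] - 2 * P[1] + P[0])
              + t * (P[3] - 2 * P[2] + P[1]))
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    return abs(cross) / np.hypot(d1[0], d1[1])**3

def max_speed(kappa, a_max=3.0):
    """Largest speed whose lateral acceleration v^2 * kappa stays below
    a_max (an assumed comfort/grip bound in m/s^2)."""
    return np.inf if kappa == 0 else np.sqrt(a_max / kappa)

# Illustrative control points in metres (assumed, not from the study):
P = [(0, 0), (40, 0), (60, 20), (100, 40)]
kappa = max(cubic_bezier_curvature(P, t) for t in np.linspace(0, 1, 101))
print(max_speed(kappa))   # m/s at the tightest point of the segment
```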
Abstract: We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. Here, we consider an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and importance sampling, and we then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
Abstract: In this paper, we propose a variational EM inference algorithm for the multiclass Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can simultaneously derive both a posterior distribution of the latent function and estimators of the hyper-parameters in a multiclass Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework, and is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using the Bayesian formula and the LA technique, we approximate the posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator for the hyper-parameters of the covariance matrix necessary to define the prior distribution of the latent function. These two steps repeat iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
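To indicate the shape of the expectation step, here is a minimal sketch of the Laplace approximation for the simpler binary case with a logistic likelihood (the standard Newton scheme), together with a grid-search stand-in for the maximization step; the multiclass extension used in the paper, the RBF kernel and the length-scale grid are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, ell):
    """Squared-exponential kernel with length-scale ell (assumed form)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def laplace_e_step(K, y, iters=20):
    """Newton iteration for the mode of p(f | y), logistic likelihood.

    Binary simplification of the multiclass E-step: returns the mode
    f_hat and the Laplace approximate log marginal likelihood that the
    M-step maximizes (y in {0, 1}).
    """
    n = len(y)
    f = np.zeros(n)
    for _ in range(iters):
        pi = 1.0 / (1.0 + np.exp(-f))
        W = pi * (1.0 - pi)                      # likelihood curvature
        grad = y - pi                            # likelihood gradient
        B = np.eye(n) + np.sqrt(W)[:, None] * K * np.sqrt(W)[None, :]
        b = W * f + grad
        a = b - np.sqrt(W) * np.linalg.solve(B, np.sqrt(W) * (K @ b))
        f = K @ a                                # stable Newton update
    log_lik = np.sum(y * f - np.log1p(np.exp(f)))
    log_q = -0.5 * a @ f + log_lik - 0.5 * np.linalg.slogdet(B)[1]
    return f, log_q

def m_step(X, y, grid=(0.3, 1.0, 3.0)):
    """Pick the kernel length-scale maximizing the approximate evidence."""
    return max(grid, key=lambda ell: laplace_e_step(rbf_kernel(X, ell), y)[1])

X = np.random.randn(30, 2)
y = (X[:, 0] > 0).astype(float)
print(m_step(X, y))
```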
Abstract: Wireless Sensor Networks (WSNs), which sense
environmental data with battery-powered nodes, require multi-hop
communication. This power-demanding task adds an extra workload
that is unfairly distributed across the network. As a result, nodes run
out of battery at different times: this requires an impractical
individual node maintenance scheme. Therefore, we investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (in which all nodes deplete their batteries at almost the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling the approximation accuracy. Simulations show that our algorithm increases the WSN operational life and spreads the communication workload evenly. The results convey a counterintuitive conclusion: distributing the workload fairly amongst nodes may not decrease the network power consumption, yet it still extends the WSN operational life. This is achieved because our cooperative approach decreases the workload of the most burdened cluster in the network.
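A minimal sketch of one way to build such an accuracy-controlled piecewise representation follows; the greedy segmentation and the error tolerance are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def piecewise_linear(samples, tol):
    """Greedy piecewise-linear compression of a sampled signal.

    Grows each segment as long as every sample stays within `tol` of the
    straight line joining the segment endpoints; only the endpoints need
    to be transmitted, shrinking the radio workload.
    """
    n = len(samples)
    breakpoints = [0]
    start = 0
    for end in range(2, n):
        t = np.linspace(0.0, 1.0, end - start + 1)
        line = samples[start] + t * (samples[end] - samples[start])
        if np.max(np.abs(samples[start:end + 1] - line)) > tol:
            breakpoints.append(end - 1)   # close segment, start a new one
            start = end - 1
    breakpoints.append(n - 1)
    return breakpoints

# Synthetic sensed signal (assumed for illustration):
signal = np.sin(np.linspace(0, 3 * np.pi, 300)) + 0.01 * np.random.randn(300)
bp = piecewise_linear(signal, tol=0.05)
print(f"{len(bp)} endpoints instead of {len(signal)} samples")
```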
Abstract: In this article, we used the residual correction method to deal with transient thermoelastic problems in a hollow spherical region where the continuum medium possesses spherically isotropic thermoelastic properties. Based on linear thermoelastic theory, the equations of hyperbolic heat conduction and thermoelastic motion were combined to establish a thermoelastic dynamic model that takes into account the deformation acceleration effect and the non-Fourier effect under transient thermal shock. Approximate solutions for the temperature and displacement distributions are obtained using the residual correction method based on the maximum principle, in combination with the finite difference method, making it easier and faster to obtain upper and lower approximations of the exact solutions. The proposed method is found to be an effective numerical method with satisfactory accuracy. Moreover, the results show that the effect of transient thermal shock induced by deformation acceleration is enhanced by non-Fourier heat conduction, with increased peak stress. This influence on the stress increases with the thermal relaxation time.
Abstract: This work deals with the problem of MHD mixed convection in a completely porous and differentially heated vertical channel. The Darcy-Brinkman-Forchheimer model with the Boussinesq approximation is adopted, and the governing equations are solved by the finite volume method. The effects on the velocity and temperature fields of the magnetic field and buoyancy force intensities, given by the Hartmann and Richardson numbers respectively, as well as of the Joule heating, represented by the Eckert number, are examined. The main results show an increase in the heat transfer rate with decreasing Darcy number and increasing Ri and Ha when Joule heating is neglected.
Abstract: DNA barcodes provide a good source of the information needed to classify living species. The classification problem has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. However, all the methods in use remain computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. In fact, our method avoids the complex problem of form and structure in different classes of organisms. The empirical data and their classification performance are compared with other methods. In this study, we present our system, which consists of three phases. The first, called transformation, is composed of three sub-steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform and power spectrum signal processing. The second phase is an approximation, empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third phase, the classification of DNA barcodes, is realized by applying a hierarchical classification algorithm.
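To illustrate the transformation phase, here is a minimal sketch of EIIP codification followed by a Fourier power spectrum; the EIIP values are the standard ones from the genomic signal-processing literature, and the downstream MLWNN approximation and classification phases are not shown.

```python
import numpy as np

# Standard Electron-Ion Interaction Pseudopotential value per nucleotide:
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def barcode_power_spectrum(sequence):
    """Map a DNA barcode to its EIIP numeric signal and return the power
    spectrum of its Fourier transform (the transformation phase)."""
    x = np.array([EIIP[b] for b in sequence.upper() if b in EIIP])
    x = x - x.mean()                 # remove the DC bias before the FFT
    spectrum = np.fft.rfft(x)
    return np.abs(spectrum) ** 2

# Toy barcode fragment (assumed for illustration):
ps = barcode_power_spectrum("ATGCGTACGTTAGCCTAGGCATCG")
print(ps[:5])
```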
Abstract: This work presents an improved single fiber pull-out test for fiber/matrix interface characterization. The test has been used to study the inter-facial shear strength (IFSS) of hemp fibers reinforcing polypropylene (PP). To this end, the fiber diameter was carefully measured using a tomography-inspired method, so that the fiber section contour can be approximated by either a circle or a polygon. The results show that the IFSS is overestimated if the circular approximation is used. The influence of the molding temperature on the IFSS has also been studied. We find that a molding temperature of 183 °C leads to better interfacial properties; above or below this temperature, the interface strength is reduced.
Abstract: With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can easily be copied or tampered with, so security has become an important topic in the protection of digital data. Digital watermarking is a method to protect the ownership of digital data, although embedding the watermark inevitably affects the quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, meaning that users will not be conscious of the existence of the embedded watermark even though the embedded image differs slightly from the original image. Since VQ carries a heavy computational burden, we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process.
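For reference, a minimal sketch of the partial distortion search idea on a VQ codebook follows; the early-exit criterion shown is the basic PDS rule, while the mean approximation scheme that the paper couples it with is not sketched here, and the random codebook is an assumption for illustration.

```python
import numpy as np

def pds_nearest_codeword(vector, codebook):
    """Partial distortion search for VQ encoding.

    Accumulates the squared error dimension by dimension and abandons a
    codeword as soon as its partial distortion already exceeds the best
    full distortion found so far, skipping most of the arithmetic.
    """
    best_idx, best_dist = -1, np.inf
    for i, cw in enumerate(codebook):
        dist = 0.0
        for v, c in zip(vector, cw):
            dist += (v - c) ** 2
            if dist >= best_dist:    # early exit: cannot beat current best
                break
        else:
            best_idx, best_dist = i, dist
    return best_idx, best_dist

codebook = np.random.rand(256, 16)   # 256 codewords for 4x4 blocks (assumed)
block = np.random.rand(16)
print(pds_nearest_codeword(block, codebook))
```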
The watermarks we hide in the image can be gray-level, bi-level or color images; text can also be embedded as a watermark. In order to test the robustness of the system, we use Photoshop to sharpen, crop and alter the watermarked image and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
Abstract: The capability of exploiting the electronic charge and spin properties simultaneously in a single material has made diluted magnetic semiconductors (DMS) remarkable in the field of spintronics. We report the design of a DMS based on zinc-blende ZnO doped with a Cr impurity. The full-potential linearized augmented plane wave plus local orbital FP-L(APW+lo) method within density functional theory (DFT) has been adopted to carry out these investigations. For the treatment of the exchange and correlation energy, generalized gradient approximations have been used. Introducing Cr atoms into the ZnO matrix induces a strong magnetic moment with ferromagnetic ordering at the stable ground state. Cr:ZnO was found to favor short-range magnetic interaction, reflecting a tendency toward Cr clustering. The electronic structure of ZnO is strongly influenced by the presence of Cr impurity atoms, with impurity bands appearing in the band gap.
Abstract: Reliability allocation is quite important during the early design and development stages of a system, in order to apportion its specified reliability goal among subsystems. This paper improves the fuzzy reliability allocation method and gives concrete procedures for determining the factor and sub-factor sets, weight sets, judgment set and multi-stage fuzzy evaluation. To determine the weights of the factor and sub-factor sets, modified trapezoidal fuzzy numbers are proposed to reduce errors caused by subjective factors. To decrease the fuzziness in fuzzy division, an approximation method based on linear programming is employed. To compute the explicit values of the fuzzy numbers, the centroid method of defuzzification is used. An example is provided to illustrate the application of the proposed reliability allocation method based on fuzzy arithmetic.
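As an illustration of the defuzzification step, the following sketch computes the centroid of a trapezoidal fuzzy number numerically; the sample trapezoid values are an assumption for illustration.

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Membership of the trapezoidal fuzzy number (a, b, c, d): rises on
    [a, b], equals 1 on [b, c], falls on [c, d] (requires a < b <= c < d)."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def centroid_defuzzify(a, b, c, d, n=10001):
    """Explicit value of the fuzzy number by the centroid (center of
    gravity) method, evaluated on a uniform grid."""
    x = np.linspace(a, d, n)
    mu = trapezoid_membership(x, a, b, c, d)
    return float((x * mu).sum() / mu.sum())

# e.g. an allocated fuzzy reliability weight (values assumed for illustration):
print(centroid_defuzzify(0.2, 0.4, 0.6, 0.9))
```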
Abstract: In this paper, we consider a coded transmission over a frequency-selective channel. We study analytically the convergence of a turbo detector using a maximum a posteriori (MAP) equalizer and a MAP decoder. We demonstrate that the densities of the maximum likelihood (ML) messages exchanged during the iterations are e-symmetric and output-symmetric. Under the Gaussian approximation, this property allows us to perform a one-dimensional analysis of the turbo detector. By deriving analytical expressions for the ML distributions under the Gaussian approximation, we confirm that the bit error rate (BER) performance of the turbo detector converges to the BER performance of the coded additive white Gaussian noise (AWGN) channel at high signal-to-noise ratio (SNR), for any frequency-selective channel.