Abstract: The error diffusion method generates worm artifacts and
weakens edges when a continuous grayscale image is reproduced as a
binary halftone image. First, to enhance edges, we propose an
edge-enhancing filter that considers the quantization error
information and the gradient of the neighboring pixels.
Furthermore, to remove the worm artifacts that often appear in a
halftone image, we adaptively add random noise to the weights of the
error filter.
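The adaptive weight-perturbation idea above can be sketched as a
minimal Floyd-Steinberg-style error-diffusion loop; the base filter
weights and the noise law here are illustrative assumptions, not the
paper's exact design:

```python
import numpy as np

def error_diffuse(img, noise=0.0, rng=None):
    """Floyd-Steinberg error diffusion; `noise` scales a random
    perturbation of the error-filter weights (illustrative scheme)."""
    rng = rng or np.random.default_rng(0)
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    base = np.array([7, 3, 5, 1]) / 16.0  # right, below-left, below, below-right
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            # adaptively add zero-mean random noise to the weights,
            # then renormalise so the error is fully distributed
            wts = base + noise * rng.uniform(-1, 1, 4) * base
            wts = wts / wts.sum()
            if x + 1 < w:
                img[y, x + 1] += err * wts[0]
            if y + 1 < h:
                if x - 1 >= 0:
                    img[y + 1, x - 1] += err * wts[1]
                img[y + 1, x] += err * wts[2]
                if x + 1 < w:
                    img[y + 1, x + 1] += err * wts[3]
    return out
```

Perturbing the weights breaks the deterministic error paths that form
worm patterns, at the cost of slightly higher local noise.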
Abstract: A Hexapod Machine Tool (HMT) is a parallel robot,
mostly based on the Stewart platform. Identification of the kinematic
parameters of an HMT is an important step of the calibration procedure. In
this paper an algorithm is presented for identifying the kinematic
parameters of HMT using inverse kinematics error model. Based on
this algorithm, the calibration procedure is simulated. Measurement
configurations with maximum observability are determined as the first
step of this algorithm to obtain a robust calibration. The errors occurring in
various configurations are illustrated graphically. It has been shown
that the boundaries of the workspace should be searched for the
maximum observability of errors. The importance of using
configurations with sufficient observability in calibrating hexapod
machine tools is verified by trial calibration with two different
groups of randomly selected configurations. One group is selected to
have sufficient observability, while the other disregards the
observability criterion. Simulation results confirm the validity of the
proposed identification algorithm.
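Configuration selection of this kind is usually driven by an
observability index computed from the singular values of the
identification Jacobian; a minimal sketch, assuming the common O1
index (the paper's exact index may differ):

```python
import numpy as np

def observability_index(J):
    """O1-style index: geometric mean of the singular values of the
    identification Jacobian, normalised by the number of measurement
    configurations (rows). Larger is better conditioned."""
    s = np.linalg.svd(J, compute_uv=False)
    return np.prod(s) ** (1.0 / len(s)) / np.sqrt(J.shape[0])
```

Configurations near the workspace boundary tend to enlarge the small
singular values and hence this index, which is consistent with the
abstract's finding.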
Abstract: In visual servoing systems, the data obtained by
vision sensors is used for controlling robots. In this project, a
simulator previously proposed for simulating the performance of a
6R robot was first examined in terms of software and testing, and
its existing defects were corrected. In the first version of the
simulation, the robot was directed toward the target object only by
a position-based method using two cameras in the environment. In the
new version of the software, three cameras are used simultaneously.
The camera installed as eye-in-hand on the end-effector of the robot
is used for visual servoing in a feature-based method: the target
object is recognized according to its characteristics, and the robot
is directed toward the object by an algorithm similar to the
function of human eyes. Then, the function and accuracy of the
robot's operation are examined through a position-based visual
servoing method using the two cameras installed as eye-to-hand in
the environment. Finally, the obtained results are tested against
the ANSI/RIA R15.05-2 standard.
Abstract: Wireless sensor networks (WSNs) consist of many sensor nodes placed in unattended environments, such as military sites, in order to collect important information. It is essential to implement a secure protocol that prevents the forwarding of forged data and the modification of aggregated data, while keeping the delay and the communication, computation, and storage overheads low. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol, the network is divided into virtual cells, and the nodes within each cell produce a shared key with which to send and receive concealed data. Because data aggregation within each cell is local and a secure authentication mechanism is implemented, the aggregation delay is very low and the injection of false data by malicious nodes is not possible. To evaluate the performance of the proposed protocol, we present computational models that show its performance and low overhead.
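The concealment step can be sketched with an additive privacy
homomorphism of the Castelluccia style, where ciphertexts can be
summed without decryption; the paper's actual cipher and cell-level
key agreement are not specified here, so this is only an assumed
minimal scheme:

```python
M = 2 ** 16  # modulus, chosen large enough for the aggregate sum

def conceal(m, key):
    """Conceal a reading with a per-node key (additive stream cipher)."""
    return (m + key) % M

def aggregate(ciphers):
    """The aggregator sums ciphertexts without seeing any plaintext."""
    return sum(ciphers) % M

def reveal(agg, keys):
    """The sink removes the sum of the keys to recover the aggregate."""
    return (agg - sum(keys)) % M
```

The homomorphic property is what lets an in-cell aggregator combine
readings while the individual values stay concealed.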
Abstract: Design is the primordial part of building a
computer system. Several tools have been developed to help designers
describe their software. These tools have enjoyed great success in the
relational database domain, since they can generate the SQL script
that models a database from an Entity/Association model. However,
as the computing field has evolved, relational databases have shown
their limits, and the object-relational model has come into
increasingly common use. Current design tools do not support all the
new concepts introduced by this model, nor the syntax of the SQL3
language. In this paper we propose a tool, called "NAVIGTOOLS",
that assists in the design and implementation of object-relational
databases and allows the user to generate the script modeling a
database in the SQL3 language. The tool is based on the
Entity/Association and navigational models for modeling
object-relational databases.
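A toy sketch of the kind of SQL3 script such a tool emits from an
entity description (Oracle-flavoured object-relational syntax is
assumed here; NAVIGTOOLS' actual output format is not specified in
the abstract):

```python
def entity_to_sql3(name, attrs, key):
    """Generate an SQL3 object type plus a typed table from an entity
    described as {attribute: sql_type} (illustrative generator)."""
    cols = ",\n  ".join(f"{a} {t}" for a, t in attrs.items())
    return (f"CREATE TYPE {name}_t AS OBJECT (\n  {cols}\n);\n"
            f"CREATE TABLE {name} OF {name}_t (PRIMARY KEY ({key}));")

script = entity_to_sql3("person", {"id": "NUMBER", "name": "VARCHAR2(40)"}, "id")
```

The object type carries the entity's structure, while the typed table
gives it relational storage with the usual constraints.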
Abstract: In many data mining applications, it is a priori known
that the target function should satisfy certain constraints imposed
by, for example, economic theory or a human-decision maker. In this
paper we consider partially monotone prediction problems, where the
target variable depends monotonically on some of the input variables
but not on all. We propose a novel method for constructing prediction
models in which monotone dependencies with respect to some of
the input variables are preserved by construction. Our
method belongs to the class of mixture models. The basic idea is to
convolute monotone neural networks with weight (kernel) functions
to make predictions. By using simulation and real case studies,
we demonstrate the application of our method. To obtain a sound
assessment of the performance of our approach, we use standard
neural networks with weight decay and partially monotone linear
models as benchmark methods for comparison. The results show that
our approach outperforms partially monotone linear models in terms
of accuracy. Furthermore, the incorporation of partial monotonicity
constraints not only leads to models that are in accordance with the
decision maker's expertise, but also reduces considerably the model
variance in comparison to standard neural networks with weight
decay.
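The monotone building block can be sketched as follows: with
non-negative weights and a monotone activation, a one-hidden-layer
network is non-decreasing in every input. This illustrates only the
constraint itself, not the paper's full mixture model that convolutes
such networks with kernel weight functions:

```python
import numpy as np

def monotone_net(x, W1, b1, w2, b2):
    """One-hidden-layer network. If all entries of W1 and w2 are
    non-negative and the activation is monotone increasing, the
    output is non-decreasing in each component of x."""
    h = np.tanh(x @ W1 + b1)
    return h @ w2 + b2
```

In a partially monotone setting, only the columns of W1 attached to
the monotone inputs need the non-negativity constraint.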
Abstract: Image processing for capsule endoscopy requires large
memory, and diagnosis takes hours, since the recording time is
normally more than 8 hours. A real-time analysis algorithm for
capsule images can therefore be clinically very useful: it can
differentiate abnormal tissue from healthy structures and provide
correlation information among the images. Bleeding is our interest
in this regard, and we
propose a method of detecting frames with potential bleeding in
real-time. Our detection algorithm is based on statistical analysis and
the shapes of bleeding spots. We tested our algorithm with 30 cases of
capsule endoscopy of the digestive tract. Results were excellent:
a sensitivity of 99% and a specificity of 97% were achieved in
detecting the image frames with bleeding spots.
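A minimal sketch of a statistics-based bleeding test on the colour
channels; the dominance thresholds below are illustrative
assumptions, not the values fitted in the paper:

```python
import numpy as np

def bleeding_score(rgb):
    """Fraction of pixels whose red channel strongly dominates the
    green and blue channels (illustrative thresholds)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mask = (r > 90) & (r > 1.8 * g) & (r > 1.8 * b)
    return mask.mean()

def has_bleeding(rgb, thresh=0.01):
    """Flag a frame as a potential bleeding frame."""
    return bleeding_score(rgb) > thresh
```

A shape-based second stage (as the abstract describes) would then
examine the connected components of the mask rather than only their
total area.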
Abstract: An improved topology of a voltage-fed, quasi-resonant,
soft-switching LCrCdc series-parallel half-bridge inverter with
constant frequency for electronic ballast applications is proposed
in this paper. The new topology offers a low-cost way to reduce
switching losses and circuit ratings and thereby achieve a
high-efficiency ballast. The effect of switching losses on ballast
efficiency is discussed from an experimental point of view, and an
improved topology which accomplishes soft-switching operation over a
wide power regulation range is proposed. The proposed structure uses
a reverse-recovery diode to provide better operation of the ballast
system. A symmetrical pulse width modulation (PWM) control scheme is
implemented to regulate a wide range of output power.
Simulation results are verified against experimental measurements
obtained with a ballast-lamp laboratory prototype. Different load
conditions are considered in order to clarify the
performance of the proposed converter.
Abstract: The H.264/AVC standard is a highly efficient video
codec that provides high-quality video at low bit rates. Because it
employs advanced techniques, its computational complexity is high,
and this complexity is the major obstacle to the implementation of a
real-time encoder and decoder. Parallelism is one approach, and it
can be implemented on a multi-core system.
We analyze macroblock-level parallelism which ensures the same bit
rate with high concurrency of processors. In order to reduce the
encoding time, dynamic data partition based on macroblock region is
proposed. The data partition offers advantages in load balancing and
data communication overhead. Using the data partition, the encoder
obtains more than 3.59x speed-up on a four-processor system. This
work can be applied to other multimedia processing applications.
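Macroblock-level parallelism in H.264 rests on each macroblock's
dependency on its left, top, and top-right neighbours, which yields
the classic wavefront order sketched below; the paper's dynamic
region-based partition builds on top of such a dependency schedule:

```python
def wavefront_schedule(mb_w, mb_h):
    """Group macroblocks into waves that can be encoded in parallel.
    MB (x, y) depends on (x-1, y), (x, y-1) and (x+1, y-1), so it
    becomes ready in wave x + 2*y."""
    waves = {}
    for y in range(mb_h):
        for x in range(mb_w):
            waves.setdefault(x + 2 * y, []).append((x, y))
    return [waves[k] for k in sorted(waves)]
```

Because no macroblock in a wave depends on another in the same wave,
each wave can be distributed across cores without changing the bit
stream.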
Abstract: Emerging Bio-engineering fields such as Brain
Computer Interfaces, neuroprosthesis devices, and the modeling and
simulation of neural networks have led to increased research activity
in algorithms for the detection, isolation and classification of Action
Potentials (AP) from noisy data trains. Current techniques in the field
of 'unsupervised no-prior knowledge' biosignal processing include
energy operators, wavelet detection and adaptive thresholding. These
tend to be biased towards larger AP waveforms; APs may be missed due
to deviations in spike shape and frequency, and correlated noise
spectra can cause false detections. Such algorithms also tend to
incur a large computational expense.
A new signal detection technique is proposed, based upon the ideas
of phase-space diagrams and trajectories: a delayed copy of the
signal is used to highlight discontinuities relative to background
noise. This idea has been used to create algorithms that
are computationally inexpensive and address the above problems.
Distinct APs have been picked out and manually classified from
real physiological data recorded from a cockroach. To facilitate
testing of the new technique, an Auto Regressive Moving Average
(ARMA) noise model has been constructed based upon the background
noise of the recordings. Together with the classified APs, this
model enables the generation of realistic neuronal data sets at
arbitrary signal-to-noise ratios (SNR).
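The delayed-copy idea can be sketched as follows: an AP drives the
phase-space point (x(t-d), x(t)) away from the diagonal, so
thresholding the off-diagonal distance against a robust estimate of
the background-noise scale flags spikes cheaply. The delay and the
threshold factor below are illustrative choices, not the paper's:

```python
import numpy as np

def detect_spikes(x, delay=3, k=5.0):
    """Flag samples where the signal departs from its delayed copy by
    more than k times a robust (median-based) noise-scale estimate."""
    d = np.abs(x[delay:] - x[:-delay])
    thresh = k * np.median(d) / 0.6745  # robust scale estimate
    return np.nonzero(d > thresh)[0] + delay
```

The whole detector is a subtraction, a median, and a comparison per
window, which is why such schemes are computationally inexpensive.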
Abstract: In this paper, a fast motion compensation algorithm is
proposed that improves coding efficiency for video sequences with
brightness variations. We also propose a cross entropy measure
between histograms of two frames to detect brightness variations. The
framewise brightness variation parameters, a multiplier and an offset
field for image intensity, are estimated and compensated. Simulation
results show that the proposed method yields a higher peak signal to
noise ratio (PSNR) compared with the conventional method, with a
greatly reduced computational load, when the video scene contains
illumination changes.
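A minimal sketch of estimating the framewise multiplier and offset in
the model cur = a * ref + b from first- and second-order statistics;
this closed form is one common choice, and the paper's
histogram/cross-entropy-based estimator may differ:

```python
import numpy as np

def estimate_brightness_params(ref, cur):
    """Fit cur = a * ref + b by matching means and standard deviations
    of the two frames (moment matching)."""
    a = cur.std() / ref.std()
    b = cur.mean() - a * ref.mean()
    return a, b

def compensate(ref, a, b):
    """Apply the brightness model to the reference frame before motion
    search, so matching is done under comparable illumination."""
    return a * ref + b
```

Compensating the reference with (a, b) before block matching is what
restores the PSNR under illumination changes while keeping the search
cheap.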
Abstract: This paper presents the application of computational intelligence techniques to economic load dispatch problems. The fuel cost equation of a thermal plant is generally expressed as a continuous quadratic function. In real situations, the fuel cost equations can be discontinuous. In view of the above, both continuous and discontinuous fuel cost equations are considered in the present paper. First, a genetic algorithm optimization technique is applied to a 6-generator, 26-bus test system having continuous fuel cost equations. Results are compared with the conventional quadratic programming method to show the superiority of the proposed computational intelligence technique. Further, a 10-generator system, each generator having three fuel options and distributed over three areas, is considered, and a particle swarm optimization algorithm is employed to minimize the cost of generation. To show the superiority of the proposed approach, the results are compared with other published methods.
Abstract: In this paper, a recursive algorithm for the
computation of 2-D DCT using Ramanujan Numbers is proposed.
With this algorithm, the floating-point multiplication is completely
eliminated and hence the multiplierless algorithm can be
implemented using shifts and additions only. The orthogonality of
the recursive kernel is well maintained through matrix factorization
to reduce the computational complexity. The inherent parallel
structure yields simpler programming and hardware implementation
and provides (3/2)N^2 log2 N - N + 1 additions and (N^2/2) log2 N
shifts, which is
very much less complex when compared to other recent multiplierless
algorithms.
Abstract: Grid computing is a high-performance computing
environment for solving large-scale computational applications. Grid
computing contains resource management, job scheduling, security
problems, information management and so on. Job scheduling is a
fundamental and important issue in achieving high performance in
grid computing systems. However, designing an efficient scheduler
and implementing it is a big challenge. In grid computing, job
scheduling algorithms can be further improved by grouping
light-weight or small jobs into coarse-grained groups of jobs, which
reduces communication and processing time and enhances resource
utilization. The grouping strategy considers the processing power,
memory size, and bandwidth requirements of each job so as to reflect
a real grid system. The experimental results demonstrate that the
proposed scheduling algorithm efficiently reduces the processing
time of jobs in comparison with other approaches.
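The grouping rule can be sketched as packing small jobs until a
group's total length matches the resource's capacity over a chosen
granularity interval; this is the usual formulation of job grouping,
and the paper's exact thresholds (including the memory and bandwidth
checks) may differ:

```python
def group_jobs(job_lengths, resource_mips, granularity):
    """Pack fine-grained jobs (lengths in million instructions) into
    groups whose total length fits resource_mips * granularity."""
    target = resource_mips * granularity  # MI the resource can do per interval
    groups, cur, cur_len = [], [], 0
    for j in job_lengths:
        if cur and cur_len + j > target:
            groups.append(cur)
            cur, cur_len = [], 0
        cur.append(j)
        cur_len += j
    if cur:
        groups.append(cur)
    return groups
```

Sending one coarse group per dispatch replaces many small transfers,
which is where the communication-time saving comes from.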
Abstract: In this paper, delay-dependent stability analysis for
neutral-type neural networks with uncertain parameters and
time-varying delay is studied. By constructing a new
Lyapunov-Krasovskii functional and dividing the delay interval into
multiple segments, a novel sufficient condition is established to
guarantee the global asymptotic stability of the considered
system. Finally, a numerical example is provided to illustrate the
usefulness of the proposed results.
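The system class can be illustrated by the standard neutral-type
delayed neural network model from the literature; the paper's exact
matrices and uncertainty description are assumed to fit this general
form:

```latex
\dot{x}(t) - C\,\dot{x}(t-\tau(t)) = -A\,x(t) + W_0 f(x(t)) + W_1 f(x(t-\tau(t))),
\qquad 0 \le \tau(t) \le \bar{\tau}, \quad \dot{\tau}(t) \le \mu .
```

Dividing the delay interval $[0,\bar{\tau}]$ into $m$ segments adds
one integral term per segment to the Lyapunov-Krasovskii functional,
for example

```latex
V(x_t) = x^{T}(t) P\, x(t)
 + \sum_{i=1}^{m} \int_{t - i\bar{\tau}/m}^{\,t - (i-1)\bar{\tau}/m} x^{T}(s)\, Q_i\, x(s)\, ds + \cdots
```

and it is this finer partition that makes the resulting sufficient
condition less conservative than a single-interval functional.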
Abstract: The composition, vapour pressure, and heat capacity
of nine biodiesel fuels from different sources were measured. The
vapour pressure of the biodiesel fuels is modeled assuming an ideal
liquid phase of the fatty acid methyl esters constituting the fuel. New
methodologies to calculate the vapour pressure and ideal gas and
liquid heat capacities of the biodiesel fuel constituents are proposed.
Two alternative optimization scenarios are evaluated: 1) vapour
pressure only; 2) vapour pressure constrained with liquid heat
capacity. Without physical constraints, significant errors in liquid
heat capacity predictions were found whereas the constrained
correlation accurately fit both vapour pressure and liquid heat
capacity.
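The ideal-liquid-phase assumption amounts to Raoult's law over the
fatty acid methyl esters, with each component's vapour pressure given
by a correlation such as an Antoine-type equation; the function names
and constants below are illustrative, not the paper's fitted
methodology:

```python
import math

def antoine(A, B, C, T):
    """Antoine-type vapour pressure correlation, p = 10^(A - B/(T+C)).
    Constants and units must match (e.g. mmHg with T in deg C)."""
    return 10 ** (A - B / (T + C))

def mixture_vapour_pressure(x, psat):
    """Ideal-liquid (Raoult's law) mixture pressure:
    P = sum_i x_i * Psat_i over the FAME mole fractions."""
    return sum(xi * pi for xi, pi in zip(x, psat))
```

Constraining the fit with liquid heat capacity, as the abstract
describes, regularises the component correlations so that both
properties are reproduced simultaneously.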
Abstract: In order to define a new model of Tunisian foot
sizes and to build more comfortable shoes, Tunisian
manufacturers must be able to offer their customers products that
fit the majority of the target population. Moreover, the use of shoe
models coming mainly from other countries causes a mismatch between
the foot and the comfort of Tunisian shoes: since every foot is
unique, these models are uncomfortable for the Tunisian foot. We
have a set of measurements produced from a 3D scan of the feet of a
diverse population (women, men ...), and we analyze these data to
define a foot model specific to Tunisian footwear design.
In this paper we propose two new approaches to modeling a new
foot-size model. We first use neural networks, and specifically
the Kohonen network. Next, we combine neural networks with the
concept of half-foot sizes to improve the models already found.
Finally, we compare the results obtained with each approach and
decide which approach yields the foot model best suited to more
comfortable shoes.
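A minimal 1-D Kohonen self-organizing map over foot-measurement
vectors can be sketched as follows; the unit count, schedules, and
2-D toy data are illustrative, not the paper's configuration:

```python
import numpy as np

def train_som(data, n_units=5, iters=2000, seed=0):
    """Train a 1-D Kohonen map: each sample pulls its best-matching
    unit (and, early on, its map neighbours) toward itself, so the
    units self-organise into size prototypes."""
    rng = np.random.default_rng(seed)
    W = data[rng.integers(0, len(data), n_units)].astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        lr = 0.5 * (1 - t / iters)                       # decaying learning rate
        sigma = max(1e-3, n_units / 2 * (1 - t / iters))  # shrinking neighbourhood
        bmu = np.argmin(((W - x) ** 2).sum(1))
        d = np.arange(n_units) - bmu
        h = np.exp(-d ** 2 / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)
    return W
```

After training, each unit's weight vector is a prototype foot, and
the ordered map gives a natural size progression from which half-foot
sizes can be interpolated.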
Abstract: A theory for optimal filtering of infinite sets of random
signals is presented. There are several new distinctive features of the
proposed approach. First, a single optimal filter for processing any
signal from a given infinite signal set is provided. Second, the filter is
presented in the special form of a sum with p terms where each term
is represented as a combination of three operations. Each operation
is a special stage of the filtering aimed at facilitating the associated
numerical work. Third, an iterative scheme is implemented into the
filter structure to provide an improvement in the filter performance at
each step of the scheme. The final step of the scheme concerns signal
compression and decompression. This step is based on the solution of
a new rank-constrained matrix approximation problem. The solution
to the matrix problem is described in this paper. A rigorous error
analysis is given for the new filter.
Abstract: Longitudinal data typically have the characteristics of
changes over time, nonlinear growth patterns, between-subjects
variability, and the within errors exhibiting heteroscedasticity and
dependence. The data exploration is more complicated than that of
cross-sectional data. The purpose of this paper is to organize and
integrate various visual-graphical techniques for exploring
longitudinal data. By applying the proposed methods, investigators
can answer research questions that include characterizing or
describing the
growth patterns at both group and individual level, identifying the time
points where important changes occur and unusual subjects, selecting
suitable statistical models, and suggesting possible within-error
variance structures.
Abstract: A 2.4GHz (RF) down conversion Gilbert Cell mixer,
implemented in a 0.18-μm CMOS technology with a 1.8V supply, is
presented. A current-bleeding (charge-injection) technique has been
used to increase the conversion gain and the linearity of the mixer.
The proposed mixer provides 10.75 dB of conversion gain (CG) with
14.3 mW of total power consumption. The IIP3 and 1-dB compression
point of the mixer are 8 dBm and -4.6 dBm, respectively, at a
300 MHz IF frequency. Comparing the current design with the
conventional mixer design demonstrates better performance in the
conversion gain, linearity, noise figure and port-to-port isolation.