Abstract: In this article we present a change point detection algorithm based on the continuous wavelet transform. At the beginning of the article we describe a transformation of the signal that must be made for the purpose of change detection. Then a case study related to iron ore sinter production, which can be solved using the proposed technique, is discussed. After that we describe a probabilistic algorithm that finds changes in the transformed signal. It is shown that the algorithm works well in the presence of noise and abnormal random bursts.
Abstract: In this paper a class of analog algorithms based on the
concept of Cellular Neural Networks (CNN) is applied to several
processing operations on an important class of medical images,
namely retina images, for detecting various symptoms connected with
diabetic retinopathy. Specific processing tasks such as
morphological operations, linear filtering and thresholding are
proposed, the corresponding template values are given, and
simulations on real retina images are provided.
Abstract: Data Structures and Algorithms is a module in most
Computer Science or Information Technology curricula, and it is one
of the modules students most often identify as difficult. This paper
demonstrates how programming a solution for Sudoku can make
abstract concepts more concrete. The paper relates the concepts of a
typical Data Structures and Algorithms module to a step-by-step
solution for Sudoku in a human-oriented, as opposed to a
computer-oriented, manner.
Abstract: Background noise is particularly damaging to speech
intelligibility for people with hearing loss, especially for patients
with sensorineural loss. Several investigations of speech
intelligibility have demonstrated that sensorineural loss patients
need a 5-15 dB higher SNR than normal hearing subjects. This paper
describes a Discrete Cosine Transform Power Normalized Least Mean
Square (DCT-LMS) algorithm to improve the SNR and the convergence
rate of the LMS for sensorineural loss patients. Since it requires
only real arithmetic, it establishes a faster convergence rate than
time-domain LMS, and the transformation improves the eigenvalue
distribution of the input autocorrelation matrix of the LMS filter.
The DCT has good orthonormal, separable, and energy compaction
properties. Although the DCT does not separate frequencies, it is a
powerful signal decorrelator. It is a real-valued function and thus
can be effectively used in real-time operation. The advantages of
DCT-LMS over the standard LMS algorithm are shown via SNR and
eigenvalue ratio computations. Exploiting the symmetry of the basis
functions, the DCT transform matrix [AN] can be factored into a
series of ±1 butterflies and rotation angles. This factorization
results in one of the fastest DCT implementations. There are
different ways to obtain such factorizations; this work uses the fast
factored DCT algorithm developed by Chen et al. The computer
simulation results show the superior convergence characteristics of
the proposed algorithm: the SNR is improved by at least 10 dB for
input SNR less than or equal to 0 dB, with faster convergence and
better time and frequency characteristics.
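The transform-domain update described above (DCT of the tap vector, per-bin power normalization, LMS weight update) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the filter length, step size and power-smoothing constant are arbitrary choices, the function names are hypothetical, and the DCT is built as an explicit matrix rather than through the fast Chen factorization.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix; row k holds the k-th basis vector."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def dct_lms(x, d, N=8, mu=0.05, beta=0.95, eps=1e-6):
    """Power-normalized LMS in the DCT domain.

    x: input signal, d: desired signal.  Returns (w, e): the adaptive
    weights in the transform domain and the error signal."""
    C = dct_matrix(N)
    w = np.zeros(N)      # transform-domain weights
    p = np.ones(N)       # per-bin power estimate (ones avoid huge first steps)
    e = np.zeros(len(x))
    buf = np.zeros(N)    # tap-delay line, most recent sample first
    for i in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[i]
        u = C @ buf                         # DCT of the tap vector
        p = beta * p + (1 - beta) * u**2    # track power in each bin
        e[i] = d[i] - w @ u
        w += mu * e[i] * u / (p + eps)      # power-normalized update
    return w, e
```

Because the DCT is orthonormal, the equivalent time-domain filter is recovered as `C.T @ w`; the per-bin normalization is what equalizes the eigenvalue spread of a colored input.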
Abstract: The Programmable Logic Controller (PLC) plays a
vital role in automation and process control. Grafcet is used for
representing the control logic, while traditional programming
languages are used for describing the pure algorithms. Grafcet
divides the process to be automated into elementary sequences that
can be easily implemented. Each sequence represents a step with
associated actions programmed using textual or graphical languages,
as the case may be. The programming task is simplified by using a
set of subroutines shared by several steps. The paper presents an
example implementation for a punching machine for sheets and
plates. For programming a complex sequential process, the use of
graphical languages is a necessary solution. The state of the
Grafcet can be used for debugging and malfunction determination.
Using this method, combined with a set of acquired knowledge about
the process application, reduces the downtime of the machine and
improves productivity.
Abstract: In an era of intense competition, understanding and
satisfying customers' requirements are critical tasks for a company
seeking to make a profit. Customer relationship management (CRM)
has thus become an important business issue. With the help of
data mining techniques, a manager can explore and analyze a large
quantity of data to discover meaningful patterns and rules. Among
these methods, the well-known association rule is the most commonly
used. This paper builds on the Apriori algorithm and uses genetic
algorithms combined with a data mining method to discover fuzzy
classification rules. The mined results can be applied in CRM to
help decision makers make correct business decisions for marketing
strategies.
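Since the paper builds on the Apriori algorithm, a minimal sketch of its frequent-itemset stage may help fix ideas. The support threshold and toy transactions are illustrative, and the paper's genetic-algorithm and fuzzy-rule components are not reproduced here.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) mapped to their counts.

    min_support is a fraction of the number of transactions."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    # Frequent 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation with the Apriori pruning rule:
        # every (k-1)-subset of a candidate must itself be frequent.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t)
                  for c in candidates}
        frequent = {s: c for s, c in counts.items() if c / n >= min_support}
        result.update(frequent)
        k += 1
    return result
```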
Abstract: Different methods based on biometric algorithms are
presented for eigenface detection, including face recognition,
identification and verification. The theme of this research is to
manage the critical processing stages (accuracy, speed, security and
monitoring) of face activities with the flexibility of searching and
editing the secure authorized database. In this paper we implement
techniques such as eigenface vector reduction, using a texture and
shape vector phenomenon to remove complexity, while density matching
scores with Face Boundary Fixation (FBF) extract the most likely
characteristics in this media processing content. We examine the
development and performance efficiency of the database by applying
our algorithms in both the recognition and detection phases. Our
results show encouraging gains in accuracy and security over a
number of previous approaches in all the above processes.
Abstract: This study presents a hybrid neural network and Gravitational Search Algorithm (HNGSA) method to solve the well-known Wessinger's equation. To this end, the gravitational search algorithm (GSA) technique is applied to train a multi-layer perceptron neural network, which is used as an approximate solution of Wessinger's equation. A trial solution of the differential equation is written as the sum of two parts. The first part satisfies the initial/boundary conditions and does not contain any adjustable parameters; the second part is constructed so as not to affect the initial/boundary conditions and involves the adjustable parameters (the weights and biases) of a multi-layer perceptron neural network. In order to demonstrate the presented method, its results are compared with some known numerical methods. The results show that the presented method yields a closer approximation to the analytic solution than the other numerical methods, and it can be easily extended to solve a wide range of problems.
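The two-part trial solution can be illustrated on a simpler test equation. The sketch below solves y' = -y with y(0) = 1 (an assumption chosen for brevity, not Wessinger's equation) and replaces the GSA training stage with a plain adaptive random search; only the trial-solution construction mirrors the method described above, and all names are hypothetical.

```python
import numpy as np

def mlp(x, p, H=6):
    """Tiny one-hidden-layer perceptron; p packs H input weights,
    H biases and H output weights."""
    w, b, v = p[:H], p[H:2 * H], p[2 * H:]
    return np.tanh(np.outer(x, w) + b) @ v

def trial(x, p, y0=1.0):
    """Trial solution y_t(x) = y0 + x*N(x, p): the constant term meets
    the initial condition y(0) = y0, and the factor x keeps the
    network part from affecting it."""
    return y0 + x * mlp(x, p)

def residual_loss(p, x, h=1e-4):
    """Mean squared residual of the test ODE y' + y = 0; the trial
    solution's derivative is taken by central differences."""
    dy = (trial(x + h, p) - trial(x - h, p)) / (2 * h)
    return np.mean((dy + trial(x, p)) ** 2)

def train(x, H=6, iters=4000, seed=0):
    """(1+1) random search with step-size adaptation, standing in for
    the paper's GSA training stage."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal(3 * H) * 0.5
    best, step = residual_loss(p, x), 0.3
    for _ in range(iters):
        cand = p + rng.standard_normal(len(p)) * step
        loss = residual_loss(cand, x)
        if loss < best:
            p, best, step = cand, loss, step * 1.2   # success: widen search
        else:
            step = max(step * 0.97, 1e-3)            # failure: shrink step
    return p
```

The trained trial solution can then be compared pointwise with the analytic solution exp(-x) on the training interval.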
Abstract: This paper presents an application of particle swarm
optimization (PSO) to grounding grid planning and compares it with
the application of a genetic algorithm (GA). Firstly, based on IEEE
Std. 80, the cost function of the grounding grid and the constraints
on ground potential rise, step voltage and touch voltage are
constructed to formulate the optimization problem of grounding grid
planning. Secondly, GA and PSO algorithms for obtaining the optimal
solution of the grounding grid are developed. Finally, a case of
grounding grid planning demonstrates the superiority and availability
of the PSO algorithm and the proposed planning results in terms of
cost and computational time.
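A minimal global-best PSO of the kind applied above can be sketched as follows. The grounding-grid cost function and IEEE Std. 80 constraints are not reproduced here, so a simple quadratic test function stands in for them, and the swarm parameters are conventional defaults rather than the paper's settings.

```python
import numpy as np

def pso(cost, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology).

    cost: function mapping a 1-D position vector to a scalar.
    bounds: (lo, hi) box constraints applied to every coordinate."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull toward pbest + social pull toward g.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # stay in the feasible box
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```

In the paper's setting, `cost` would evaluate the grid cost plus penalty terms for violated voltage constraints; here it is only a placeholder.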
Abstract: A method of dynamic mesh based airfoil optimization is proposed to address the drawbacks of surrogate model based airfoil optimization. Programs are designed to achieve the dynamic mesh. Boundary conditions are added by integrating the commercial software Pointwise, while the CFD calculation is carried out by the commercial software Fluent. The data exchange and communication between the software and the programs referred to above have been accomplished, and the whole optimization process is performed on the iSIGHT platform. A simplified airfoil optimization study case shows that the aerodynamic performance of the airfoil is significantly improved, while massive repeated operations are saved and the robustness and credibility of the optimization result are increased. This case shows that dynamic mesh based airfoil optimization is an effective and highly efficient method.
Abstract: The rapid growth of e-Commerce services has been
clearly observed in the past decade. However, the methods used to
verify authenticated users still depend widely on numeric
approaches. The search for other verification methods suitable for
online e-Commerce is therefore an interesting issue. In this paper, a
new online signature-verification method using angular
transformation is presented. Delay shifts existing in online
signatures are estimated by an estimation method relying on angle
representation. In the proposed signature-verification algorithm, all
components of the input signature are extracted by considering the
discontinuous break points in the stream of angular values. Then the
estimated delay shift is captured by comparison with the selected
reference signature, and the matching error can be computed as the
main feature used in the verification process. The threshold offsets
are calculated from the two types of error characteristics of the
signature verification problem, the False Rejection Rate (FRR) and
the False Acceptance Rate (FAR). The level of these two error rates
depends on the chosen decision threshold, whose value is set so as
to realize the Equal Error Rate (EER; FAR = FRR). The experimental
results show that, through a simple program employed on the Internet
to demonstrate e-Commerce services, the proposed method provides
95.39% correct verifications, 7% better than a DP-matching-based
signature-verification method. In addition, signature verification
with extracted components provides more reliable results than using
whole-signature decision making.
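The threshold selection described above (choosing the operating point where FAR = FRR) can be sketched as follows. The score sets are illustrative, not the paper's signature data, and the scan over candidate thresholds is only one simple way to locate the EER.

```python
import numpy as np

def far_frr(genuine, forgery, thr):
    """Error rates at a threshold; higher score = more similar to the
    reference signature."""
    far = np.mean(np.asarray(forgery) >= thr)   # forgeries wrongly accepted
    frr = np.mean(np.asarray(genuine) < thr)    # genuine wrongly rejected
    return far, frr

def equal_error_rate(genuine, forgery):
    """Scan every observed score as a candidate threshold and return
    (threshold, EER) at the point where |FAR - FRR| is smallest."""
    thrs = np.unique(np.concatenate([genuine, forgery]))
    best_thr, best_gap, eer = thrs[0], np.inf, 1.0
    for t in thrs:
        far, frr = far_frr(genuine, forgery, t)
        if abs(far - frr) < best_gap:
            best_thr, best_gap, eer = t, abs(far - frr), (far + frr) / 2
    return best_thr, eer
```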
Abstract: The decisions made by admission control algorithms are
based on the availability of network resources viz. bandwidth, energy,
memory buffers, etc., without degrading the Quality-of-Service (QoS)
requirement of applications that are admitted. In this paper, we
present an energy-aware admission control (EAAC) scheme which
provides admission control for flows in an ad hoc network based
on the knowledge of the present and future residual energy of the
intermediate nodes along the routing path. The aim of EAAC is to
quantify the energy that the new flow will consume so that it can
be decided whether the future residual energy of the nodes along
the routing path can satisfy the energy requirement. In other words,
this energy-aware routing admits a new flow only if no node in the
routing path runs out of energy during the transmission of packets.
The future residual energy of a node is predicted using the
Multi-layer Neural Network (MNN) model. Simulation results show that
the proposed scheme increases the network lifetime. The performance
of the MNN model is also presented.
Abstract: Searching for a tertiary substructure that geometrically
matches the 3D pattern of the binding site of a well-studied protein
provides a way to predict protein functions. In our previous work, a
web server was built to predict protein-ligand binding sites based on
automatically extracted templates. However, a drawback of such
templates is that they are prone to producing many false positive
matches. In this study, we present a sequence-order constraint to
reduce the false positive matches that arise when automatically
extracted templates are used to predict protein-ligand binding sites.
The binding site predictor comprises i) an automatically constructed
template library and ii) a local structure alignment algorithm for
querying the library. The sequence-order constraint is employed to
identify inconsistencies between the local regions of the query
protein and the templates. Experimental results reveal that the
sequence-order constraint can largely reduce the false positive
matches and is effective for template-based binding site prediction.
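One plausible reading of the sequence-order constraint is that residues matched by the local structure alignment must occur in the same sequence order in the template and in the query. The sketch below encodes that reading; it is an illustrative interpretation, not the authors' exact formulation.

```python
def satisfies_sequence_order(matches):
    """matches: list of (template_residue_index, query_residue_index)
    pairs produced by a local structure alignment.

    The constraint holds when, after ordering the pairs by template
    position, the query positions are strictly increasing, i.e. the
    matched local regions appear in the same sequence order in both
    structures."""
    ordered = sorted(matches)
    q = [qi for _, qi in ordered]
    return all(a < b for a, b in zip(q, q[1:]))
```

A match set violating this check would be discarded as a likely false positive under this interpretation.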
Abstract: The solvated electron is self-trapped (polaron) owing
to strong interaction with the quantum polarization field. If the
electron and quantum field are strongly coupled, then a collective
localized state of the field and quasi-particle is formed. In such a
formation the electron motion is rather intricate. On the one hand
the electron oscillates within a rather deep polarization potential
well and undergoes optical transitions; on the other, it moves
together with the center of inertia of the system and participates in
the thermal random walk. The problem is to separate these motions
correctly, rigorously taking into account the conservation laws. This
can be conveniently done using the Bogolyubov-Tyablikov method of
canonical transformation to collective coordinates. This
transformation removes the translational degeneracy and allows one
to develop a successive approximation algorithm for the energy and
wave function while simultaneously fulfilling the law of conservation
of the total momentum of the system. The resulting equations
determine the electron transitions and depend explicitly on the
translational velocity of the quasi-particle as a whole. The
frequency of the optical transition is calculated for the solvated
electron in ammonia, and an estimate is made for the thermally
induced spectral bandwidth.
Abstract: A Hexapod Machine Tool (HMT) is a parallel robot
mostly based on the Stewart platform. Identification of the kinematic
parameters of the HMT is an important step of the calibration
procedure. In this paper an algorithm is presented for identifying
the kinematic parameters of the HMT using an inverse kinematics
error model. Based on
this algorithm, the calibration procedure is simulated. Measurement
configurations with maximum observability are determined in the first
step of this algorithm for a robust calibration. The errors occurring in
various configurations are illustrated graphically. It has been shown
that the boundaries of the workspace should be searched for the
maximum observability of errors. The importance of using
configurations with sufficient observability in calibrating hexapod
machine tools is verified by trial calibration with two different
groups of randomly selected configurations. One group is selected to
have sufficient observability, while the other disregards the
observability criterion. Simulation results confirm the validity of the
proposed identification algorithm.
Abstract: In this paper, we evaluate the performance of some wavelet-based coding algorithms, namely 3D QT-L, 3D SPIHT and JPEG2K. In the first step we carry out an objective comparison between the three coders, 3D SPIHT, 3D QT-L and JPEG2K. For this purpose, eight MRI head scan test sets of 256 × 256 × 124 voxels have been used. Results show the superior performance of the 3D SPIHT algorithm, while 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. Compressed dataset images are transmitted over an AWGN or a Rayleigh wireless channel. Results show the superiority of JPEG2K under these two channel models; in fact, JPEG2K proves more robust with regard to coding errors. We thus conclude that error-correcting codes are necessary to protect the transmitted medical information.
Abstract: In visual servoing systems, data obtained by vision is
used for controlling robots. In this project, the simulator
previously proposed for simulating the performance of a 6R robot was
first examined in terms of software and testing, and the existing
defects in the simulator were removed. In the first version of the
simulation, the robot was directed toward the target object only by
a position-based method using two cameras in the environment. In the
new version of the software, three cameras are used simultaneously.
The camera installed as eye-in-hand on the end-effector of the robot
is used for visual servoing by a feature-based method: the target
object is recognized according to its characteristics, and the robot
is directed toward the object following an algorithm similar to the
function of human eyes. Then, the function and accuracy of the
operation of the robot are examined through the position-based
visual servoing method using the two cameras installed as
eye-to-hand in the environment. Finally, the obtained results are
tested against the ANSI-RIA R15.05-2 standard.
Abstract: Sensory nerves in the foot play an important part in the diagnosis of various neuropathy disorders, especially in diabetes mellitus. However, a detailed description of the anatomical distribution of the nerves is currently lacking. A computational model of the afferent nerves in the foot may be a useful tool for the study of diabetic neuropathy. In this study, we present the development of an anatomically-based model of various major sensory nerves of the sole and dorsal sides of the foot. In addition, we present an algorithm for generating synthetic somatosensory nerve networks in the big-toe region of a right foot model. The algorithm was based on a modified version of the Monte Carlo algorithm, with the capability of varying the intra-epidermal nerve fiber density in different regions of the foot model. Preliminary results from the combined model show the realistic anatomical structure of the major nerves as well as the smaller somatosensory nerves of the foot. The model may now be developed to investigate the functional outcomes of structural neuropathy in diabetic patients.
Abstract: The new idea of this research is the application of a new fault detection and isolation (FDI) technique for the supervision of sensor networks in a transportation system. In measurement systems, it is necessary to detect all types of faults and failures based on a predefined algorithm. Recent improvements in artificial neural network (ANN) studies have led to their use for some FDI purposes. In this paper, the application of new probabilistic neural network features for data approximation and data classification is considered for plausibility checking in temperature measurement. For this purpose, a two-phase FDI mechanism was considered for residual generation and evaluation.
Abstract: Image processing for capsule endoscopy requires large
memory and takes hours for diagnosis, since the operation time is
normally more than 8 hours. A real-time analysis algorithm for
capsule images can be clinically very useful: it can differentiate
abnormal tissue from healthy structures and provide correlation
information among the images. Bleeding is our interest in this
regard, and we propose a method of detecting frames with potential
bleeding in real time. Our detection algorithm is based on
statistical analysis and the shapes of bleeding spots. We tested our
algorithm with 30 cases of capsule endoscopy of the digestive tract.
Results were excellent: a sensitivity of 99% and a specificity of
97% were achieved in detecting image frames with bleeding spots.
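The two reported figures follow from the standard confusion-matrix definitions, which the sketch below records; the counts used in the usage example are illustrative, chosen only to reproduce the quoted rates, and are not the study's raw data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard definitions for a binary detector:
    sensitivity = TP / (TP + FN)  -- fraction of bleeding frames found,
    specificity = TN / (TN + FP)  -- fraction of clean frames kept."""
    return tp / (tp + fn), tn / (tn + fp)
```

For instance, hypothetical counts of 99 true positives, 1 false negative, 97 true negatives and 3 false positives yield exactly the reported 99% sensitivity and 97% specificity.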