Abstract: Vibration during the machining process is crucial since it affects the cutting tool, machine, and workpiece, leading to tool wear, tool breakage, and unacceptable surface roughness. This paper applies a nonparametric statistical method, the single decision tree (SDT), to identify factors affecting vibration in the machining process. Workpiece material (AISI 1045 steel, AA2024 aluminum alloy, A48-class 30 gray cast iron), cutting tool (conventional, cutting tool with holes in the toolholder, cutting tool filled with epoxy-granite), tool overhang (41-65 mm), spindle speed (630-1000 rpm), feed rate (0.05-0.075 mm/rev), and depth of cut (0.05-0.15 mm) were used as input variables, while vibration was the output parameter. It is concluded that workpiece material is the most important parameter for natural frequency, followed by cutting tool and overhang.
Abstract: In this paper, dynamic programming is used to determine the optimal management of financial resources in a company. The solution is constructed by decomposing the problem into simpler substructures. The optimal management of the company's internal capital is simulated. The tools applied in this development are based on graph theory. The software for the given problems is built using a greedy algorithm. The obtained model and program enable us to define the optimal variant of managing the company's own financial flows by using a visual diagram at each level of investment.
Abstract: This paper presents a method for designing type 2 fuzzy PID controllers in order to solve the problem of Load Frequency Control (LFC). The Harmony Search (HS) algorithm is used to tune the measurement factors and the uncertainty effect of the membership functions of Interval Type 2 Fuzzy Proportional Integral Derivative (IT2FPID) controllers in order to reduce the frequency deviation resulting from load oscillations. The simulation results show that the performance of the proposed IT2FPID LFC, in terms of error, settling time, and resistance against different load oscillations, is superior to that of PID and Type 1 Fuzzy Proportional Integral Derivative (T1FPID) controllers.
Abstract: This paper presents three optimization models, namely the New Binary Artificial Bee Colony (NBABC) algorithm, NBABC with Local Search (NBABC-LS), and NBABC with Genetic Crossover (NBABC-GC), for solving the Wind-Thermal Unit Commitment (WTUC) problem. The uncertain nature of the wind power is incorporated using the Weibull probability density function, which is used to calculate the overestimation and underestimation costs associated with the wind power fluctuation. The NBABC algorithm utilizes a mechanism based on the dissimilarity measure between binary strings for generating the binary solutions of the WTUC problem. In the NBABC algorithm, an intelligent scout bee phase is proposed that replaces the abandoned solution with the global best solution. The local search operator exploits the neighboring region of the current solutions, whereas the integration of genetic crossover with the NBABC algorithm increases the diversity in the search space and thus avoids the local trapping encountered with the NBABC algorithm. These models are then used to decide the units' on/off status, whereas the lambda iteration method is used to dispatch the hourly load demand among the committed units. The effectiveness of the proposed models is validated on an IEEE 10-unit thermal system combined with a wind farm over a planning period of 24 hours.
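The lambda iteration dispatch step mentioned above can be sketched as follows; this is a generic textbook formulation, and the quadratic cost coefficients and unit limits below are illustrative, not taken from the paper:

```python
# Sketch of lambda iteration economic dispatch. Each unit has a quadratic
# fuel cost a + b*P + c*P^2, so its marginal cost is b + 2*c*P; at the
# optimum all units not at a limit share the same marginal cost lambda,
# which we find by bisection until total output meets the demand.

def lambda_dispatch(units, demand, tol=1e-6):
    """units: list of (b, c, p_min, p_max) tuples; returns per-unit outputs."""
    lo = min(b for b, _, _, _ in units)
    hi = max(b + 2 * c * pmax for b, c, _, pmax in units)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # Output of each unit at marginal cost lam, clamped to its limits.
        p = [min(max((lam - b) / (2 * c), pmin), pmax)
             for b, c, pmin, pmax in units]
        if sum(p) < demand:
            lo = lam          # not enough generation -> raise lambda
        else:
            hi = lam
    return p

# Two illustrative units sharing a 300 MW demand.
outputs = lambda_dispatch([(2.0, 0.01, 50, 200), (3.0, 0.02, 50, 200)], 300)
```

Here the first unit hits its 200 MW upper limit, and the remaining 100 MW falls to the second unit.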
Abstract: Optimization is necessary for finding appropriate solutions to a range of real-life problems. In particular, genetic (or more generally, evolutionary) algorithms have proved very useful in solving many problems for which analytical solutions are not available. In this paper, we present an optimization algorithm called Dynamic Schema with Dissimilarity and Similarity of Chromosomes (DSDSC), which is a variant of the classical genetic algorithm. This approach constructs new chromosomes from a schema and pairs of existing ones by exploring their dissimilarities and similarities. To show the effectiveness of the algorithm, it is tested and compared with the classical GA on 15 two-dimensional optimization problems taken from the literature. We have found that, in most cases, our method is better than the classical genetic algorithm.
Abstract: A two-wheel inverted pendulum (TWIP) vehicle is built with two hub DC motors for motion control evaluation. An Arduino Nano microprocessor is chosen as the control kernel for this electric test plant. Accelerometer and gyroscope sensors are built in to measure the tilt angle and angular velocity of the inverted pendulum vehicle. Since the TWIP has a significant hub-motor dead zone and nonlinear system dynamics, the vehicle is difficult to control with a traditional model-based controller. The intelligent model-free fuzzy sliding mode controller (FSMC) is employed as the main control algorithm. Intelligent controllers are then designed for TWIP balance control and two-wheel synchronization control.
Abstract: Quantitative measurement of myocardium perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess that accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semi- conductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections during which time the liver accumulation dominates (0.5~2.5 minutes SPECT image-5~10 minutes SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5~10 minutes SPECT image-liver-only image). The time subtraction of liver was possible in both a phantom and the clinical study. The visualization of the inferior myocardium was improved. In past reports, higher accumulation in the myocardium due to the overlap of the liver is un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetorofosmin myocardial SPECT image is considerably improved.
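The arithmetic of the time subtraction step can be illustrated on toy arrays; the paper operates on reconstructed 3-D SPECT volumes, and the pixel values below are synthetic, chosen only to show the sign conventions:

```python
# Minimal sketch of time subtraction: the early frame is dominated by liver
# uptake, the late frame shows liver plus myocardium. Subtracting the late
# frame from the early frame (clipped at zero) approximates a liver-only
# image, which is then removed from the late frame.
import numpy as np

def time_subtract(early, late):
    """Approximate a liver-only image and remove it from the late image."""
    liver_only = np.clip(early - late, 0, None)   # early minus late ~ liver
    corrected = np.clip(late - liver_only, 0, None)
    return corrected

early = np.array([[10.0, 0.0], [0.0, 1.0]])   # liver pixel bright early
late  = np.array([[ 4.0, 0.0], [0.0, 5.0]])   # myocardium appears late
corrected = time_subtract(early, late)
```

In this toy case the liver pixel is suppressed to zero while the myocardial pixel keeps its late-frame value.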
Abstract: In this study, a black-box model of the coupled-tank system is obtained using fuzzy sets. The derived model is tested via an adaptive neuro-fuzzy inference system (ANFIS). In order to achieve better control performance, the parameters of three different controller types, the classical proportional-integral-derivative (PID) controller, the fuzzy PID controller, and the function tuner method, are tuned by an evolutionary computation method, the genetic algorithm. All tuned controllers are applied to the fuzzy model of the coupled-tank experimental setup and analyzed under different reference input values. According to the results, the function tuner method demonstrates more robust control performance and guarantees closed-loop stability.
Abstract: Self-driving vehicles require a high level of situational awareness in order to maneuver safely when driving in real-world conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in dynamic environments. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or a priori knowledge about the input data are needed, and no initializations are required. Additionally, the proposed method works directly on the three-dimensional data generated by the LiDAR, without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm based on a radially bounded nearest neighbor (RBNN) search is applied. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
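The RBNN clustering idea can be sketched as a flood fill over a radius graph; this pure-Python version is illustrative only (a real-time implementation would use a KD-tree for neighbor queries, and the radius value below is an arbitrary choice, not the paper's):

```python
# Rough sketch of radially bounded nearest neighbor (RBNN) clustering:
# two points belong to the same cluster if a chain of neighbors, each
# within `radius` of the next, connects them.
import math

def rbnn_cluster(points, radius):
    """Assign a cluster label to every 2-D/3-D point."""
    labels = [None] * len(points)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] is not None:
            continue
        stack = [seed]
        labels[seed] = cluster
        while stack:                      # flood fill over the radius graph
            i = stack.pop()
            for j in range(len(points)):
                if labels[j] is None and math.dist(points[i], points[j]) <= radius:
                    labels[j] = cluster
                    stack.append(j)
        cluster += 1
    return labels

labels = rbnn_cluster([(0, 0), (0.4, 0), (5, 5), (5.3, 5)], radius=1.0)
# the first two points form one cluster, the last two another
```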
Abstract: The ubiquity of natural disasters during the last few decades has raised serious questions about the prediction of such events and human safety. Every disaster, regardless of its magnitude, has a precursor that manifests as a disruption of some environmental parameter such as temperature, humidity, pressure, or vibration. In order to anticipate and monitor those changes, in this paper we propose an overall system for disaster prediction and monitoring based on a wireless sensor network (WSN). Furthermore, we introduce a modified and simplified WSN routing protocol built on top of the Trickle routing algorithm. The routing algorithm was deployed using the Bluetooth Low Energy protocol in order to achieve low power consumption. Performance of the WSN was analyzed using a real-life system implementation, and estimates of WSN parameters such as battery lifetime, network size, and packet delay were determined. Based on the performance of the WSN, the proposed system can be utilized for disaster monitoring and prediction due to its low power profile and mesh routing feature.
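The standard Trickle timer (RFC 6206) that the modified protocol builds on can be sketched as below; the interval and redundancy values are illustrative defaults, not the parameters used in the paper:

```python
# Simplified model of the Trickle timer: the interval doubles from i_min to
# i_max while the network is consistent, a node suppresses its own broadcast
# if it already heard k consistent messages this interval, and any
# inconsistency resets the interval to i_min to spread news quickly.
import random

class TrickleTimer:
    def __init__(self, i_min=1.0, i_max=64.0, k=2):
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.reset()

    def reset(self):
        """Inconsistency heard: shrink the interval to its minimum."""
        self.interval = self.i_min
        self._new_interval()

    def _new_interval(self):
        self.counter = 0
        # Transmission time chosen in the second half of the interval.
        self.t = random.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        self.counter += 1

    def should_transmit(self):
        """Suppress the broadcast if enough neighbors already sent it."""
        return self.counter < self.k

    def expire(self):
        """Interval ended: double it (capped at i_max) and start anew."""
        self.interval = min(self.interval * 2, self.i_max)
        self._new_interval()
```

This suppression mechanism is what keeps the radio duty cycle, and hence power consumption, low in a dense mesh.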
Abstract: About 1% of the world's population suffers from the hidden disability known as epilepsy, and most developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched using artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron with variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment that suits the needs of a single epileptic patient in his or her daily activities to predict the occurrence of impending tonic-clonic seizures.
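The shift-and-add operation that a bit-serial multiplier performs, one multiplier bit per clock cycle, can be modeled in software; this Python model is our illustration of the general technique, not the paper's HDL design:

```python
# Software model of bit-serial (shift-and-add) multiplication: each "clock
# cycle" examines one bit of the multiplier b and, if it is set, adds a
# correspondingly shifted copy of a to the accumulator. Hardware needs only
# a shifter, an adder, and an accumulator register for this.
def bit_serial_multiply(a, b, bits=8):
    """Multiply two unsigned integers, one bit of b per cycle."""
    acc = 0
    for cycle in range(bits):
        if (b >> cycle) & 1:          # current bit of the multiplier
            acc += a << cycle         # add the shifted multiplicand
    return acc

assert bit_serial_multiply(13, 11) == 143
```

Trading one partial product per cycle for tiny area is what lets the multiplier fit in 16 logic elements.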
Abstract: In order to analyze the status of a diesel engine and conduct fault prediction, a new prediction model based on a grey system is proposed in this paper, which takes advantage of a neural network and a genetic algorithm. The proposed GBPGA prediction model builds on the GM(1,5) model and uses a neural network, optimized by a genetic algorithm, to construct the error compensator. We verify the proposed model on diesel fault simulation data, and the experimental results show that GBPGA has the potential to perform fault prediction for diesel engines.
Abstract: Today, insurers may use the yield curve as an indicator to evaluate the profit or performance of their portfolios; therefore, they model it with a class of models that can fit and forecast the future term structure of interest rates. This class of models is the Nelson-Siegel-Svensson (NSS) model. Unfortunately, many authors have reported considerable difficulties when calibrating the model, because the optimization problem is not convex and has multiple local optima. In this context, we implement a hybrid Particle Swarm Optimization and Nelder-Mead algorithm in order to minimize, by the least squares method, the difference between the zero-coupon curve and the NSS curve.
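The NSS yield formula and the least-squares objective being minimized can be written down directly; the hybrid PSO/Nelder-Mead search layer is omitted here, and the sample maturities and parameter values are made up for illustration:

```python
# The Nelson-Siegel-Svensson zero-coupon yield curve and the sum-of-squares
# calibration objective. The four betas weight the level, slope, and two
# hump factors; tau1 and tau2 set where the humps peak.
import numpy as np

def nss_yield(t, b0, b1, b2, b3, tau1, tau2):
    """NSS zero-coupon yield at maturity t (years)."""
    x1, x2 = t / tau1, t / tau2
    f1 = (1 - np.exp(-x1)) / x1                # slope loading
    f2 = f1 - np.exp(-x1)                      # first hump loading
    f3 = (1 - np.exp(-x2)) / x2 - np.exp(-x2)  # second hump loading
    return b0 + b1 * f1 + b2 * f2 + b3 * f3

def sse(params, maturities, market_yields):
    """Objective: squared gap between the NSS curve and market data."""
    return float(np.sum((nss_yield(maturities, *params) - market_yields) ** 2))

t = np.array([1.0, 2.0, 5.0, 10.0])
y = nss_yield(t, 0.04, -0.02, 0.01, 0.005, 1.5, 8.0)  # synthetic "market" data
```

Because `sse` has multiple local optima in (tau1, tau2), a global heuristic such as PSO is commonly used to seed a local simplex search like Nelder-Mead.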
Abstract: This paper presents the trajectory tracking control of a spatial redundant hybrid manipulator. This manipulator consists of two parallel manipulators, each of which is a variable geometry truss (VGT) module. In fact, each VGT module with 3 degrees of freedom (DOF) is a planar parallel manipulator, and the operational planes of these VGT modules are arranged to be orthogonal to each other. The manipulator also contains a twist motion part attached to the top of the second VGT module to supply the missing orientation of the end-effector. These three modules constitute a 7-DOF hybrid (parallel-parallel) redundant spatial manipulator. The forward kinematics equations of this manipulator are obtained; then, according to these equations, the inverse kinematics is solved based on an optimization with joint limit avoidance. The dynamic equations are formed using the virtual work method. In order to test the performance of the redundant manipulator and the controllers presented, two different desired trajectories are followed using the computed force control method and a switching control method. The switching control method combines the computed force control method with a genetic algorithm; in it, the genetic algorithm is used only for fine tuning in the compensation of the trajectory tracking errors.
Abstract: Edge detection is one of the most important tasks in image processing. Medical image edge detection plays an important role in the segmentation and object recognition of human organs; it refers to the process of identifying and locating sharp discontinuities in medical images. In this paper, a neuro-fuzzy based approach is introduced to detect edges in noisy medical images. The approach uses a desired number of neuro-fuzzy subdetectors together with a postprocessor for detecting the edges of medical images. The internal parameters of the approach are optimized by training on patterns generated from artificial images. The performance of the approach is evaluated on different medical images and compared with popular edge detection algorithms. The experimental results make clear that this approach performs better than competing edge detection algorithms on noisy medical images.
Abstract: In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) under two interference models: the graph model and the more realistic physical interference model known as the Signal-to-Interference-plus-Noise Ratio (SINR) model. The main issue is to compute schedules with the minimum number of timeslots, that is, minimum-latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, ours is the first result for the data collection problem with the bounded-sized message model in both interference models.
Abstract: Real-time shadow generation in virtual environments and Augmented Reality (AR) has been a hot topic for the last three decades. The heavy computation that shadow generation requires in AR calls for a fast algorithm that can be implemented in any real-time renderer. In this paper, a silhouette detection algorithm is presented to generate shadows for AR systems. The Δ+ algorithm, based on extending the edges of occluders, is presented to recognize which edges are silhouettes for real-time rendering. A careful comparison between the proposed algorithm and current silhouette detection algorithms is made to show the reduction in computation achieved by the presented algorithm. The algorithm is tested in both virtual environments and AR systems. We believe that this algorithm has the potential to be a fundamental algorithm for shadow generation in all complex environments.
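The classical per-edge silhouette test that such algorithms refine can be sketched as follows; this is the standard textbook criterion, not the paper's Δ+ procedure:

```python
# An edge shared by two faces is a silhouette edge when, seen from the
# light, one face points toward the light and the other points away, i.e.
# the dot products of the two face normals with the light direction have
# opposite signs.
def is_silhouette(normal_a, normal_b, light_dir):
    """True if the two adjacent faces straddle the light direction."""
    dot_a = sum(n * l for n, l in zip(normal_a, light_dir))
    dot_b = sum(n * l for n, l in zip(normal_b, light_dir))
    return dot_a * dot_b < 0  # opposite signs -> front/back facing pair

# Edge between an upward-facing and a downward-facing triangle, lit from above:
print(is_silhouette((0, 0, 1), (0, 0, -1), (0, 0, 1)))  # True
```

Naively this test visits every edge per frame, which is why faster silhouette-detection schemes are worthwhile for real-time AR.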
Abstract: Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district; each speaker has 10 sentences, two of which are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR); testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method produces ~93% accuracy at 0 dB SNR.
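The matching pursuit step of the atomic decomposition can be sketched on a toy dictionary; the paper uses T-F atoms learned by a sparse autoencoder, whereas the unit-norm dictionary below is purely illustrative:

```python
# Minimal matching pursuit: at each step pick the dictionary atom most
# correlated with the current residual, record its weight, and subtract its
# contribution. The surviving (index, weight) pairs form the sparse
# "weight space" representation of the signal.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """dictionary: rows of unit L2 norm, shape (n_total, signal_len)."""
    residual = signal.astype(float).copy()
    weights = np.zeros(len(dictionary))
    for _ in range(n_atoms):
        corr = dictionary @ residual          # correlation with each atom
        best = int(np.argmax(np.abs(corr)))
        weights[best] += corr[best]           # accumulate the atom's weight
        residual -= corr[best] * dictionary[best]
    return weights, residual
```

With an orthonormal dictionary the residual vanishes once every active atom has been selected; with learned, overcomplete dictionaries it only shrinks, which is what gives the denoising effect.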
Abstract: In recent years, the telecommunications sector has been changing and developing continuously in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector, since it causes substantial business loss, and many companies conduct various studies in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts needed to predict customer churn. The framework serves as an optional cost-based pre-processing stage that removes redundant features for churn management. In addition, this cost-based feature selection algorithm is applied in a telecommunication company in Turkey, and the results obtained with the algorithm are presented.