Abstract: Electrophysiological signals were recorded from primary cultures of dissociated rat cortical neurons coupled to Micro-Electrode Arrays (MEAs). Neuronal discharge patterns may change under varying physiological and pathological conditions. For this reason, we developed a new burst detection method able to identify bursts with peculiar features in different experimental conditions (i.e. spontaneous activity and under the effect of specific drugs). The main feature of our algorithm (Burst On Hurst), based on the self-similarity (fractal) property of the recorded signal, is its independence of the chosen spike detection method, since it works directly on the raw data.
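The abstract above does not give the Burst On Hurst algorithm itself; as background, the following is a minimal sketch of the classical rescaled-range (R/S) estimate of the Hurst exponent, the self-similarity measure the method's name refers to. The window sizes and fitting procedure here are illustrative choices, not the authors' parameters.

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent of a 1-D signal by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())   # cumulative deviation from the window mean
            r = z.max() - z.min()         # range of the cumulative deviation
            s = w.std()                   # standard deviation of the window
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # the slope of log(R/S) versus log(n) estimates the Hurst exponent
    h, _ = np.polyfit(log_n, log_rs, 1)
    return h
```

Uncorrelated noise yields an estimate near 0.5, while a strongly persistent signal (e.g. a random walk) yields a value close to 1; a burst detector can exploit such departures from 0.5 as a signature of structured activity.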
Abstract: Many supervised induction algorithms require discrete data, even though real data often come in both discrete and continuous formats. Quality discretization of continuous attributes is an important problem that affects the speed, accuracy and understandability of induction models. Usually, discretization and other statistical processes are applied to subsets of the population, as the entire population is practically inaccessible. For this reason, we argue that a discretization performed on a sample of the population is only an estimate for the entire population. Most existing discretization methods partition the attribute range into two or more intervals using one or more cut points. In this paper, we introduce a technique that uses resampling (such as the bootstrap) to generate a set of candidate discretization points, thereby improving discretization quality through a better estimate with respect to the entire population. The goal of this paper is thus to observe whether resampling can lead to better discretization points, which opens up a new paradigm for the construction of soft decision trees.
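A minimal sketch of the idea described above: an entropy-minimizing binary cut is recomputed on bootstrap resamples, producing a distribution of candidate cut points rather than a single one. The entropy criterion and resample count are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(values, labels):
    """Single entropy-minimizing binary cut point for one continuous attribute."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    n = len(v)
    best, best_score = None, np.inf
    for i in range(1, n):
        if v[i] == v[i - 1]:
            continue
        cut = (v[i] + v[i - 1]) / 2.0
        # class-weighted entropy of the two resulting intervals
        score = (i * entropy(y[:i]) + (n - i) * entropy(y[i:])) / n
        if score < best_score:
            best, best_score = cut, score
    return best

def bootstrap_cuts(values, labels, n_boot=200, seed=0):
    """Candidate cut points from bootstrap resamples; their spread estimates how
    stable the sample cut is as an estimate of the population cut."""
    rng = np.random.default_rng(seed)
    cuts = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(values), len(values))
        c = best_cut(values[idx], labels[idx])
        if c is not None:
            cuts.append(c)
    return np.array(cuts)
```

An aggregate such as the median of the candidates can then serve as a more robust cut point, and the whole empirical distribution supports the soft (probabilistic) splits the abstract alludes to.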
Abstract: Chiu's method, which generates a Takagi-Sugeno Fuzzy Inference System (FIS), is a fuzzy rule extraction method. The rule outputs are linear functions of the inputs; moreover, these rules are not explicit for the expert. In this paper, we develop a method that generates a Mamdani FIS, in which the rule outputs are fuzzy. The method proceeds in two steps: first, it uses the subtractive clustering principle to estimate both the number of clusters and the initial locations of the cluster centers; each obtained cluster corresponds to a Mamdani fuzzy rule. Then, it optimizes the fuzzy model parameters by applying a genetic algorithm. The method is illustrated on a traffic network management application. We also suggest a Mamdani fuzzy rule generation method for the case where the expert wants to classify the output variables into predefined fuzzy classes.
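As an illustration of the first step above, here is a sketch of Chiu-style subtractive clustering: every data point is a candidate cluster center, its potential is a density measure, and the potential is reduced around each accepted center. The radii and stopping ratio follow common textbook defaults and are assumptions, not the paper's tuned values.

```python
import numpy as np

def subtractive_clustering(X, ra=1.0, stop_ratio=0.15, max_centers=10):
    """Estimate cluster count and initial centers by subtractive clustering."""
    X = np.asarray(X, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2                         # rb = 1.5 * ra (Chiu's suggestion)
    potential = np.exp(-alpha * d2).sum(axis=1)          # density potential of each point
    centers = []
    first_peak = potential.max()
    while len(centers) < max_centers:
        k = int(np.argmax(potential))
        if potential[k] < stop_ratio * first_peak:
            break                                        # remaining density is negligible
        centers.append(X[k])
        # subtract the accepted center's influence so nearby points lose potential
        potential = potential - potential[k] * np.exp(-beta * d2[:, k])
    return np.array(centers)
```

Each returned center would then seed one Mamdani rule, whose membership functions a genetic algorithm could subsequently tune.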
Abstract: This paper presents parametric probability density models for call holding times (CHTs) in an emergency call center, based on actual data collected over more than a week in the public Emergency Information Network (EIN) in Mongolia. When the chosen candidates from the Gamma distribution family are fitted to the call holding time data, the whole area of the empirical CHT histogram is underestimated, due to spikes of higher probability and long tails of lower probability in the histogram. Therefore, we provide a parametric model based on a mixture of lognormal distributions, with explicit analytical expressions, for modeling the CHTs of PSNs. Finally, we show that the CHTs for PSNs are fitted reasonably well by a mixture of lognormal distributions via simulation of the expectation-maximization (EM) algorithm. This result is significant because it provides, in explicit form, a useful mathematical tool: a mixture of lognormal distributions.
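A mixture of lognormals in the original time scale is a Gaussian mixture in the log scale, so the EM fit mentioned above can be sketched as a plain 1-D Gaussian-mixture EM on log-transformed holding times. The quantile-based initialization and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_lognormal_mixture(t, k=2, iters=200):
    """EM for a k-component lognormal mixture: fit a Gaussian mixture to log(t)."""
    x = np.log(np.asarray(t, dtype=float))
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))   # spread initial means over the data
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)      # numerical stabilization
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, np.sqrt(var)
```

The returned `mu` and `sigma` are the log-scale parameters of each lognormal component, which is exactly the explicit form the abstract advertises.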
Abstract: The shortest-path problem is a classical problem in graph theory and is applied in many fields. Shortest-path problems can be divided into two kinds: the single-source shortest path and the all-pairs shortest path. This article mainly addresses the all-pairs shortest-path problem and presents a new parallel algorithm for it based on Dijkstra's algorithm. Finally, the parallel algorithm is implemented using C# multithreading.
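The basic parallelization strategy described above can be sketched as follows: the all-pairs problem decomposes into one independent single-source Dijkstra run per vertex, each dispatched to a worker thread (here in Python with a thread pool, mirroring the multithreaded C# design; the specific decomposition in the paper may differ).

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def dijkstra(adj, src):
    """Single-source shortest distances on an adjacency list {u: [(v, w), ...]}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def all_pairs_parallel(adj, workers=4):
    """All-pairs shortest paths: one independent Dijkstra per source vertex."""
    nodes = list(adj)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda s: (s, dijkstra(adj, s)), nodes)
        return dict(results)
```

Because the per-source runs share no mutable state, they parallelize trivially; in a runtime without a global interpreter lock (such as C#), the speedup approaches the worker count.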
Abstract: Space Vector Pulse Width Modulation (SVPWM) is one of the most widely used techniques to generate sinusoidal voltages and currents, owing to its simplicity and efficiency and its low harmonic distortion. The algorithm is used especially in power electronic applications. This paper describes the simulation of SVPWM and SPWM algorithms in the MATLAB/Simulink environment. It also implements a closed-loop three-phase DC-AC converter, controlling its output voltage amplitude and frequency using MATLAB. A comparison between SVPWM and SPWM results is also given.
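For readers unfamiliar with SVPWM, the core per-period computation can be sketched as below: the dwell times of the two active vectors bounding the current 60-degree sector, plus the zero-vector time. This follows one common textbook convention (reference magnitude as a phase voltage amplitude); the paper's Simulink model may use a different normalization.

```python
import math

def svpwm_times(v_ref, theta, v_dc, ts):
    """Dwell times (t1, t2, t0) of the two active vectors and the zero vector
    for one switching period ts; theta is the reference-vector angle."""
    theta_s = theta % (math.pi / 3)            # angle measured inside the sector
    k = math.sqrt(3) * ts * v_ref / v_dc
    t1 = k * math.sin(math.pi / 3 - theta_s)   # adjacent active vector
    t2 = k * math.sin(theta_s)                 # next active vector
    t0 = ts - t1 - t2                          # remaining time on zero vectors
    return t1, t2, t0
```

At the linear-modulation limit `v_ref = v_dc / sqrt(3)` and mid-sector angle, the two active times are equal and the zero-vector time vanishes, which is the well-known boundary of the SVPWM linear range.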
Abstract: This paper introduces an automatic voice classification system for the diagnosis of individual constitution based on Sasang Constitutional Medicine (SCM) in Traditional Korean Medicine (TKM). To develop this algorithm, we used the voices of 309 female speakers and extracted a total of 134 speech features from voice data consisting of five sustained vowels and one sentence. The classification system, based on a rule-based algorithm derived from a nonparametric statistical method, produces three types of decisions: reserved, positive and negative. In conclusion, 71.5% of the voice data were diagnosed by this system, of which 47.7% were correct positive decisions and 69.7% were correct negative decisions.
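The three-decision structure described above can be sketched generically: thresholds taken nonparametrically from quantiles of a reference sample, with an in-between "reserved" band where the rule abstains. The quantile levels here are hypothetical; the paper's actual rules operate on its 134 speech features.

```python
def percentile_thresholds(reference, low_q=0.25, high_q=0.75):
    """Nonparametric decision thresholds taken from quantiles of a reference sample."""
    ref = sorted(reference)
    n = len(ref)
    lo = ref[int(low_q * (n - 1))]
    hi = ref[int(high_q * (n - 1))]
    return lo, hi

def three_way_decision(value, lower, upper):
    """Three decision types: 'positive' above the upper threshold,
    'negative' below the lower one, and 'reserved' in between."""
    if value >= upper:
        return "positive"
    if value <= lower:
        return "negative"
    return "reserved"
```

Abstaining on the reserved band is what lets such a system report high precision on the cases it does decide, at the cost of diagnosing only a fraction of the data (71.5% in the abstract).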
Abstract: The rapid advance of communication technology is evolving the network environment into the broadband convergence network. Likewise, IT services operated in individual networks are also being quickly converged in the broadband convergence network environment. VoIP and IPTV are two examples of such new services. Efforts are being made to develop the video phone service, an advanced form of the voice-oriented VoIP service. However, the new IT services will be subject to stability and reliability vulnerabilities if the relevant security issues are not addressed while the existing IT services, currently operated in individual networks, are converged within the wider broadband network environment. To resolve such problems, this paper attempts to analyze the possible threats and identify the necessary security measures before the deployment of the new IT services. Furthermore, it measures the quality impact of applying encryption algorithms, in order to identify an appropriate algorithm and thereby present security technology that has no negative impact on the quality of the video phone service.
Abstract: Dealing with hundreds of features in character recognition systems is not unusual. This large number of features leads to an increased computational workload in the recognition process. Many methods try to remove unnecessary or redundant features and reduce feature dimensionality. Moreover, because of the characteristics of Farsi script, algorithms developed for other languages cannot be applied to Farsi directly. In this paper, several methods for feature subset selection using genetic algorithms are applied to a Farsi optical character recognition (OCR) system. Experimental results show that applying genetic algorithms (GA) to feature subset selection in a Farsi OCR yields lower computational complexity and an enhanced recognition rate.
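A minimal sketch of GA-based feature subset selection as described above: chromosomes are bit masks over the features, evolved with tournament selection, one-point crossover, bit-flip mutation and elitism. The fitness function is supplied by the caller (in an OCR it would wrap recognizer accuracy); the toy fitness in the usage below is a hypothetical stand-in.

```python
import random

def ga_feature_select(n_features, fitness, pop_size=30, gens=60, seed=0):
    """Genetic algorithm over bit-mask chromosomes; bit i marks feature i as selected."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    best = max(pop, key=fitness)

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        new_pop = [best[:]]                      # elitism: keep the best mask so far
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_features)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            for i in range(n_features):          # bit-flip mutation
                if rng.random() < 1.0 / n_features:
                    child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best[:]
    return best
```

Penalizing the mask size inside the fitness function is what drives the search toward small subsets, giving the reduced computational load the abstract reports.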
Abstract: Two algorithms are proposed to reduce the storage requirements for mammogram images. The input image goes through a shrinking process that converts the 16-bit images to 8 bits using a pixel-depth conversion algorithm, followed by an enhancement process. The performance of the algorithms is evaluated objectively and subjectively. A 50% reduction in size is obtained with no loss of significant data in the breast region.
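One plausible form of the pixel-depth conversion step is a percentile-windowed linear rescale from 16 to 8 bits, sketched below; the percentile cut-offs are an assumption, not the paper's specific mapping.

```python
import numpy as np

def to_8bit(img16, p_low=1.0, p_high=99.0):
    """Map a 16-bit image to 8 bits by linear scaling between two percentiles;
    clipping the extremes preserves contrast in the informative intensity range."""
    img = img16.astype(np.float64)
    lo, hi = np.percentile(img, [p_low, p_high])
    if hi <= lo:
        return np.zeros(img.shape, dtype=np.uint8)   # degenerate (flat) image
    scaled = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)
```

Halving the bit depth is exactly the 50% storage reduction the abstract reports; the windowing decides which intensity range survives the compression.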
Abstract: In this paper, the Tabu search algorithm is used to solve a transportation problem which consists of determining the shortest routes, with appropriate vehicle capacity, to facilitate the travel of students attending the University of Mauritius. The aim of this work is to minimize the total distance travelled by the vehicles in serving all the customers. An initial solution is obtained by the TOUR algorithm, which constructs a giant tour containing all the customers and partitions it optimally so as to produce a set of feasible routes. The Tabu search algorithm then makes use of a search procedure, a swapping procedure, and intensification and diversification mechanisms to find the best set of feasible routes.
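The swapping and tabu mechanics above can be sketched on a single tour: the best non-tabu position swap is taken even when it worsens the tour, and recent swaps stay tabu for a few iterations so the search can escape local optima. This is a toy single-route version, not the paper's capacitated multi-route procedure.

```python
import math
import random

def tour_length(points, tour):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tabu_search_tour(points, iters=300, tabu_len=7, seed=0):
    """Tabu search with a 2-swap neighborhood on a random initial tour."""
    rng = random.Random(seed)
    n = len(points)
    current = list(range(n))
    rng.shuffle(current)
    best = current[:]
    tabu = []                                   # recently used swap moves
    for _ in range(iters):
        best_move, best_cost = None, float("inf")
        for i in range(n - 1):
            for j in range(i + 1, n):
                if (i, j) in tabu:
                    continue
                cand = current[:]
                cand[i], cand[j] = cand[j], cand[i]
                cost = tour_length(points, cand)
                if cost < best_cost:
                    best_move, best_cost = (i, j), cost
        i, j = best_move
        current[i], current[j] = current[j], current[i]   # accept best non-tabu swap
        tabu.append((i, j))
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if best_cost < tour_length(points, best):
            best = current[:]                   # track the best tour seen overall
    return best, tour_length(points, best)
```

Intensification and diversification, as in the abstract, would be layered on top of this core loop, e.g. by restarting around the best solution or lengthening the tabu list.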
Abstract: Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to make accurate recommendations. This information includes user context, such as location, time and interest, for the personalization of mobile users. Information about location and time is easy to collect, because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be collected easily, because user interest cannot be captured automatically without the user's approval. User interest is usually represented as a need. In this study, we classify needs into two types following prior research, and investigate the usefulness of data mining techniques for classifying user need type in recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models in classifying user need type. We also perform McNemar's test to examine the statistical significance of the differences in classification results; the test confirms that CHAID performs better than the other models with statistical significance.
Abstract: Inter-symbol interference (ISI), if not compensated, may cause severe errors at the receiver and make signal detection difficult. An adaptive equalizer employing the Recursive Least Squares (RLS) algorithm can be a good compensation for the ISI problem. In this paper, the performance of a communication link in the presence of Least Mean Square (LMS) and Recursive Least Squares equalizer algorithms is analyzed. A model of a communication system with quadrature amplitude modulation and a Rician fading channel is implemented using the MATLAB communication blockset. The bit error rate and number of errors are evaluated for the RLS and LMS equalizer algorithms as the signal-to-noise ratio (SNR) and the fading-component gain of the Rician fading channel vary.
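The LMS half of the comparison above can be sketched in a few lines: an adaptive FIR equalizer whose taps move along the negative gradient of the instantaneous squared error against a delayed training symbol. The tap count, delay and step size below, and the simple two-tap ISI channel in the usage, are illustrative assumptions (the paper uses QAM over a Rician channel in Simulink).

```python
import numpy as np

def lms_equalizer(received, training, n_taps=7, delay=3, mu=0.01):
    """Train an adaptive FIR equalizer with the LMS update rule."""
    w = np.zeros(n_taps)
    errors = np.zeros(len(training))
    for n in range(n_taps, len(training)):
        x = received[n - n_taps:n][::-1]   # regressor, most recent sample first
        y = w @ x                          # equalizer output
        e = training[n - delay] - y        # error versus delayed training symbol
        w += mu * e * x                    # stochastic-gradient tap update
        errors[n] = e
    return w, errors
```

An RLS equalizer would replace the scalar step `mu` with a recursively updated inverse correlation matrix, converging faster at higher per-step cost, which is the trade-off the paper evaluates.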
Abstract: In this paper, a novel algorithm is proposed that integrates fuzzy hierarchy generation and rule discovery for the automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A frequency matrix (Freq) is introduced to summarize the large database; it helps minimize the number of database accesses and supports the identification and removal of irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.
Abstract: This paper presents a vocoder that obtains high-quality synthetic speech at 600 bps. To reduce the bit rate, the algorithm is based on a sinusoidally excited linear prediction model that extracts a few coding parameters; three consecutive frames are grouped into a superframe, and joint vector quantization is used to obtain high coding efficiency. The inter-frame redundancy is exploited with distinct quantization schemes for the different unvoiced/voiced frame combinations in the superframe. Experimental results show that the quality of the proposed coder is better than that of 2.4 kbps LPC10e and approximately the same as that of 2.4 kbps MELP, with high robustness.
Abstract: Given constraints on data availability, for the study of power system stability it is adequate to model the synchronous generator with the field circuit and one equivalent damper on the q-axis, known as model 1.1. This paper presents a systematic procedure for modelling and simulating a single-machine infinite-bus power system installed with a thyristor controlled series compensator (TCSC), where the synchronous generator is represented by model 1.1, so that the impact of the TCSC on power system stability can be more reasonably evaluated. The model of the example power system is developed using MATLAB/SIMULINK; it can be used for teaching power system stability phenomena, and also for research, especially for developing generator controllers using advanced technologies. Further, the parameters of the TCSC controller are optimized using a genetic algorithm. Non-linear simulation results are presented to validate the effectiveness of the proposed approach.
Abstract: This paper presents the identification of an impact force acting on a simply supported beam. Force identification is an inverse problem in which the measured response of the structure is used to determine the applied force. The identification problem is formulated as an optimization problem, and a genetic algorithm is used to solve it. The objective function is computed from the difference between the analytical and measured responses, and the decision variables are the location and magnitude of the applied force. Simulation results show the effectiveness of the approach and its robustness with respect to measurement noise and sensor location.
Abstract: The batch nature of standard kernel principal component analysis (KPCA) limits it in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigendecomposition of the kernel covariance matrix and captures the KPC variation caused by new data. The proposed method not only alleviates the sub-optimality of KPCA for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in both computational speed and approximation accuracy.
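The recursive idea above can be illustrated in the linear case: the (co)variance statistics are updated sample by sample, so memory and per-step cost stay constant as the stream grows, and the principal components are read off the maintained covariance. This is a linear-PCA analogue for intuition only; the paper applies the recursion to the kernel covariance.

```python
import numpy as np

class RecursivePCA:
    """Streaming PCA sketch: covariance updated recursively, one sample at a time."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov = np.zeros((dim, dim))

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # rank-one recursive covariance update (Welford-style):
        # cov_n = cov_{n-1} + (outer(d_old, d_new) - cov_{n-1}) / n
        self.cov += (np.outer(delta, x - self.mean) - self.cov) / self.n

    def components(self):
        vals, vecs = np.linalg.eigh(self.cov)
        order = np.argsort(vals)[::-1]           # descending eigenvalues
        return vals[order], vecs[:, order]
```

A forgetting factor in place of the `1/n` weight would let the estimate track non-stationary data, which is the adaptivity the abstract emphasizes.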
Abstract: Collateralized Debt Obligations (CDOs) are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, optimizing the cash flows associated with synthetic CDOs remains an enthralling challenge. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed with this model for different scenarios, in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated on a single bought or sold tranche alone, but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow as a function of default correlation and maturities. The Gaussian model used is not realistic in crisis situations, and the present system does not handle buying or selling a portion of a tranche, only the whole tranche. Nevertheless, the work provides the investor with relevant guidance on what to buy and sell, and when.
Abstract: CDMA cellular networks support soft handover, which guarantees the continuity of wireless services and enhances communication quality. Cellular networks support multimedia services under varied propagation environment conditions. In this paper, we show the effect of the characteristic parameters of the cellular environment on soft handover performance. We consider the path loss exponent, the standard deviation of shadow fading and the correlation coefficient of shadow fading as the characteristic parameters of the radio propagation environment. A very useful statistical measure for characterizing the performance of a mobile radio system is the probability of outage. Numerical results show that the above parameters have a decisive effect on the probability of outage, and hence on the overall performance of the soft handover algorithm.
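The outage-probability measure used above can be sketched for a single link: under log-normal shadowing, the received power in dB is Gaussian, so the probability of falling below the receiver threshold is a Q-function of the fade margin over the shadowing standard deviation. This is the standard single-link formula, not the paper's full soft-handover model with correlated shadowing across base stations.

```python
import math

def outage_probability(p_mean_db, p_threshold_db, sigma_db):
    """P[received power < threshold] for a dB-domain Gaussian (log-normal) fade,
    i.e. Q(margin / sigma) written via the complementary error function."""
    z = (p_mean_db - p_threshold_db) / (sigma_db * math.sqrt(2.0))
    return 0.5 * math.erfc(z)
```

A larger path loss exponent lowers `p_mean_db` at a given distance and a larger `sigma_db` flattens the Q-function; both raise the outage probability, which is the qualitative effect the numerical results in the abstract quantify.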