Abstract: Many scientific and engineering problems require the efficient solution of large systems of linear equations of the form Ax = b. LU decomposition is a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. We use the so-called Omega calculus as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. The Mathematica program DiophantineGF.m is then run. This program calculates the generating function from which the number of solutions to the system of Diophantine equalities can be found; this number gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well. Keywords: generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.
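As a minimal illustration of the counting step (not of the Omega calculus or the DiophantineGF.m program themselves), the number of lattice points satisfying a small system of linear Diophantine equalities can be found by brute-force enumeration over a bounding box; the example system below is hypothetical:

```python
from itertools import product

def count_lattice_points(equalities, bounds):
    """Count non-negative integer points satisfying every equality a.x = b."""
    count = 0
    for point in product(*(range(b + 1) for b in bounds)):
        if all(sum(a * x for a, x in zip(coeffs, point)) == rhs
               for coeffs, rhs in equalities):
            count += 1
    return count

# Hypothetical system: x + y + z = 4 over 0 <= x, y, z <= 4
n = count_lattice_points([((1, 1, 1), 4)], (4, 4, 4))
print(n)  # 15 lattice points, i.e. a lower bound of 15 processing elements
```

The generating-function approach of the paper obtains this count symbolically, which scales far beyond what direct enumeration can handle.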
Abstract: A new distance-adjusted approach is proposed in
which static square contours are defined around an estimated
symbol in a QAM constellation, which create regions that
correspond to fixed step sizes and weighting factors. As a
result, the equalizer tap adjustment consists of a linearly
weighted sum of adaptation criteria that is scaled by a variable
step size. This approach is the basis of two new algorithms: the
Variable step size Square Contour Algorithm (VSCA) and the
Variable step size Square Contour Decision-Directed
Algorithm (VSDA). The proposed schemes are compared with
existing blind equalization algorithms in the SCA family in
terms of convergence speed, constellation eye opening and
residual ISI suppression. Simulation results for 64-QAM
signaling over empirically derived microwave radio channels
confirm the efficacy of the proposed algorithms. An RTL
implementation of the blind adaptive equalizer based on the
proposed schemes is presented and the system is configured to
operate in VSCA error signal mode, for square QAM signals
up to 64-QAM.
Abstract: Chiu's method, which generates a Takagi-Sugeno Fuzzy Inference System (FIS), is a method of fuzzy rule extraction. The rule outputs are linear functions of the inputs and, in addition, these rules are not explicit for the expert. In this paper, we develop a method that generates a Mamdani FIS, where the rule outputs are fuzzy. The method proceeds in two steps: first, it uses the subtractive clustering principle to estimate both the number of clusters and the initial locations of the cluster centers. Each obtained cluster corresponds to a Mamdani fuzzy rule. Then, it optimizes the fuzzy model parameters by applying a genetic algorithm. The method is illustrated on a traffic network management application. We also propose a Mamdani fuzzy rule generation method for the case where the expert wants to classify the output variables into some predefined fuzzy classes.
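The first step can be sketched with a generic implementation of Chiu's subtractive clustering; the radii `ra`, `rb`, the stopping threshold and the toy data below are illustrative assumptions, not the paper's settings:

```python
import math

def subtractive_clustering(points, ra=1.0, rb=1.5, eps=0.15):
    """Estimate cluster centres by Chiu's subtractive clustering (sketch)."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / rb ** 2
    # Potential of each point: density of its neighbourhood
    pot = [sum(math.exp(-alpha * sum((a - b) ** 2 for a, b in zip(p, q)))
               for q in points) for p in points]
    p_first = max(pot)
    centres = []
    while True:
        k = max(range(len(points)), key=lambda i: pot[i])
        if pot[k] < eps * p_first:     # remaining potential too low: stop
            break
        centre = points[k]
        centres.append(centre)
        # Subtract the chosen centre's influence from every potential
        pot = [pot[i] - pot[k] * math.exp(
                   -beta * sum((a - b) ** 2 for a, b in zip(points[i], centre)))
               for i in range(len(points))]
    return centres

data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
print(subtractive_clustering(data))  # two centres, one per cluster
```

Each centre returned here would seed one Mamdani rule, whose parameters the genetic algorithm then tunes.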
Abstract: This paper presents parametric probability density
models for call holding times (CHTs) in an emergency call center,
based on actual data collected over more than a week in the public
Emergency Information Network (EIN) in Mongolia. When the set of
chosen candidates from the Gamma distribution family is fitted to the
call holding time data, it is observed that the whole area of the
empirical CHT histogram is underestimated, due to spikes of higher
probability and long tails of lower probability in the histogram.
Therefore, we provide a parametric model based on a mixture of
lognormal distributions, with explicit analytical expressions, for
the modeling of the CHTs of PSNs. Finally, we show that the CHTs for
PSNs are fitted reasonably well by a mixture of lognormal distributions
via simulation of the expectation-maximization (EM) algorithm. This
result is significant because it provides, in explicit form, a useful
mathematical tool: a mixture of lognormal distributions.
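A mixture of lognormals can be fitted by running Gaussian-mixture EM on log-transformed data. The sketch below (pure Python, two components, deterministic initialization, synthetic data) illustrates the idea; it is not the paper's estimator and the parameters are assumptions:

```python
import math, random

def em_lognormal_mixture(data, k=2, iters=200):
    """Fit a k-component lognormal mixture by EM on log-transformed data."""
    logs = sorted(math.log(x) for x in data)
    # Deterministic initialization: spread the means over the log range
    mu = [logs[(2 * j + 1) * len(logs) // (2 * k)] for j in range(k)]
    sigma = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for y in logs:
            dens = [w[j] / (sigma[j] * math.sqrt(2 * math.pi))
                    * math.exp(-(y - mu[j]) ** 2 / (2 * sigma[j] ** 2))
                    for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, log-means and log-std-devs
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(logs)
            mu[j] = sum(r[j] * y for r, y in zip(resp, logs)) / nj
            var = sum(r[j] * (y - mu[j]) ** 2 for r, y in zip(resp, logs)) / nj
            sigma[j] = max(math.sqrt(var), 1e-6)
    return w, mu, sigma

# Synthetic "holding times": two lognormal populations (log-means 0 and 3)
rng = random.Random(1)
samples = [math.exp(rng.gauss(0.0, 0.5)) for _ in range(200)] + \
          [math.exp(rng.gauss(3.0, 0.5)) for _ in range(200)]
w, mu, sigma = em_lognormal_mixture(samples)
print(sorted(mu))  # recovered log-means near 0.0 and 3.0
```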
Abstract: The shortest-path problem is a classical graph theory
problem, and it is applied in many fields. Shortest-path problems
can be divided into two kinds: single-source shortest paths and
all-pairs shortest paths. This article mainly addresses the problem
of all-pairs shortest paths and gives a new parallel algorithm for
it based on Dijkstra's algorithm. Finally, the paper implements the
parallel algorithm using C# multithreading technology.
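The paper's implementation is in C#; the following Python sketch conveys the same idea of running one Dijkstra instance per source vertex, each in its own thread (the graph is a toy example, and CPython's GIL limits true parallelism):

```python
import heapq, threading

def dijkstra(adj, src):
    """Single-source shortest paths with a binary-heap Dijkstra."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue          # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def all_pairs(adj):
    """All-pairs shortest paths: one Dijkstra per source, one thread each."""
    result = {}
    def work(s):
        result[s] = dijkstra(adj, s)
    threads = [threading.Thread(target=work, args=(s,)) for s in adj]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(all_pairs(graph)["a"]["c"])  # 3 (via b, cheaper than the direct edge)
```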
Abstract: In this paper a novel algorithm is proposed that integrates the processes of fuzzy hierarchy generation and rule discovery for the automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A concept of a frequency matrix (Freq) is introduced to summarize the large database; it helps to minimize the number of database accesses and supports the identification and removal of irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.
Abstract: With constraints on data availability, and for the study of power system stability, it is adequate to model the synchronous generator with the field circuit and one equivalent damper on the q-axis, known as model 1.1. This paper presents a systematic procedure for the modelling and simulation of a single-machine infinite-bus power system installed with a thyristor controlled series compensator (TCSC), where the synchronous generator is represented by model 1.1, so that the impact of the TCSC on power system stability can be more reasonably evaluated. The model of the example power system is developed using MATLAB/SIMULINK and can be used for teaching the power system stability phenomena, and also for research work, especially for developing generator controllers using advanced technologies. Further, the parameters of the TCSC controller are optimized using a genetic algorithm. Non-linear simulation results are presented to validate the effectiveness of the proposed approach.
Abstract: CDMA cellular networks support soft handover,
which guarantees the continuity of wireless services and enhances
communication quality. Cellular networks support multimedia
services under varied propagation environmental conditions. In this
paper, we have shown the effect of characteristic parameters of the
cellular environments on the soft handover performance. We
consider path loss exponent, standard deviation of shadow fading and
correlation coefficient of shadow fading as the characteristic
parameters of the radio propagation environment. A very useful
statistical measure for characterizing the performance of mobile radio
system is the probability of outage. It is shown through numerical
results that the above parameters have a decisive effect on the
probability of outage and hence on the overall performance of the
soft handover algorithm.
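For lognormal shadow fading, the probability of outage at a given fade margin reduces to a Gaussian Q-function of the margin divided by the shadowing standard deviation. The sketch below illustrates the qualitative effect of the shadowing spread; the margin and sigma values are illustrative, not the paper's:

```python
import math

def outage_probability(margin_db, sigma_db):
    """P(outage) = Q(margin / sigma) for lognormal shadowing, all in dB."""
    return 0.5 * math.erfc(margin_db / (sigma_db * math.sqrt(2)))

# A larger shadowing standard deviation raises outage for the same margin
print(round(outage_probability(10.0, 6.0), 3))   # 0.048
print(round(outage_probability(10.0, 12.0), 3))  # 0.202
```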
Abstract: A phylogenetic tree is a graphical representation of the
evolutionary relationships among three or more genes or organisms.
These trees show the relatedness of data sets, the divergence times
of species or genes, and the nature of their common ancestors. The
quality of a phylogenetic tree is judged by a parsimony criterion.
Various approaches have been proposed for constructing most
parsimonious trees. This paper is concerned with calculating and
minimizing the number of state changes that are needed, the task
addressed by small parsimony algorithms. It proposes an enhanced
small parsimony algorithm that gives a better score, based on the
number of evolutionary changes needed to produce the observed
sequence changes in the tree, and that also gives the ancestors of
the given input.
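The paper's enhancement is not reproduced here, but the classical small parsimony computation it builds on (Fitch's algorithm on a rooted binary tree) can be sketched as follows; the tree and leaf states are a toy example:

```python
def fitch(tree, leaf_states, root):
    """Fitch small parsimony: minimum number of state changes on a rooted
    binary tree. `tree` maps each internal node to its (left, right) children;
    `leaf_states` maps each leaf to its observed character state."""
    changes = 0

    def candidate(node):
        nonlocal changes
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = candidate(left), candidate(right)
        common = a & b
        if common:              # children agree: no change needed here
            return common
        changes += 1            # children disagree: one state change
        return a | b

    root_states = candidate(root)
    return changes, root_states

tree = {"root": ("x", "y"), "x": ("l1", "l2"), "y": ("l3", "l4")}
leaves = {"l1": "A", "l2": "A", "l3": "C", "l4": "G"}
changes, states = fitch(tree, leaves, "root")
print(changes)  # 2
```

The returned candidate set at the root is one way of reading off possible ancestral states, which is the flavour of output the abstract describes.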
Abstract: High-speed PM generators driven by micro-turbines are
widely used in smart grid systems. This paper therefore proposes a
comparative study of six classical, optimized and genetic analytical
design cases for 400 kW output power at a tip speed of 200 m/s.
The six design trials of High Speed Permanent Magnet Synchronous
Generators (HSPMSGs) are: classical sizing; unconstrained
optimization minimizing total losses; and constrained optimization
of total mass, with bounded constraints introduced in the problem
formulation. Then a genetic algorithm is formulated for obtaining
maximum efficiency while minimizing machine size. In the second
genetic problem formulation, we attempt to obtain minimum mass, with
the machine sizing constrained by a non-linear constraint function of
the machine losses. Finally, an optimum torque-per-ampere genetic
sizing is predicted. All results are simulated with MATLAB, the
Optimization Toolbox and its Genetic Algorithm. Finally, comparisons
of the six analytical design examples are introduced, with a study
of machine waveforms, THD and rotor losses.
Abstract: This paper proposes a method of adaptively generating a gait pattern for a biped robot. The gait synthesis is based on analysis of the human gait pattern. The proposed method can easily be applied to generate a natural and stable gait pattern for any biped robot. To analyze the human gait pattern, sequential images of the human gait in the sagittal plane are acquired, from which the gait control values are extracted. The gait pattern of the biped robot in the sagittal plane is adaptively generated by a genetic algorithm using the human gait control values. However, gait trajectories of the biped robot in the sagittal plane are not enough to construct the complete gait pattern, because the biped robot moves in 3-dimensional space. Therefore, the gait pattern in the frontal plane, generated from the Zero Moment Point (ZMP), is added to the gait pattern acquired in the sagittal plane. Consequently, a natural and stable walking pattern for the biped robot is obtained.
Abstract: The applicability of tuning the controller gains for a Stewart manipulator using a genetic algorithm as an efficient search technique is investigated. Kinematics and dynamics models are introduced in detail for simulation purposes. A PD task-space control scheme is used. To demonstrate the feasibility of the technique, a numerical model of a Stewart manipulator was built. A genetic algorithm was then employed to search for the optimal controller gains. The controller was tested on a generic circular mission. The simulation results show that the technique converges quickly and delivers superior performance for different payloads.
Abstract: The speech signal conveys information about the
identity of the speaker. The area of speaker identification is
concerned with extracting the identity of the person speaking an
utterance. As speech interaction with computers becomes more
pervasive in activities such as telephone use, financial transactions
and information retrieval from speech databases, the utility of
automatically identifying a speaker based solely on vocal
characteristics grows. This paper focuses on text-dependent speaker
identification, which deals with detecting a particular speaker from
a known population. The system prompts the user to provide a speech
utterance; it then identifies the user by comparing the codebook of
the utterance with those stored in the database and lists the most
likely speakers who could have given that utterance. The speech
signal is recorded for N speakers, and features are then extracted.
Feature extraction is done by means of LPC coefficients, calculation
of the AMDF, and the DFT. A neural network is trained by applying
these features as input parameters, and the features are stored in
templates for further comparison. The features of the speaker to be
identified are extracted and compared with the stored templates
using the backpropagation algorithm. Here the trained network
corresponds to the output, and the input is the extracted features
of the speaker to be identified. The network performs the weight
adjustment, and the best match is found to identify the speaker. The
number of epochs required to reach the target decides the network
performance.
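Of the features mentioned, the AMDF is the simplest to sketch. A minimal pure-Python version follows; the test signal and lag range are illustrative, not taken from the paper:

```python
import math

def amdf(signal, max_lag):
    """Average Magnitude Difference Function: dips at multiples of the
    pitch period, so the smallest value marks the period estimate."""
    n = len(signal)
    return [sum(abs(signal[i] - signal[i + k]) for i in range(n - k)) / (n - k)
            for k in range(1, max_lag + 1)]

# A periodic signal with period 8 samples gives a near-zero AMDF at lag 8
sig = [math.sin(2 * math.pi * i / 8) for i in range(64)]
values = amdf(sig, 12)
best_lag = min(range(len(values)), key=lambda k: values[k]) + 1
print(best_lag)  # 8
```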
Abstract: Wireless sensor networks (WSN) are currently
receiving significant attention due to their unlimited potential.
These networks are used for various applications, such as habitat
monitoring, automation, agriculture, and security. Efficient
node-energy utilization is one of the important performance factors
in wireless sensor networks because sensor nodes operate with
limited battery power. In this paper, we propose the MiSense
hierarchical cluster-based routing algorithm (MiCRA) to extend the
lifetime of sensor networks and to maintain a balanced energy
consumption among nodes. MiCRA is an extension of the HEED algorithm
with two levels of cluster heads. The performance of the proposed
protocol has been examined and evaluated through a simulation study.
The simulation results clearly show that MiCRA has a better
performance in terms of lifetime than HEED. Indeed, our proposed
protocol MiCRA can effectively extend the network lifetime without
other critical overheads or performance degradation. It has been
noted that there are energy savings of about 35% for MiCRA during
the clustering process and 65% during the routing process compared
to the HEED algorithm.
Abstract: Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite-precision errors, and efficient implementation. In contrast, they have the major disadvantage of needing a higher order (more coefficients) than an IIR counterpart with comparable performance. The high order imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse-shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, system characteristics such as power consumption, area usage, and processing time are also reduced. The proposed algorithm is more powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high-order digital FIR filters. Here, the use of DA eliminates the need for multipliers when implementing the multiply-and-accumulate (MAC) unit, while the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output through the minimization of the non-zero coefficient values.
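One simple way to reduce the number of non-zero taps (which may well differ from the paper's algorithm) is to zero out the smallest-magnitude coefficients and skip them in the multiply-accumulate loop. A hedged sketch, where `keep_fraction` and the example taps are assumptions:

```python
def prune_fir(coeffs, keep_fraction=0.5):
    """Zero out the smallest-magnitude taps, keeping the given fraction."""
    keep = max(1, int(len(coeffs) * keep_fraction))
    threshold = sorted((abs(c) for c in coeffs), reverse=True)[keep - 1]
    return [c if abs(c) >= threshold else 0.0 for c in coeffs]

def fir_output(coeffs, samples):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, h in enumerate(coeffs):
            if h != 0.0 and n - k >= 0:   # skipping zero taps saves MACs
                acc += h * samples[n - k]
        out.append(acc)
    return out

h = [0.01, -0.04, 0.25, 0.5, 0.25, -0.04, 0.01]
h_pruned = prune_fir(h, keep_fraction=0.5)
print(h_pruned)  # [0.0, 0.0, 0.25, 0.5, 0.25, 0.0, 0.0]
```

The pruned response keeps the dominant taps, so the frequency response is only mildly perturbed while the MAC count drops with the number of zeroed coefficients.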
Abstract: This paper presents a complete procedure for tool path
planning and blade machining in 5-axis manufacturing. The actual
cutting contact and cutter locations can be determined from lead and
tilt angles. The tool path generation is implemented by piecewise
curve approximation and chordal deviation detection. An application
of the drive surface method improves the flexibility of tool control
and the stability of machine motion. A real manufacturing process is
proposed that separates the operation into three regions with five
stages and modifies the local tool orientation with an interactive
algorithm.
Abstract: Prediction of the aerodynamic characteristics and shape
optimization of an airfoil under the ground effect have been carried
out by integrating computational fluid dynamics and a multi-objective
Pareto-based genetic algorithm. The main flow characteristics around
an airfoil of a WIG craft are the lift force, the lift-to-drag ratio
and the static height stability (H.S). However, they show a strong
trade-off, so that it is not easy to satisfy the design requirements
simultaneously. This difficulty can be resolved by optimal design.
The three characteristics mentioned above are chosen as the
objective functions, and the NACA0015 airfoil is considered as the
baseline model in the present study. The profile of the airfoil is
constructed by Bezier curves with fourteen control points, and these
control points are adopted as the design variables. For
multi-objective optimization problems, the optimal solutions are not
unique but form a set of non-dominated optima, called Pareto
frontiers or Pareto sets. As the result of the optimization, forty
non-dominated Pareto optima were obtained after thirty evolutions.
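The profile parameterization can be illustrated with a Bernstein-basis Bezier evaluation; the four control points below are hypothetical (the paper uses fourteen per profile):

```python
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t via the Bernstein basis."""
    n = len(control_points) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * p[0]
            for i, p in enumerate(control_points))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * p[1]
            for i, p in enumerate(control_points))
    return x, y

# Hypothetical airfoil-like camber line from four control points
pts = [(0.0, 0.0), (0.3, 0.1), (0.7, 0.1), (1.0, 0.0)]
print(bezier(pts, 0.0))  # (0.0, 0.0): curve starts at the first point
print(bezier(pts, 1.0))  # (1.0, 0.0): curve ends at the last point
```

Moving the control points moves the whole curve smoothly, which is why they make convenient design variables for the genetic algorithm.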
Abstract: This paper explores university course timetabling
problem. There are several characteristics that make scheduling and
timetabling problems particularly difficult to solve: they have huge
search spaces, they are often highly constrained, they require
sophisticated solution representation schemes, and they usually
require very time-consuming fitness evaluation routines. Thus
standard evolutionary algorithms lack the efficiency to deal with
them. In this paper we propose a memetic algorithm that incorporates
problem-specific knowledge such that most of the chromosomes
generated are decoded into feasible solutions. Generating a vast
number of feasible chromosomes allows the search process to make
progress in a time-efficient manner. Experimental results exhibit
the advantages of the developed hybrid genetic algorithm over the
standard genetic algorithm.
Abstract: In the present work, we propose a new technique to
enhance the learning capabilities and reduce the computation
intensity of a competitive learning multi-layered neural network
using the K-means clustering algorithm. The proposed model uses a
multi-layered network architecture with a back-propagation learning
mechanism. The K-means algorithm is first applied to the training
dataset to reduce the number of samples to be presented to the
neural network, by automatically selecting an optimal set of
samples. The obtained results demonstrate that the proposed
technique performs exceptionally well in terms of both accuracy and
computation time when applied to the KDD99 dataset, compared to a
standard learning schema that uses the full dataset.
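The data-reduction step can be sketched with plain Lloyd's K-means, replacing the training set by k representative centroids; the dataset below is a toy stand-in for KDD99:

```python
import math, random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's K-means; returns the k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centroids

# Toy stand-in for a large training set: two well-separated groups
data = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.2), (4.0, 4.0), (4.2, 4.1), (4.1, 3.9)]
reduced = kmeans(data, 2)
print(sorted(reduced))  # two centroids, one per group
```

The neural network would then be trained on the (much smaller) centroid set rather than on every raw sample.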
Abstract: Finding minimal forms of logical functions has important applications in the design of logic circuits. This task is solved by many different methods but, frequently, they are not suitable for computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a unique procedure of computing and thus can be simply implemented but, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable method for finding an optimum for logical functions with a high number of values, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem, which, unfortunately, is an NP-hard combinatorial problem. Therefore it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
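While the paper's solver is a genetic algorithm, the set covering interpretation itself can be illustrated with a simple greedy heuristic over prime implicants; the minterm/implicant table below is a toy example, not taken from the paper:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation: repeatedly pick the subset covering the most
    still-uncovered elements until everything is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Minterms to cover, and the minterms each prime implicant covers (toy data)
minterms = {0, 1, 2, 5, 6, 7}
implicants = {"A": {0, 1}, "B": {1, 2}, "C": {5, 7}, "D": {6, 7}, "E": {0, 2}}
print(sorted(greedy_set_cover(minterms, implicants)))
```

The greedy rule gives a logarithmic-factor approximation guarantee; the genetic algorithm in the paper searches the same cover space but can escape greedy's suboptimal tie-breaking.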