Abstract: This paper presents a comparative study of two
neural network models, the General Regression Neural Network
(GRNN) and the Back Propagation Neural Network (BPNN), used
to estimate the radial overcut produced during Electrical Discharge
Machining (EDM). Four input parameters are employed:
discharge current (Ip), pulse on time (Ton), duty fraction (Tau) and
discharge voltage (V). Artificial intelligence techniques have
recently emerged as effective tools for replacing time-consuming
procedures in various scientific and engineering applications,
particularly in the prediction and estimation of complex
and nonlinear processes. Both networks are trained, and their
predictions are tested against an unseen validation set from the
experiment and analysed. The performance of both networks is
found to be in good agreement with the experiments, with an average
percentage error of less than 11% and a correlation coefficient on the
validation data set of more than 91% for both GRNN and BPNN. However,
the GRNN is much faster to train than the BPNN and is often more
accurate, although it requires more memory to store the model.
GRNN features fast learning that does not require
an iterative procedure, and a highly parallel structure, but GRNN
networks are slower than multilayer perceptron networks at classifying
new cases.
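The non-iterative training the abstract mentions can be seen in a minimal GRNN sketch: the prediction is simply a Gaussian-kernel-weighted average of the stored training targets, so "training" amounts to memorizing the data. The inputs and targets below are made-up placeholders, not the paper's EDM data, and the bandwidth `sigma` is an assumed value.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network: a kernel-weighted average
    of stored training targets (one pass, no iterative training)."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to all patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # normalized weighted mean
    return np.array(preds)

# Toy usage with hypothetical two-feature inputs and overcut-like targets:
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
y = np.array([0.10, 0.20, 0.30])
estimate = grnn_predict(X, y, np.array([[2.0, 2.0]]))
```

Because every training pattern is stored, memory grows with the data set, which matches the abstract's note that GRNN needs more memory and is slower at query time than a trained multilayer perceptron.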
Abstract: This paper presents dynamic voltage collapse prediction on an actual power system using support vector machines.
Dynamic voltage collapse prediction is first determined based on the PTSI calculated from information in dynamic simulation output. Simulations were carried out on a practical 87 bus test system by considering load increase as the contingency. The data collected from the time domain simulation is then used as input to the SVM in which support vector regression is used as a predictor to determine the
dynamic voltage collapse indices of the power system. To reduce training time and improve the accuracy of the SVM, the kernel function type and kernel parameter are considered. To verify the
effectiveness of the proposed SVM method, its performance is compared with that of a multilayer perceptron neural network (MLPNN). Studies show that the SVM gives faster and more accurate results for dynamic voltage collapse prediction than the MLPNN.
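A minimal sketch of the regression step, assuming scikit-learn's `SVR`: the features and the target below are synthetic stand-ins for the simulation-derived inputs and the voltage-collapse index, and the kernel choice and parameters (`C`, `gamma`) are the knobs the abstract says must be tuned for speed and accuracy.

```python
import numpy as np
from sklearn.svm import SVR  # support vector regression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 4))     # synthetic stand-in for simulation features
y = np.sin(X[:, 0] * 3.0) + 0.1 * X[:, 1]    # synthetic stand-in for a collapse index

# Kernel function type and kernel parameters are the tuning choices
# highlighted in the abstract (values here are illustrative only).
model = SVR(kernel="rbf", C=10.0, gamma=1.0).fit(X, y)
pred = model.predict(X[:5])
```

In practice the kernel and its parameters would be selected by cross-validation on the simulation data rather than fixed as above.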
Abstract: By the application of an improved back-propagation
neural network (BPNN), a model of current densities for a solid oxide
fuel cell (SOFC) with 10 layers is established in this study. To build
the learning data of the BPNN, a Taguchi orthogonal array is applied to
arrange the conditions of the operating parameters, in which a total of
7 factors act as the inputs of the BPNN. The average current densities
obtained by a numerical method act as the outputs of the BPNN.
Compared with the direct solution, the learning errors for all learning
data are smaller than 0.117%, and the prediction errors for 27
forecasting cases are less than 0.231%. The results show that the
presented model effectively provides a mathematical algorithm to predict
the performance of a SOFC stack in real time.
In addition, the calculating algorithms are applied to optimize the
average current density of a SOFC stack. The
operating performance window of a SOFC stack is found to be
between 41137.11 and 53907.89. Furthermore, an inverse prediction
model of the operating parameters of a SOFC stack is developed here
using the calculating algorithms of the improved BPNN, and it is shown
to effectively predict the operating parameters needed to achieve a
desired performance output of a SOFC stack.
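The surrogate-modeling idea, a backpropagation network mapping 7 operating factors to a scalar response, can be sketched in pure NumPy. This is a generic one-hidden-layer backprop loop, not the paper's improved BPNN; the inputs and the target function are synthetic placeholders for the Taguchi-arranged factors and the numerically computed current densities.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(64, 7))            # 7 operating factors (synthetic)
y = np.tanh(X @ rng.normal(size=7))[:, None]    # synthetic response to learn

W1 = rng.normal(scale=0.5, size=(7, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                    # forward: hidden layer
    out = h @ W2 + b2                           # forward: linear output
    err = out - y                               # gradient of 0.5*MSE w.r.t. output
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)            # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2              # gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained, evaluating the network is a handful of matrix products, which is what makes the "real time" prediction claim plausible compared with rerunning the numerical SOFC solver.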
Abstract: The ElectroEncephaloGram (EEG) is useful for
clinical diagnosis and biomedical research. EEG signals often
contain strong ElectroOculoGram (EOG) artifacts produced
by eye movements and eye blinks especially in EEG recorded
from frontal channels. These artifacts obscure the underlying
brain activity, making its visual or automated inspection
difficult. The goal of ocular artifact removal is to remove
ocular artifacts from the recorded EEG, leaving the underlying
background signals due to brain activity. In recent times,
Independent Component Analysis (ICA) algorithms have
demonstrated superior potential in obtaining the least
dependent source components. In this paper, the independent
components are obtained using the JADE algorithm (the best
separating algorithm) and are classified as either artifact
components or neural components. A neural network is used for
the classification of the obtained independent components.
The neural network requires input features that faithfully represent
the true character of the input signals, so that the network
can classify the signals based on the key characteristics
that differentiate them. In this
work, Auto Regressive (AR) coefficients are used as the input
features for classification. Two neural network approaches
are used to learn classification rules from EEG data: first, a
Polynomial Neural Network (PNN) trained by the GMDH (Group
Method of Data Handling) algorithm, and second, a
feed-forward neural network classifier trained by a standard
back-propagation algorithm. The results show that JADE-FNN
performs better than JADE-PNN.
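The AR-coefficient feature extraction can be sketched as a least-squares fit of each signal's lagged values; the coefficient vector then serves as the classifier's input feature vector. The model order and the demonstration signal below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ar_coefficients(x, order=4):
    """Fit x[t] ~ a1*x[t-1] + ... + ap*x[t-p] by least squares and
    return the coefficient vector [a1, ..., ap] as a feature vector."""
    n = len(x)
    # design matrix: row t holds the lagged values x[t-1] ... x[t-order]
    A = np.column_stack([x[order - k : n - k] for k in range(1, order + 1)])
    b = x[order:]                                  # targets: the current samples
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

# Usage: recover the coefficients of a synthetic AR(2) process.
rng = np.random.default_rng(2)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)
coef = ar_coefficients(x, order=2)
```

Each independent component would be reduced to such a short coefficient vector, giving the PNN or feed-forward classifier a compact, fixed-length description of the signal's dynamics.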
Abstract: Recently, a lot of attention has been devoted to
advanced techniques of system modeling. The polynomial neural
network (PNN) is a GMDH-type (Group Method of Data
Handling) algorithm and one of the useful methods for modeling nonlinear
systems, but PNN performance depends strongly on the number of
input variables and the order of the polynomial, which are determined by
trial and error. In this paper, we introduce GPNN (genetic
polynomial neural network) to improve the performance of PNN.
GPNN determines the number of input variables and the order of all
neurons with a genetic algorithm (GA). We use the GA to search among
all possible values for the number of input variables and the order of
the polynomial. GPNN performance is evaluated on two nonlinear
systems: a quadratic equation and the Dow Jones stock
index time series.
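The structure search can be illustrated with a toy genetic algorithm over the two design choices the abstract names, the number of inputs and the polynomial order, using validation error of a plain least-squares polynomial fit as fitness. This is a deliberately simplified stand-in for the full GMDH network; the data, population size, and mutation scheme are all made up for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(120, 4))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1] ** 2   # true model: 2 inputs, order 2
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

def fitness(genome):
    """Validation MSE of a polynomial fit for genome = (n_inputs, order)."""
    n_inputs, order = genome
    cols = list(range(n_inputs))                # use the first n_inputs features
    terms, terms_va = [np.ones(len(Xtr))], [np.ones(len(Xva))]
    for deg in range(1, order + 1):             # all monomials up to 'order'
        for combo in itertools.combinations_with_replacement(cols, deg):
            terms.append(np.prod(Xtr[:, combo], axis=1))
            terms_va.append(np.prod(Xva[:, combo], axis=1))
    A, Av = np.column_stack(terms), np.column_stack(terms_va)
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return float(np.mean((Av @ w - yva) ** 2))  # lower = fitter

# GA loop: truncation selection plus +/-1 mutation on each gene.
pop = [(rng.integers(1, 5), rng.integers(1, 4)) for _ in range(8)]
for _ in range(10):
    pop.sort(key=fitness)
    survivors = pop[:4]
    children = [(max(1, min(4, p[0] + rng.integers(-1, 2))),
                 max(1, min(3, p[1] + rng.integers(-1, 2))))
                for p in survivors]
    pop = survivors + children
best = min(pop, key=fitness)
```

The point of the sketch is the division of labor: the GA proposes structures, while ordinary least squares fits the polynomial weights for each candidate, mirroring how GPNN lets the GA pick inputs and orders instead of trial and error.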
Abstract: In this paper, the processing of sonar signals is
carried out using a Minimal Resource Allocation Network (MRAN)
and a Probabilistic Neural Network (PNN) to differentiate
commonly encountered features in indoor environments. The
stability-plasticity behaviors of both networks are
investigated. Experimental results show that MRAN possesses
lower network complexity but experiences higher plasticity than
PNN. An enhanced version called parallel MRAN (pMRAN) is
proposed to solve this problem; it is shown to be stable in
prediction and also outperforms the original MRAN.
Abstract: The behavior of Radial Basis Function (RBF) networks depends greatly on how the center points of the basis functions are selected. In this work we investigate, for this purpose, the use of instance reduction techniques originally developed to reduce the storage requirements of instance-based learners. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference radial basis function networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracy and the latter requires less training time. The results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
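The pipeline the abstract describes, filtering instances and then using the survivors as RBF centers, can be sketched as follows. The ENN rule here is a simple version (keep a point only if its 3 nearest neighbors agree with its label), the Gaussian width is fixed at 1, and the two-cluster data set is synthetic; all of these are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two synthetic classes as Gaussian blobs.
X = np.vstack([rng.normal(0.0, 0.5, (40, 2)), rng.normal(2.0, 0.5, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

def enn_keep(X, y, k=3):
    """Edited Nearest Neighbor filter: keep indices whose k nearest
    neighbors (excluding self) have the same majority label."""
    keep = []
    for i in range(len(X)):
        d = np.sum((X - X[i]) ** 2, axis=1)
        nn = np.argsort(d)[1 : k + 1]            # k nearest, excluding self
        if np.bincount(y[nn], minlength=2).argmax() == y[i]:
            keep.append(i)
    return np.array(keep)

centers = X[enn_keep(X, y)]                      # filtered instances become centers
# Gaussian RBF design matrix (width 1), output weights by least squares.
Phi = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2))
w, *_ = np.linalg.lstsq(Phi, y.astype(float), rcond=None)
pred = (Phi @ w > 0.5).astype(int)
acc = float(np.mean(pred == y))
```

Because only the filtered subset supplies centers, the design matrix `Phi` shrinks relative to RBF-ALL, which is the source of the training-time savings the abstract reports.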