Abstract: The back-propagation (BP) algorithm is a widely used technique for training artificial neural networks and has been applied as a tool for solving time series problems. Known drawbacks of BP include long training times, a tendency to fall into local minima, and sensitivity to the initial weights and biases. This paper proposes an improvement of the BP technique called the improved control output hidden layer (IM-COH) algorithm. Combining IM-COH with the cuckoo search (CS) algorithm yields the cuckoo search improved control output hidden layer (CS-IM-COH) algorithm. The new algorithm is less sensitive to the initial weights and biases than the original BP algorithm. In this research, CS-IM-COH is compared with the original BP, IM-COH, and the original BP combined with CS (CS-BP) on four benchmark time series. The results show that CS-IM-COH gives the best forecasting results among the compared algorithms on the selected benchmarks.
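The abstract does not include the algorithm itself. Below is a minimal, illustrative sketch of the general idea it describes: using cuckoo search to choose the initial weights of a small network, then refining them with plain back-propagation. The network size, the Levy-flight parameters, the toy sine series, and names such as cuckoo_search are assumptions made for illustration; this shows the generic CS-plus-BP combination, not the paper's IM-COH modification itself.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

# Toy one-step-ahead task: predict x[t] from the three previous values
# of a sine series (an assumed stand-in for the paper's benchmarks).
series = np.sin(np.linspace(0, 8 * np.pi, 203))
X = np.stack([series[i:i + 3] for i in range(200)])   # inputs, shape (200, 3)
y = series[3:203]                                     # targets, shape (200,)

N_IN, N_HID = 3, 5
N_PARAMS = N_IN * N_HID + N_HID + N_HID + 1           # W1, b1, W2, b2

def unpack(theta):
    i = N_IN * N_HID
    W1 = theta[:i].reshape(N_IN, N_HID)
    b1 = theta[i:i + N_HID]
    W2 = theta[i + N_HID:i + 2 * N_HID]
    b2 = theta[i + 2 * N_HID]
    return W1, b1, W2, b2

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))          # sigmoid hidden layer
    return np.mean((h @ W2 + b2 - y) ** 2)            # linear output, squared error

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

def cuckoo_search(n_nests=15, pa=0.25, iters=300, alpha=0.01):
    nests = rng.uniform(-1, 1, (n_nests, N_PARAMS))   # each nest = one weight vector
    fit = np.array([mse(n) for n in nests])
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        # A cuckoo lays a new solution via a Levy flight around a random nest.
        j = rng.integers(n_nests)
        trial = nests[j] + alpha * levy_step(N_PARAMS) * (nests[j] - best)
        if (f := mse(trial)) < fit[j]:
            nests[j], fit[j] = trial, f
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        worst = np.argsort(fit)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-1, 1, (len(worst), N_PARAMS))
        fit[worst] = [mse(n) for n in nests[worst]]
    return nests[np.argmin(fit)]

def bp_step(theta, lr=0.05):
    # One epoch of plain batch back-propagation (gradient descent).
    W1, b1, W2, b2 = unpack(theta)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    err = (h @ W2 + b2 - y) * 2 / len(y)              # dMSE/dprediction
    dh = np.outer(err, W2) * h * (1 - h)              # back-propagated to hidden layer
    grad = np.concatenate([(X.T @ dh).ravel(), dh.sum(axis=0),
                           h.T @ err, [err.sum()]])
    return theta - lr * grad

theta = cuckoo_search()                               # CS picks the starting point
print("MSE after CS init:", mse(theta))
for _ in range(500):                                  # BP fine-tunes from there
    theta = bp_step(theta)
print("MSE after CS + BP:", mse(theta))
```

The design point the abstract makes is visible here: BP's final quality depends on where it starts, and CS replaces a blind random initialization with a global search over starting points before gradient descent takes over.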
Abstract: The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is combined with a modified back-propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLPs), called CGFR/AG. The approach presented in the paper consists of three steps: (1) modifying the standard back-propagation algorithm by introducing a gain-variation term in the activation function; (2) calculating the gradient of the error with respect to both the weights and the gain values; and (3) determining a new search direction from the gradient computed in step (2) together with the previous search direction. The proposed method improves the training efficiency of the back-propagation algorithm by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from a neural network toolbox on the chosen benchmark. The results show that the proposed method converges in fewer than 20% of the iterations required by the standard conjugate gradient and the neural network toolbox algorithm.
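To make the three listed steps concrete, here is a minimal sketch under stated assumptions: a sigmoid scaled by a trainable per-neuron gain (step 1), one gradient vector covering weights and gains (step 2), and a Fletcher-Reeves conjugate-gradient direction with a simple backtracking line search (step 3). The tiny network, the toy data, the line search, and the restart rule are my assumptions; the paper's exact CGFR/AG update may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 2))                 # assumed toy regression task
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

N_IN, N_HID = 2, 4

def unflatten(v):
    i = N_IN * N_HID
    return (v[:i].reshape(N_IN, N_HID),          # W1
            v[i:i + N_HID],                      # b1
            v[i + N_HID:i + 2 * N_HID],          # c: per-neuron gain
            v[i + 2 * N_HID:i + 3 * N_HID],      # W2
            v[i + 3 * N_HID])                    # b2

def loss_and_grad(v):
    W1, b1, c, W2, b2 = unflatten(v)
    z = X @ W1 + b1
    h = 1.0 / (1.0 + np.exp(-c * z))             # gain-scaled sigmoid (step 1)
    pred = h @ W2 + b2
    err = (pred - y) * 2 / len(y)                # dMSE/dprediction
    # Step 2: gradient with respect to the weights *and* the gains.
    da = np.outer(err, W2) * h * (1 - h)         # dL/d(c*z)
    grad = np.concatenate([(X.T @ (da * c)).ravel(),   # dL/dW1
                           (da * c).sum(axis=0),       # dL/db1
                           (da * z).sum(axis=0),       # dL/dc (gain gradient)
                           h.T @ err,                  # dL/dW2
                           [err.sum()]])               # dL/db2
    return np.mean((pred - y) ** 2), grad

n_params = N_IN * N_HID + 3 * N_HID + 1
v = rng.normal(0, 0.5, n_params)
v[N_IN * N_HID + N_HID:N_IN * N_HID + 2 * N_HID] = 1.0   # gains start at 1
f, g = loss_and_grad(v)
d = -g                                           # initial direction: steepest descent
for it in range(200):
    # Simple backtracking line search along d (a stand-in for the paper's).
    step, improved = 1.0, False
    while step > 1e-10:
        f_try, g_try = loss_and_grad(v + step * d)
        if f_try < f:
            v, f, improved = v + step * d, f_try, True
            break
        step *= 0.5
    if not improved:
        d = -g                                   # restart if the search stalls
        continue
    # Step 3: Fletcher-Reeves combines the new gradient with the old direction.
    beta = (g_try @ g_try) / (g @ g)
    d, g = -g_try + beta * d, g_try
print("final MSE:", f)
```

Treating the gains as extra trainable parameters effectively reshapes the activation slopes during training, which is what lets the search direction adapt beyond what weight gradients alone provide.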