Abstract: An artificial neural network (ANN) can be trained using
back-propagation (BP), the most widely used algorithm for
supervised learning with multi-layered feed-forward networks.
Efficient learning by the BP algorithm is required for many practical
applications. The BP algorithm calculates the weight changes of
artificial neural networks, and a common approach is to use a two-term
algorithm consisting of a learning rate (LR) and a momentum
factor (MF). The major drawbacks of the two-term BP learning
algorithm are the problems of local minima and slow convergence
speeds, which limit the scope for real-time applications. Recently the
addition of an extra term, called a proportional factor (PF), to the
two-term BP algorithm was proposed. The third term increases the speed
of the BP algorithm. However, the PF term can also impair the
convergence of the BP algorithm, and criteria for evaluating
convergence are required to facilitate the application of the
three-term BP algorithm. Although speed and convergence seem to be
closely related, as described later, we summarize various improvements
that overcome these drawbacks. Here we compare the different
convergence criteria of the new three-term BP algorithm.
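To make the update rules concrete, the following is a minimal sketch of the two updates, assuming NumPy arrays for the weights and gradients; the function names are illustrative, and the exact form of the proportional term (here the raw output error e(t)) is an assumption, since the abstract does not fix it.

```python
import numpy as np

def two_term_update(w, grad, prev_dw, alpha, beta):
    """Two-term BP step: learning-rate (LR) term plus momentum (MF) term."""
    dw = -alpha * grad + beta * prev_dw
    return w + dw, dw

def three_term_update(w, grad, prev_dw, e, alpha, beta, gamma):
    """Three-term BP step: adds a proportional-factor (PF) term driven by
    the current output error e(t), taken here as a scalar (or
    broadcastable) error signal for this weight's layer."""
    dw = -alpha * grad + beta * prev_dw + gamma * e
    return w + dw, dw
```

Both functions return the updated weights together with the step Δw(t), which must be carried over as prev_dw for the momentum term at the next iteration.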
Abstract: The back-propagation algorithm calculates the weight
changes of an artificial neural network, and a two-term algorithm
with a dynamically optimal learning rate and a momentum factor
is commonly used. Recently the addition of an extra term, called a
proportional factor (PF), to the two-term BP algorithm was proposed.
The third term increases the speed of the BP algorithm. However,
the PF term can also impair the convergence of the BP algorithm, and
optimization approaches for evaluating the learning parameters are
required to facilitate the application of the three-term BP algorithm.
This paper considers the optimization of the new back-propagation
algorithm by using derivative information. A family of approaches
exploiting the derivatives with respect to the learning rate, momentum
factor and proportional factor is presented. These approaches
autonomously compute the derivatives in the weight space, using
information gathered from the forward and backward procedures. The three-term
BP algorithm and the optimization approaches are evaluated using
the benchmark XOR problem.
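The abstract only names the idea of exploiting derivatives with respect to the learning parameters; one standard realization, sketched below under that assumption, differentiates the error through the previous update step. For a plain gradient step w_t = w_{t-1} - α∇E(w_{t-1}), the chain rule gives ∂E(w_t)/∂α = -∇E(w_t)·∇E(w_{t-1}). The meta-step size eta and the function names are assumptions, not the paper's exact method.

```python
import numpy as np

def hypergradient_lr_step(w, grad_fn, alpha, prev_grad, eta=1e-4):
    """One gradient step whose learning rate alpha is adapted from
    derivative information: dE(w_t)/dalpha = -grad E(w_t) . grad E(w_{t-1})."""
    grad = grad_fn(w)
    if prev_grad is not None:
        # Move alpha against the derivative of the error w.r.t. alpha.
        alpha -= eta * (-np.dot(grad, prev_grad))
    return w - alpha * grad, alpha, grad

# Usage on a toy quadratic E(w) = 0.5 * ||w||^2, whose gradient is w.
grad_fn = lambda w: w
w, alpha, prev = np.ones(3), 0.1, None
for _ in range(100):
    w, alpha, prev = hypergradient_lr_step(w, grad_fn, alpha, prev)
```

The same pattern extends to the momentum and proportional factors by differentiating the update with respect to beta and gamma.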
Abstract: In this work, the bending fatigue life of notched
specimens with various notch geometries and dimensions is
investigated by experiment and by the Manson-Coffin theoretical
method. In this theoretical method, the fatigue life of notched
specimens is calculated using the fatigue life obtained from
experiments on plain (unnotched) specimens. Three notch geometries,
U-shaped, V-shaped and C-shaped notches, are considered in this
investigation. The experiments are conducted on a Moore rotating
bending machine. The specimens are made of a low-carbon steel alloy,
which has wide application in industry. The stress-life curves are
obtained for all notched specimens by experiment. The results indicate
that the Manson-Coffin analytical method cannot adequately predict
the fatigue life of notched specimens. However, it seems that the
difference between the experiments and the Manson-Coffin predictions
can be compensated for by a proportional factor.
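For reference, the strain-life relation underlying the Manson-Coffin method is conventionally written as follows (the standard form, not taken from this abstract):

\[
\frac{\Delta\varepsilon}{2} \;=\; \frac{\sigma'_f}{E}\,(2N_f)^{b} \;+\; \varepsilon'_f\,(2N_f)^{c}
\]

where Δε/2 is the strain amplitude, E the elastic modulus, 2N_f the number of reversals to failure, σ'_f and b the fatigue strength coefficient and exponent, and ε'_f and c the fatigue ductility coefficient and exponent.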
Abstract: The back-propagation algorithm calculates the weight
changes of artificial neural networks, and a common approach is to
use a training algorithm consisting of a learning rate and a
momentum factor. The major drawbacks of the above learning algorithm
are the problems of local minima and slow convergence speeds. The
addition of an extra term, called a proportional factor (PF), reduces
the convergence time of the back-propagation algorithm. We have applied
three-term back-propagation to multiplicative neural network
learning. The algorithm is tested on the XOR and parity problems and
compared with the standard back-propagation training algorithm.
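As an illustration of what the multiplicative learning above might look like, the sketch below trains a single multiplicative neuron, y = sigmoid(prod_i (w_i x_i + b_i)), on XOR with the three-term update. The multiplicative-neuron form, the parameter values, and the use of the raw output error e(t) as the proportional term are all assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0.0, 1.0, 1.0, 0.0])

# Single multiplicative neuron: y = sigmoid(prod_i (w_i * x_i + b_i))
w = rng.normal(size=2)
b = rng.normal(size=2)

alpha, beta, gamma = 0.5, 0.3, 0.05      # LR, MF, PF (assumed values)
prev_dw, prev_db = np.zeros(2), np.zeros(2)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for epoch in range(20000):
    for x, t in zip(X, d):
        terms = w * x + b                # the factors (w_i * x_i + b_i)
        u = terms.prod()
        y = sigmoid(u)
        e = t - y                        # output error e(t)
        delta = -e * y * (1.0 - y)       # dE/du for E = 0.5 * e**2
        # product of the *other* factors (avoids dividing by a zero factor)
        others = np.array([np.delete(terms, i).prod() for i in range(2)])
        gw = delta * others * x          # dE/dw_i
        gb = delta * others              # dE/db_i
        # three-term step: LR term + momentum term + proportional term
        dw = -alpha * gw + beta * prev_dw + gamma * e
        db = -alpha * gb + beta * prev_db + gamma * e
        w, b = w + dw, b + db
        prev_dw, prev_db = dw, db

print([round(sigmoid((w * x + b).prod()), 3) for x in X])  # ideally near [0, 1, 1, 0]
```

Because the factors multiply rather than add, a single multiplicative neuron can realize XOR, which a single additive perceptron cannot; the parity problems mentioned above generalize this test to more inputs.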