Abstract: The number of Ground Motion Prediction Equations
(GMPEs) used for predicting peak ground acceleration (PGA), and
the number of earthquake recordings used for fitting these
equations, have increased in the past decades. The current PF-L
database contains 3550 recordings. Since GMPEs frequently model
peak ground acceleration, the goal of the present study was to
refit a selection of 44 of the existing equation models for PGA in
light of the latest data. The Levenberg-Marquardt algorithm was used
to fit the coefficients of the equations, and the results are
evaluated both quantitatively, by reporting the root mean squared
error (RMSE), and qualitatively, by plotting the five best-fitted
equations. The RMSE was found to be as low as 0.08 for the
best equation models. The newly estimated coefficients differ from
the values published in the original works.
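The refitting procedure described above can be sketched with SciPy's Levenberg-Marquardt solver. The attenuation form below (ln PGA as a function of magnitude and distance) and the synthetic data are purely illustrative assumptions, not one of the 44 models from the study:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical attenuation form: ln(PGA) = c1 + c2*M + c3*ln(R + c4)
# Synthetic magnitudes M and distances R stand in for real recordings.
rng = np.random.default_rng(0)
M = rng.uniform(4.0, 7.5, 200)            # moment magnitudes
R = rng.uniform(5.0, 200.0, 200)          # source distances, km
true = np.array([-1.0, 0.9, -1.3, 10.0])  # "published" coefficients
ln_pga = true[0] + true[1] * M + true[2] * np.log(R + true[3]) \
         + rng.normal(0.0, 0.08, 200)     # observations with scatter

def residuals(c):
    """Misfit between the model and the observed ln(PGA) values."""
    return c[0] + c[1] * M + c[2] * np.log(R + c[3]) - ln_pga

# method='lm' selects the Levenberg-Marquardt algorithm (via MINPACK).
fit = least_squares(residuals, x0=[0.0, 1.0, -1.0, 5.0], method='lm')
rmse = np.sqrt(np.mean(fit.fun ** 2))
print(fit.x, rmse)
```

With the noise level set to 0.08, the recovered RMSE is of the same order as the best values reported in the abstract.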
Abstract: In this study, four Holstein steers fitted with rumen
fistulas were fed 7 kg of dry matter (DM) of diets differing in
concentrate-to-alfalfa-hay ratio (60:40, 70:30, 80:20, and 90:10)
in a 4 × 4 Latin square design. The pH of the ruminal fluid was
measured from before the morning feeding (0 h) to 8 h post feeding.
A two-layer feed-forward neural network trained by the
Levenberg-Marquardt algorithm was used to model ruminal pH. The
input variables of the network were time, concentrate-to-alfalfa-hay
ratio (C/F), non-fiber carbohydrate (NFC), and neutral detergent
fiber (NDF); the output variable was ruminal pH. The modeling
results showed excellent agreement between the experimental data
and the predicted values, with a high coefficient of determination
(R² > 0.96). We therefore suggest using these model-derived
biological values to summarize continuously recorded pH data.
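A two-layer network trained by Levenberg-Marquardt can be sketched by treating the flattened weights as the parameter vector of a least-squares problem. The four inputs mirror the study's variables, but the data below are synthetic and the hidden-layer size is an arbitrary assumption:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic stand-ins for (time, C/F, NFC, NDF) -> ruminal pH.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (80, 4))
y = 6.2 - 0.8 * X[:, 0] + 0.3 * np.sin(3 * X[:, 1]) + 0.1 * X[:, 2]

n_in, n_hid = 4, 5                      # hidden size chosen arbitrarily
sizes = [(n_hid, n_in), (n_hid,), (1, n_hid), (1,)]

def unpack(w):
    """Split the flat parameter vector back into weight/bias arrays."""
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)          # hidden layer
    return (h @ W2.T + b2).ravel()      # linear output layer

def residuals(w):
    return forward(w, X) - y

w0 = rng.normal(0.0, 0.5, sum(int(np.prod(s)) for s in sizes))
fit = least_squares(residuals, w0, method='lm')   # LM training
r2 = 1 - np.sum(fit.fun ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)
```

Because LM solves for all weights jointly using the Jacobian of the residuals, it typically converges in far fewer epochs than plain backpropagation on small networks like this one.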
Abstract: Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit model of the data is assumed. Because of the high dimensionality of the data, linearization of the training problem through orthogonal basis functions is not desirable; the focus is therefore functional minimization on any basis. A number of methods based on local gradient and Hessian matrices are discussed, and modifications of several first- and second-order training methods are considered. Experiments on share-rate data show that conjugate gradient and quasi-Newton methods outperform gradient descent methods, and the Levenberg-Marquardt algorithm is of special interest in financial forecasting.
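The first-order versus second-order comparison above can be illustrated on a standard test function rather than the paper's share-rate data. The sketch below pits plain gradient descent against conjugate gradient (CG) and quasi-Newton (BFGS) on the Rosenbrock function; the learning rate and step count are assumptions chosen only to keep gradient descent stable:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Illustrative comparison on the Rosenbrock function (not the paper's data).
x0 = np.array([-1.2, 1.0])

def gradient_descent(x, lr=1e-3, steps=5000):
    """Plain first-order descent with a fixed learning rate."""
    for _ in range(steps):
        x = x - lr * rosen_der(x)
    return x

gd = gradient_descent(x0)
cg = minimize(rosen, x0, jac=rosen_der, method='CG')      # conjugate gradient
bfgs = minimize(rosen, x0, jac=rosen_der, method='BFGS')  # quasi-Newton
print(rosen(gd), cg.fun, bfgs.fun)
```

On this ill-conditioned valley, CG and BFGS reach the minimum to near machine precision while fixed-step gradient descent is still far from it after thousands of iterations, which is the qualitative ranking the abstract reports.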
Abstract: Financial forecasting is an example of a signal processing problem. A number of ways to train the network are available. We have used the Levenberg-Marquardt algorithm for error back-propagation weight adjustment. Pre-processing of the data moved much of the variation from large scale to small scale, reducing the variation in the training data.
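The abstract does not specify its pre-processing steps, but one common way to shift variation from large scale to small scale in a price series is to replace raw prices with standardized log-returns, a hedged sketch of which follows (the price values are made up):

```python
import numpy as np

# Hypothetical price series; log-returns remove the large-scale trend,
# and standardization rescales the remaining small-scale variation.
prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1, 101.0])
log_returns = np.diff(np.log(prices))
scaled = (log_returns - log_returns.mean()) / log_returns.std()
print(scaled.round(3))
```

The resulting series has zero mean and unit standard deviation, which keeps network inputs in a range where training is well conditioned.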
Abstract: In this paper a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method, and a simulation verifies the results of the proposed method.
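The damping schedule is the part of Levenberg-Marquardt such modifications typically adjust. For reference, a minimal textbook LM loop (not the paper's modified variant) looks like this, applied to a small exponential fit with made-up data:

```python
import numpy as np

def levenberg_marquardt(resid, jac, x, lam=1e-3, iters=100):
    """Textbook LM: lam shrinks on accepted steps (toward Gauss-Newton)
    and grows on rejected steps (toward damped gradient descent)."""
    r = resid(x)
    cost = r @ r
    for _ in range(iters):
        J = jac(x)
        A = J.T @ J + lam * np.eye(x.size)      # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        r_new = resid(x + step)
        cost_new = r_new @ r_new
        if cost_new < cost:                     # accept the step
            x, r, cost, lam = x + step, r_new, cost_new, lam * 0.5
        else:                                   # reject, increase damping
            lam *= 2.0
    return x

# Fit y = a * exp(b * t) to noiseless synthetic data with a=2, b=-1.5.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t)
resid = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t),
                          p[0] * t * np.exp(p[1] * t)], axis=1)
p = levenberg_marquardt(resid, jac, np.array([1.0, 0.0]))
print(p)
```

The accept/reject test on `cost_new` is what damps oscillation in standard LM; a modification such as the one proposed would alter how `lam` is updated around that test.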