Abstract: In this paper, we examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods, namely the Gauss-Seidel and Jacobi approaches, for solving systems of linear equations of the form Ax = b, where A is a real n x n symmetric positive definite matrix. We carried out the algorithmic iterative steps and obtained analytical solutions of a typical 3 x 3 symmetric positive definite system using each of the three methods (Gauss-Seidel, Jacobi, and conjugate gradient). The results show that the conjugate gradient method converges to the exact solution in fewer iterations than the other two methods, which required many more iterations and much more time while only tending toward the exact solution.
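As a sketch of the comparison, the following minimal NumPy implementations contrast the iteration counts of the conjugate gradient and Jacobi methods on a hypothetical 3 x 3 SPD system (not the matrix used in the paper):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Linear conjugate gradient for a symmetric positive definite A."""
    x = x0.astype(float)
    r = b - A @ x              # residual
    p = r.copy()               # first search direction
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # optimal step along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)  # direction update coefficient
        p = r_new + beta * p
        r = r_new
    return x, max_iter

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = x0.astype(float)
    for k in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Hypothetical 3 x 3 SPD system (not the matrix from the paper)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x_cg, it_cg = conjugate_gradient(A, b, np.zeros(3))
x_j, it_j = jacobi(A, b, np.zeros(3))
```

In exact arithmetic CG terminates in at most n = 3 steps for a 3 x 3 system, while the Jacobi (and Gauss-Seidel) iterations only approach the solution geometrically, which is the behavior the abstract reports.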
Abstract: In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-founded predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e. situation awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN and effective predictions made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. Variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows estimation of missing variables. Experimental results show that the BBN performs more compelling predictions with samples containing uncertainties than with perfect samples. That is, Bayesian inference can help in handling the uncertainty and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agent data. The whole framework was tested on a multi-UAV mission for forest fire search. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.
Abstract: This paper presents the statement of the automatic speech recognition problem, the tasks of speech recognition, and its application fields. The principles of building a speech recognition system for the Azerbaijani language and the problems arising in such a system are investigated. The algorithms for computing speech features, the main part of a speech recognition system, are analyzed. From this point of view, algorithms for determining the Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) coefficients, which express the basic speech features, are developed. Combined use of MFCC and LPC cepstral coefficients in the speech recognition system is suggested to improve its reliability. To this end, the recognition system is divided into MFCC-based and LPC-based recognition subsystems. The training and recognition processes are carried out in both subsystems separately, and the system accepts a decision only when the results of the two subsystems agree. This reduces the error rate during recognition. The training and recognition processes are realized by artificial neural networks, which are trained by the conjugate gradient method. The problems caused by the number of speech features when training the neural networks of the MFCC- and LPC-based recognition subsystems are investigated. The variability of the results of neural networks trained from different initial points is analyzed. A methodology for the combined use of neural networks trained from different initial points is suggested to improve the reliability of the recognition system and increase the recognition quality, and the practical results obtained are shown.
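The LPC analysis stage mentioned above can be sketched with the standard autocorrelation method and Levinson-Durbin recursion (a textbook formulation, not the paper's specific algorithm); the hypothetical test signal below is a synthetic AR(2) process whose coefficients the recursion should recover:

```python
import numpy as np

def autocorrelation(x, order):
    """Autocorrelation of x for lags 0..order."""
    n = len(x)
    return np.array([x[:n - k] @ x[k:] for k in range(order + 1)])

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for the LPC predictor
    polynomial a (a[0] = 1) and the final prediction error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]   # correlation with current predictor
        k = -acc / err                        # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # update inner coefficients
        a[i] = k
        err *= (1.0 - k * k)                  # shrink prediction error
    return a, err

# Synthetic AR(2) signal: x[t] = 0.5 x[t-1] - 0.3 x[t-2] + noise
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]

a, err = levinson_durbin(autocorrelation(x, 2), 2)
# For this model the predictor polynomial is approximately [1, -0.5, 0.3]
```

The recovered coefficients are the negated AR coefficients, which is the usual sign convention for the prediction polynomial A(z).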
Abstract: Based on the conjugate gradient (CG) algorithm, the constrained matrix equation AXB = C and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the assumption of consistency. Convergence results for the method are presented. Finally, a numerical example is given to illustrate the efficiency of the method.
Abstract: The sound pressure level (SPL) of a moving-coil loudspeaker (MCL) is often simulated and analyzed using the lumped parameter model. However, the SPL of an MCL cannot be simulated precisely in the high-frequency region, because the effective cone area changes with the geometry variation of the different mode shapes, which in turn affects the acoustic radiation mass and resistance. This paper presents an inverse method that can accurately measure the effective cone area at various frequency points and estimate the MCL electroacoustic parameters simultaneously. The proposed inverse method comprises the direct problem, the adjoint problem, and the sensitivity problem, in combination with the nonlinear conjugate gradient method. Values estimated by the inverse method are validated experimentally against the measured SPL curve. The results presented in this paper not only improve the accuracy of the lumped parameter model but also provide valuable information for loudspeaker cone design.
Abstract: The conjugate gradient method has been used extensively to solve large-scale unconstrained optimization problems because of its low iteration count, small memory requirement, low CPU time, and convergence properties. In this paper we present a new class of nonlinear conjugate gradient coefficients whose global convergence is proved under exact line search. Numerical results show that our new βK performs well compared with the well-known formulas.
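Since the abstract does not give the new βK formula, the following sketch illustrates the general nonlinear CG iteration under exact line search using the classical Fletcher-Reeves coefficient on a hypothetical quadratic, where the exact line search step has a closed form:

```python
import numpy as np

def nonlinear_cg_fr(grad, x0, A, tol=1e-8, max_iter=200):
    """Nonlinear CG with the Fletcher-Reeves coefficient. For a
    quadratic f(x) = 0.5 x^T A x - b^T x, the exact line search
    step length is alpha = -(g^T d) / (d^T A d)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                                  # steepest descent start
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        alpha = -(g @ d) / (d @ (A @ d))    # exact line search (quadratic case)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x, max_iter

# Hypothetical quadratic test problem
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b
x_star, iters = nonlinear_cg_fr(grad, np.zeros(2), A)
```

A new βK would simply replace the Fletcher-Reeves line in this loop; the surrounding iteration and line search are unchanged.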
Abstract: An inverse geometry problem is solved to predict an unknown irregular boundary profile. The aim is to minimize the objective function, defined as the difference between the real and computed temperatures, using three different versions of the conjugate gradient method. The gradient of the objective function, required by this method, is obtained by solving the adjoint equation. The abilities of the three versions of the conjugate gradient method to predict the boundary profile are compared using a numerical algorithm based on the method. The predicted shapes show that, owing to its convergence rate and the accuracy of its predicted values, the Powell-Beale version of the method is more effective than the Fletcher-Reeves and Polak-Ribiere versions.
Abstract: The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient of the error with respect to the weight and gain values, and (3) determination of a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
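Steps (1) and (2) can be illustrated for a hypothetical single neuron (a minimal sketch, not the paper's full MLP): a logistic activation with an explicit gain c, and the gradients of the squared error with respect to both the weights and the gain, verified below against finite differences:

```python
import numpy as np

def sigmoid(z, c):
    """Logistic activation with an explicit gain parameter c."""
    return 1.0 / (1.0 + np.exp(-c * z))

def error_gradients(x, w, c, t):
    """Gradients of E = 0.5*(y - t)^2 for a single neuron
    y = sigmoid(w.x, c), with respect to the weights w and gain c."""
    z = w @ x
    y = sigmoid(z, c)
    delta = (y - t) * y * (1.0 - y)   # dE/dy times sigma'(c z) / c factor
    dE_dw = delta * c * x             # dy/dw_i = c * y * (1 - y) * x_i
    dE_dc = delta * z                 # dy/dc   = z * y * (1 - y)
    return dE_dw, dE_dc

# Hypothetical inputs for illustration
x = np.array([0.5, -1.0])
w = np.array([0.3, 0.7])
c, t = 1.5, 0.2
dE_dw, dE_dc = error_gradients(x, w, c, t)
```

Treating the gain c as a trainable quantity alongside the weights is exactly what makes the adaptive search-direction modification in step (3) possible.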
Abstract: This paper reports the results of a study on neural network training algorithms based on numerical optimization techniques for multi-face detection in static images. The training algorithms involved are scaled conjugate gradient backpropagation, conjugate gradient backpropagation with Polak-Ribiere updates, conjugate gradient backpropagation with Fletcher-Reeves updates, one-step secant backpropagation, and resilient backpropagation. The final results of each training algorithm for the multi-face detection application are also discussed and compared.
Abstract: Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of function minimization, where no explicit model of the data is assumed. Because of the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is not desirable; the focus is therefore on function minimization in an arbitrary basis. A number of methods based on local gradients and Hessian matrices are discussed, and modifications of many first- and second-order training methods are considered. Using share-rate data, it is shown experimentally that the conjugate gradient and quasi-Newton methods outperform the gradient descent methods, and that the Levenberg-Marquardt algorithm is of special interest for financial forecasting.
Abstract: We investigated the statistical performance of Bayesian inference using maximum entropy and MAP estimation for several models approximating wave-fronts in remote sensing by SAR interferometry. Using Monte Carlo simulation for a set of wave-fronts generated from an assumed true prior, we found that the maximum entropy method achieved optimal performance around the Bayes-optimal conditions when using the model of the true prior and the likelihood representing the optical measurement from the interferometer. We also found that MAP estimation, regarded as a deterministic limit of maximum entropy, achieved almost the same performance as the Bayes-optimal solution for this set of wave-fronts. We then clarified that MAP estimation carried out phase unwrapping perfectly without using prior information, and that it realized accurate phase unwrapping using the conjugate gradient (CG) method when the model of the true prior was chosen appropriately.
Abstract: This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the application to a pulping problem. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on PCG-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step size.
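A minimal sketch of the preconditioning idea, using plain CG with a diagonal (Jacobi) preconditioner rather than the PCGF/PCGP/PCGB training variants named above; the SPD system shown is hypothetical:

```python
import numpy as np

def pcg(A, b, m_inv, tol=1e-10, max_iter=200):
    """CG on the preconditioned system M^-1 A x = M^-1 b, where the
    preconditioner M is diagonal and m_inv holds the entries of M^-1."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv * r                # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Hypothetical SPD system with a badly scaled diagonal
A = np.diag([1.0, 10.0, 100.0]) + 0.1 * np.ones((3, 3))
b = np.array([1.0, 2.0, 3.0])
x, iters = pcg(A, b, 1.0 / np.diag(A))   # Jacobi preconditioner M = diag(A)
```

The Jacobi choice M = diag(A) is the simplest preconditioner that lowers the condition number of a badly scaled system; the PCG variants in the paper differ in how the search direction is updated, not in this outer structure.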
Abstract: Linear discriminant analysis (LDA) can be conveniently generalized into a nonlinear form, kernel LDA (KLDA), by using kernel functions. However, KLDA often leads to a generalized eigenvalue problem that is singular. To avoid this complication, this paper proposes an iterative algorithm for two-class KLDA. The proposed KLDA is used as a nonlinear discriminant classifier, and experiments show that its performance is comparable with that of SVM.
Abstract: Based on the conjugate gradient algorithm, a new consensus protocol for discrete-time multi-agent systems is presented which can achieve finite-time consensus. Finally, a numerical example is given to illustrate the theoretical result.
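For context, the standard discrete-time average consensus iteration on a hypothetical 4-agent path graph can be sketched as follows; this baseline protocol converges only asymptotically, whereas the paper's CG-based protocol reaches consensus in finite time:

```python
import numpy as np

def consensus(L, x0, eps, steps):
    """Discrete-time average consensus x_{k+1} = x_k - eps * L x_k,
    where L is the graph Laplacian and eps < 1 / max degree."""
    x = x0.astype(float)
    for _ in range(steps):
        x = x - eps * (L @ x)
    return x

# Hypothetical path graph on 4 agents: Laplacian L = D - A
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
x0 = np.array([1.0, 3.0, 5.0, 7.0])
x = consensus(L, x0, eps=0.25, steps=200)
# All agent states approach the average of x0, which is 4.0
```

Because L is symmetric, the average of the states is preserved at every step, so the common limit is the initial average.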
Abstract: In intensity-modulated radiation therapy (IMRT) treatment planning, beam angles are usually preselected on the basis of experience and intuition, so arriving at an appropriate beam configuration takes a very long time. To address this, the paper puts forward beam orientation optimization using ant colony optimization (ACO). We use ACO to select the beam configurations, and then use the conjugate gradient (CG) algorithm to optimize the intensity profiles for the selected configuration. By incorporating information on the effect of each pencil beam, the search for the global optimal solution is accelerated. To verify the feasibility of the presented method, simulated and clinical cases were tested and compared using dose-volume histograms and isodose lines for the target area and organs at risk. The results showed that the plan quality improved after the beam configurations were optimized. The optimization approach can make treatment planning meet clinical requirements more efficiently, and therefore has broad application prospects.
Abstract: There are two common methodologies for verifying signatures: the functional approach and the parametric approach. This paper presents a new approach to dynamic handwritten signature verification (HSV) using a neural network, with verification by a conjugate gradient neural network (NN). It is yet another avenue in the approach to HSV, and it is found to produce excellent results when compared with other dynamic methods. Experimental results show that the system is insensitive to the order of the base classifiers and achieves a high verification ratio.
Abstract: The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptron (MLP) networks (CGFR/AG). The approach presented in the paper consists of three steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient of the error with respect to the weight and gain values, and (3) determination of the new search direction by exploiting the information calculated in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on a chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
Abstract: This paper proposes a novel model for short-term load forecasting (STLF) in the electricity market. The prior electricity demand data are treated as time series. The model is composed of several neural networks whose input data are processed using a wavelet technique, and it is implemented as a simulation program written in MATLAB. The load data are decomposed into several wavelet coefficient series using the wavelet transform technique known as the Non-decimated Wavelet Transform (NWT), chosen for its ability to extract hidden patterns from time series data. The wavelet coefficient series are used to train the neural networks (NNs) and serve as the NN inputs for electricity load prediction. The Scaled Conjugate Gradient (SCG) algorithm is used as the learning algorithm for the NNs. To obtain the final forecast, the outputs from the NNs are recombined using the same wavelet technique. The model was evaluated with the electricity load data of the Electronic Engineering Department at Mandalay Technological University in Myanmar. The simulation results showed that the model is capable of producing reasonable forecasting accuracy in STLF.
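As an illustration of the non-decimated idea, a single level of a stationary Haar transform (a minimal sketch, not the paper's multi-level NWT) keeps the series length unchanged, so coefficients stay aligned with the original time axis, and reconstructs the signal perfectly:

```python
import numpy as np

def haar_swt_level1(x):
    """One level of a non-decimated (stationary) Haar transform
    with circular boundary handling: no downsampling, so both
    output series have the same length as the input."""
    xs = np.roll(x, 1)            # x shifted by one sample
    approx = (x + xs) / 2.0       # smooth (approximation) coefficients
    detail = (x - xs) / 2.0       # detail coefficients
    return approx, detail

# Hypothetical short load series for illustration
x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0])
a, d = haar_swt_level1(x)
x_rec = a + d                     # perfect reconstruction
```

The approximation and detail series here play the role of the wavelet coefficient series fed to the NNs, and the reconstruction step mirrors the recombination of NN outputs into the final forecast.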