Conjugate Gradient Algorithm for the Symmetric Arrowhead Solution of Matrix Equation AXB=C

Based on the conjugate gradient (CG) algorithm, the constrained matrix equation AXB=C and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the assumption of consistency. Convergence results for the method are presented. Finally, a numerical example is given to illustrate the efficiency of the method.
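As a rough illustration of how such an iteration can be organized, the sketch below (our assumption, not the authors' implementation) applies a CGLS-style matrix iteration to min ||AXB - C||_F while keeping every iterate in the symmetric arrowhead subspace; the arrowhead "head" is assumed to sit in the (1,1) position.

```python
import numpy as np

def arrowhead_project(M):
    """Orthogonal projection onto symmetric arrowhead matrices
    (nonzeros only on the diagonal, first row, and first column)."""
    S = 0.5 * (M + M.T)                  # symmetrize
    P = np.zeros_like(S)
    P[0, :] = S[0, :]
    P[:, 0] = S[:, 0]
    np.fill_diagonal(P, np.diag(S))
    return P

def cg_arrowhead(A, B, C, tol=1e-10, max_iter=1000):
    """CGLS-style iteration for min ||A X B - C||_F over symmetric
    arrowhead X (X is n-by-n, with A m-by-n and B n-by-p)."""
    n = A.shape[1]
    X = np.zeros((n, n))
    R = C - A @ X @ B                     # residual of AXB = C
    G = arrowhead_project(A.T @ R @ B.T)  # gradient, projected
    D = G.copy()
    g2 = np.sum(G * G)
    for _ in range(max_iter):
        if np.sqrt(g2) < tol:
            break
        AD = A @ D @ B
        alpha = g2 / np.sum(AD * AD)      # exact line search step
        X = X + alpha * D
        R = R - alpha * AD
        G = arrowhead_project(A.T @ R @ B.T)
        g2_new = np.sum(G * G)
        D = G + (g2_new / g2) * D         # Fletcher-Reeves update
        g2 = g2_new
    return X
```

Because the gradient is re-projected at every step, all iterates remain symmetric arrowhead matrices, and the finite-termination property of CG holds on the restricted subspace.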

An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

The conjugate gradient optimization algorithm is combined with a modified back propagation algorithm to yield a computationally efficient algorithm (CGFR/AG) for training multilayer perceptron (MLP) networks. Computational efficiency is enhanced by adaptively modifying the initial search direction in three steps: (1) the standard back propagation algorithm is modified by introducing a gain variation term in the activation function, (2) the gradient of the error is calculated with respect to the weights and gain values, and (3) a new search direction is determined from the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm in the MATLAB neural network toolbox. The results show that the proposed method is computationally more efficient than the standard conjugate gradient algorithm.
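The gain variation term of step (1) can be made concrete with a small sketch (ours, not the authors' code): the logistic activation carries an explicit gain parameter, and the error gradient of step (2) is taken with respect to both the weights and the gain.

```python
import numpy as np

def sigmoid(z, gain):
    # Logistic activation with an explicit gain: f(z) = 1 / (1 + exp(-gain*z))
    return 1.0 / (1.0 + np.exp(-gain * z))

def error_grads(W, gain, x, t):
    # Squared-error gradients for one sigmoid layer, taken with respect
    # to both the weights and the gain (step 2). Shapes are illustrative.
    z = W @ x
    y = sigmoid(z, gain)
    e = y - t                                     # output error
    dE_dW = np.outer(e * gain * y * (1 - y), x)   # gradient w.r.t. weights
    dE_dgain = np.sum(e * y * (1 - y) * z)        # gradient w.r.t. gain
    return dE_dW, dE_dgain
```

A larger gain steepens the activation, so adapting it alongside the weights changes the effective step length; the search direction built from these gradients is then what step (3) modifies.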

An Iterative Algorithm for the KLDA Classifier

Linear discriminant analysis (LDA) can be conveniently generalized to a nonlinear form, kernel LDA (KLDA), by means of kernel functions. However, KLDA usually leads to a generalized eigenvalue problem that is singular in practice. To avoid this complication, this paper proposes an iterative algorithm for two-class KLDA. The proposed KLDA is used as a nonlinear discriminant classifier, and experiments show that its performance is comparable to that of an SVM.
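The abstract does not spell out the iteration, but a common way to obtain a two-class KLDA solution without forming the possibly singular generalized eigenvalue problem is to solve the regularized kernel Fisher linear system iteratively, for example with conjugate gradients. The sketch below is our assumed rendering of that idea; the regularization mu and the centering scheme are chosen for illustration.

```python
import numpy as np
from scipy.sparse.linalg import cg

def kfd_coefficients(K, y, mu=1e-3):
    # Two-class kernel Fisher discriminant: alpha solves
    # (N + mu*I) alpha = m1 - m0, where N is the within-class scatter
    # in kernel space. K is the (n, n) kernel matrix, y has labels {0, 1}.
    n = K.shape[0]
    means = []
    N = mu * np.eye(n)                  # regularizer keeps N nonsingular
    for c in (0, 1):
        Kc = K[:, y == c]               # kernel columns of class c
        nc = Kc.shape[1]
        means.append(Kc.mean(axis=1))   # class mean in feature space
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    alpha, _ = cg(N, means[1] - means[0])   # iterative (CG) solve
    return alpha
```

A test point x is then scored as the sum of alpha_i * k(x_i, x) plus a threshold, e.g. chosen midway between the projected class means.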

A Finite-Time Consensus Protocol for Multi-Agent Systems

Based on the conjugate gradient algorithm, a new consensus protocol for discrete-time multi-agent systems is presented that achieves consensus in finite time. Finally, a numerical example is given to illustrate the theoretical result.
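One natural reading of the connection (our interpretation, not necessarily the authors' exact protocol) is that applying CG to the disagreement function 0.5 * x^T L x, where L is the graph Laplacian, drives the agent states to average consensus in finitely many steps, since CG terminates finitely. A minimal sketch:

```python
import numpy as np

def cg_consensus(L, x0, tol=1e-12):
    # Minimize the disagreement 0.5 * x^T L x by conjugate gradients.
    # CG terminates in at most as many steps as L has distinct
    # eigenvalues, which yields consensus in finite time.
    x = x0.astype(float).copy()
    g = L @ x                        # gradient = Laplacian disagreement
    d = -g
    for _ in range(len(x0)):
        if np.linalg.norm(g) < tol:
            break                    # Lx = 0: consensus reached
        alpha = (g @ g) / (d @ (L @ d))
        x = x + alpha * d
        g_new = L @ x
        d = -g_new + ((g_new @ g_new) / (g @ g)) * d
        g = g_new
    return x

# Path graph on 4 agents: states converge to the average (3.0).
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
print(cg_consensus(L, np.array([4., 0., 2., 6.])))
```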

Beam Orientation Optimization Using Ant Colony Optimization in Intensity Modulated Radiation Therapy

In intensity modulated radiation therapy (IMRT) treatment planning, beam angles are usually preselected on the basis of experience and intuition, so finding an appropriate beam configuration can take a very long time. This paper therefore proposes beam orientation optimization using ant colony optimization (ACO). ACO selects candidate beam configurations, and for each configuration the conjugate gradient (CG) algorithm optimizes the intensity profiles. By incorporating information on the effect of each pencil beam, convergence toward the global optimum is accelerated. To verify the feasibility of the method, simulated and clinical cases were tested, and dose-volume histograms and isodose lines for the target volume and organs at risk were compared. The results show that plan quality improves after the beam configuration is optimized. The approach allows treatment planning to meet clinical requirements more efficiently and therefore has broad application prospects.
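The ACO layer of such a method can be sketched as follows (our assumed structure, not the paper's code): pheromone is kept on a grid of candidate gantry angles, each ant samples a beam configuration, the configuration is scored, and the pheromone on the best configuration is reinforced. The plan-quality score is stubbed out here; in the paper, this is exactly where the CG optimization of the intensity profiles and the dose-volume evaluation would run.

```python
import numpy as np

rng = np.random.default_rng(0)
ANGLES = np.arange(0, 360, 10)      # candidate gantry angles (assumed grid)
N_BEAMS, N_ANTS, RHO = 5, 20, 0.1   # beams per plan, ants, evaporation rate

def plan_score(beams):
    # Placeholder objective: reward large minimum angular separation.
    # In the paper, CG would optimize the intensity profiles for these
    # beams and a dose-based objective would be evaluated instead.
    d = np.abs(np.subtract.outer(beams, beams))
    d = np.minimum(d, 360 - d)      # circular angular distance
    return d[np.triu_indices_from(d, k=1)].min()

tau = np.ones(len(ANGLES))          # pheromone per candidate angle
best_set, best_score = None, -np.inf
for _ in range(50):                 # colony iterations
    for _ in range(N_ANTS):
        idx = rng.choice(len(ANGLES), size=N_BEAMS, replace=False,
                         p=tau / tau.sum())
        score = plan_score(ANGLES[idx])
        if score > best_score:
            best_set, best_score = ANGLES[idx], score
    tau *= 1.0 - RHO                # evaporation
    tau[np.isin(ANGLES, best_set)] += 1.0   # reinforce the best plan
print(np.sort(best_set), best_score)
```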

An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

The conjugate gradient optimization algorithm, commonly used for nonlinear least squares, is combined with a modified back propagation algorithm to yield a new, fast algorithm (CGFR/AG) for training multilayer perceptron (MLP) networks. The approach consists of three steps: (1) the standard back propagation algorithm is modified by introducing a gain variation term in the activation function, (2) the gradient of the error is calculated with respect to the weights and gain values, and (3) the new search direction is determined from the gradient computed in step (2) together with the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on chosen benchmarks. The results show that the number of iterations the proposed method requires to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
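Step (3) is what distinguishes the scheme from plain gradient descent: the new direction mixes the current weight-and-gain gradient with the previous direction. A minimal Fletcher-Reeves sketch over the stacked parameter vector, with the periodic restart to steepest descent that such schemes commonly use (the restart interval is our assumption):

```python
import numpy as np

def fr_direction(grad, prev_grad, prev_dir, k, restart=10):
    # Fletcher-Reeves conjugate direction over the flattened
    # weight-and-gain gradient from step (2).
    if prev_dir is None or k % restart == 0:
        return -grad                                 # (re)start: steepest descent
    beta = (grad @ grad) / (prev_grad @ prev_grad)   # Fletcher-Reeves beta
    return -grad + beta * prev_dir

# The step-(2) gradients are stacked into one parameter vector, e.g.:
# grad = np.concatenate([dE_dW.ravel(), np.atleast_1d(dE_dgain)])
```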