Abstract: This paper addresses certain inherent limitations of
local priority hysteresis switching logic. Our main result establishes
that, under a persistent excitation assumption, it is possible to
relax the constraints requiring strict positivity of the local priority
and hysteresis switching constants. Relaxing these constraints allows
the adaptive system to reach optimality, which in turn improves
performance. The unconstrained local priority hysteresis switching
logic is examined and conditions for global convergence are derived.
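The switching rule discussed above can be illustrated by a minimal sketch of classical hysteresis switching among candidate controllers; the monitoring signals, the margin `h`, and the omission of the local priority weighting are all simplifying assumptions, not the paper's exact logic (which studies relaxing the strict positivity of these constants):

```python
# Minimal sketch of hysteresis switching: keep the active controller
# index sigma unless some other candidate's monitoring signal beats
# the active one by the hysteresis margin h.  The classical logic
# requires h > 0; the local priority weighting is omitted here.
def hysteresis_switch(sigma, mu, h=0.1):
    """mu: list of monitoring signals (lower is better)."""
    best = min(range(len(mu)), key=mu.__getitem__)
    if (1 + h) * mu[best] < mu[sigma]:
        return best       # clear improvement over the margin: switch
    return sigma          # otherwise keep the current controller

# Candidate 1 beats candidate 0 by more than the 10% margin, so we switch.
new_sigma = hysteresis_switch(0, [0.5, 0.2, 0.9])
```

The margin `(1 + h)` is what prevents chattering: a candidate that is only marginally better than the active controller does not trigger a switch.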
Abstract: The conjugate gradient method has been widely used
to solve large-scale unconstrained optimization problems because of
its low iteration count, modest memory requirements, low CPU time,
and good convergence properties. In this paper we derive a new class
of nonlinear conjugate gradient coefficients and prove their global
convergence under exact line search. Numerical results for our new
βk compare favorably with well-known formulas.
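The abstract's new βk formula is not given here, so the following sketch uses the classical Fletcher–Reeves coefficient as a stand-in, applied to a quadratic objective where the exact line search has a closed form; the test matrix `A` and vector `b` are illustrative choices:

```python
import numpy as np

def nonlinear_cg(A, b, x0, tol=1e-8, max_iter=200):
    """Nonlinear CG on f(x) = 0.5 x^T A x - b^T x (A symmetric
    positive definite), with Fletcher-Reeves beta_k standing in for
    the paper's coefficient.  For this quadratic the exact line
    search is alpha = -g^T d / (d^T A d)."""
    x = x0.astype(float)
    g = A @ x - b                 # gradient of f at x
    d = -g                        # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)     # exact line search
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves beta_k
        d = -g_new + beta * d              # conjugate direction update
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = nonlinear_cg(A, b, np.zeros(2))   # minimizer solves A x = b
```

On an n-dimensional quadratic, exact-line-search CG terminates in at most n iterations, which is the baseline behavior any new βk must preserve.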
Abstract: In this article, a new inexact alternating direction method (ADM) is proposed for solving a class of variational inequality problems. At each iteration, the new method first solves the resulting ADM subproblems approximately to generate a temporary point x̃k, and then the multiplier yk is updated to obtain the new iterate yk+1. To obtain xk+1, we adopt a new descent direction that is simple compared with the existing prediction-correction type ADMs. For the inexact ADM, the resulting proximal subproblem has a closed-form solution when the proximal parameter and inexact term are chosen appropriately. We demonstrate the efficiency of the inexact ADM through some preliminary numerical experiments.
Abstract: In this paper, we focus on the alternating direction method, which is one of the most effective methods for solving structured variational inequalities (VI). We propose a proximal parallel alternating direction method that only needs to solve two strongly monotone sub-VI problems at each iteration. Convergence of the new method is proved under mild assumptions. We also present some preliminary numerical results, which indicate that the new method is quite efficient.
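The alternating direction scheme underlying both abstracts can be illustrated on a toy separable problem where each subproblem has a closed-form solution; the problem data `a`, `b`, the penalty `rho`, and the scaled-multiplier form are illustrative assumptions, not the papers' methods:

```python
import numpy as np

def admm_toy(a, b, rho=1.0, iters=100):
    """Scaled-form alternating direction method for
        min 0.5||x - a||^2 + 0.5||z - b||^2  s.t.  x = z,
    whose solution is x = z = (a + b) / 2.  Both subproblems are
    strongly convex quadratics with closed-form minimizers."""
    x = np.zeros_like(a)
    z = np.zeros_like(b)
    u = np.zeros_like(a)   # scaled Lagrange multiplier
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1 + rho)   # x-subproblem
        z = (b + rho * (x + u)) / (1 + rho)   # z-subproblem
        u = u + x - z                          # multiplier update
    return x, z

a = np.array([1.0, 3.0])
b = np.array([3.0, 5.0])
x, z = admm_toy(a, b)   # both converge to (a + b) / 2
```

The appeal of the method, as both abstracts note, is exactly this structure: each pass solves two easy subproblems instead of one coupled problem, and the multiplier update ties them together.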
Abstract: This paper proposes an optimization of neural network
weights and topology using genetic evolution and the
backpropagation training algorithm. The proposed crossover and
mutation operators aim to adapt the network architectures and
weights during the evolution process. Through a specific inheritance
procedure, the weights are transmitted from the parents to their
offspring, which allows re-exploitation of the already trained
networks and hence accelerates the global convergence of the
algorithm. In the preprocessing phase, a new feature extraction
method is proposed based on Legendre moments with the maximum
entropy principle (MEP) as a selection criterion. This allows a global
reduction of the search space in the design of the networks. The
proposed method has been applied and tested on the well-known
MNIST database of handwritten digits.
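The weight-inheritance idea above can be sketched minimally: offspring receive whole trained layers from their parents instead of random initial weights. The layer shapes, single-point layer crossover, and Gaussian mutation parameters are illustrative assumptions, not the paper's exact operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b):
    """Single-point crossover at a layer boundary: the offspring
    inherits whole (already trained) layers from each parent, so
    evolution restarts from trained networks, not random ones."""
    point = rng.integers(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(weights, sigma=0.05, rate=0.1):
    """Perturb a random fraction `rate` of each layer's weights
    with Gaussian noise of scale `sigma`."""
    child = []
    for W in weights:
        mask = rng.random(W.shape) < rate
        child.append(W + mask * rng.normal(0.0, sigma, W.shape))
    return child

# Two "trained" parents with the same topology: lists of layer matrices.
parent_a = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
parent_b = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
child = mutate(crossover(parent_a, parent_b))
```

In the full method each child would then be fine-tuned with backpropagation before fitness evaluation, which is what makes the inherited weights worth preserving.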