Learning Algorithms for Fuzzy Inference Systems Composed of Double- and Single-Input Rule Modules

Most self-tuning fuzzy systems, which are constructed automatically from learning data, are based on the steepest descent method (SDM). However, this approach often requires a long convergence time and tends to get stuck in shallow local minima. One remedy is to use fuzzy rule modules with a small number of inputs, such as DIRMs (Double-Input Rule Modules) and SIRMs (Single-Input Rule Modules). In this paper, we consider a generalized DIRMs model composed of double- and single-input rule modules. Further, in order to remove redundant modules from the generalized DIRMs model, pruning and generative learning algorithms are proposed. To demonstrate their effectiveness, numerical simulations are performed on function approximation, the Box-Jenkins data set, and an obstacle avoidance problem.
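As a structural illustration only, the sketch below shows how the output of such a module-based model can be computed as an importance-weighted sum of small rule modules, each inferring from a single input (SIRM) or a pair of inputs (DIRM). Gaussian membership functions, the simplified weighted-average inference, and all parameter shapes and values are assumptions for the sketch, not the paper's exact formulation; the SDM would tune the importance weights, centers, widths, and consequents.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership value of x for fuzzy sets with centers c and widths s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def module_output(x_part, centers, widths, consequents):
    """Weighted-average (simplified) inference of one rule module.

    x_part          : the module's inputs (1 value for a SIRM, 2 for a DIRM)
    centers, widths : membership parameters, shape (n_rules, len(x_part))
    consequents     : rule consequent constants, shape (n_rules,)
    """
    # rule firing strength = product of membership values over the module's inputs
    mu = np.prod(gauss(x_part, centers, widths), axis=1)
    return np.dot(mu, consequents) / (np.sum(mu) + 1e-12)

def dirms_output(x, modules, weights):
    """Generalized DIRMs output: importance-weighted sum of module outputs."""
    y = 0.0
    for w, (idx, centers, widths, consequents) in zip(weights, modules):
        y += w * module_output(x[idx], centers, widths, consequents)
    return y

# example: one DIRM on inputs (x1, x2) and one SIRM on x3, illustrative parameters
x = np.array([0.2, -0.5, 0.7])
modules = [
    ([0, 1], np.zeros((4, 2)), np.ones((4, 2)), np.array([0.1, -0.3, 0.6, 0.2])),
    ([2],    np.zeros((3, 1)), np.ones((3, 1)), np.array([0.5, -0.1, 0.0])),
]
weights = [1.0, 0.4]
print(dirms_output(x, modules, weights))
```

A pruning step of the kind the abstract mentions could, for instance, drop modules whose importance weight stays below a threshold during learning, while a generative step adds a module when the approximation error stagnates.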

Steepest Descent Method with New Step Sizes

The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag pattern of its steps. Barzilai and Borwein modified the algorithm so that it performs well on large-dimensional problems, and their results have sparked a great deal of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by these works, we modify the step size of the steepest descent method and compare the result against the Barzilai-Borwein method, the alternate minimization gradient method, and the Yuan method on quadratic test problems, in terms of the number of iterations and the running time. On average, the steepest descent method with the new step sizes performs well for small dimensions and is competitive with the Barzilai-Borwein and alternate minimization gradient methods for large dimensions. The new step sizes converge faster than the other methods, especially for large-dimensional cases.
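The paper's new step sizes are not reproduced here; as a baseline for the comparison it describes, the sketch below contrasts the classical exact line-search step with the Barzilai-Borwein (BB1) step on a quadratic f(x) = 0.5 x^T A x - b^T x, which is the setting of the reported experiments.

```python
import numpy as np

def grad(A, b, x):
    # gradient of the quadratic f(x) = 0.5 x^T A x - b^T x
    return A @ x - b

def sd_exact(A, b, x, tol=1e-8, max_iter=20000):
    """Classical steepest descent with the exact line-search step for a quadratic."""
    for k in range(max_iter):
        g = grad(A, b, x)
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (g @ (A @ g))       # exact minimizer along -g
        x = x - alpha * g
    return x, k

def sd_barzilai_borwein(A, b, x, tol=1e-8, max_iter=20000):
    """Steepest descent with the Barzilai-Borwein (BB1) step size."""
    g_old, x_old = grad(A, b, x), x.copy()
    x = x - 1e-3 * g_old                      # small initial step
    for k in range(max_iter):
        g = grad(A, b, x)
        if np.linalg.norm(g) < tol:
            break
        s, y = x - x_old, g - g_old
        alpha = (s @ s) / (s @ y)             # BB1 step length
        x_old, g_old = x.copy(), g.copy()
        x = x - alpha * g
    return x, k

# quick comparison on an ill-conditioned diagonal quadratic
n = 500
A = np.diag(np.linspace(1.0, 200.0, n))
b = np.ones(n)
x0 = np.zeros(n)
print("exact-step iterations:", sd_exact(A, b, x0)[1])
print("BB1 iterations:       ", sd_barzilai_borwein(A, b, x0)[1])
```

On ill-conditioned problems the BB1 step typically needs far fewer iterations than the zigzagging exact-step version, which is the behavior that motivated this line of work.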

Controllability of Efficiency of Antiviral Therapy in Hepatitis B Virus Infections

An optimal control problem for a mathematical model of the efficiency of antiviral therapy in hepatitis B virus infections is considered. The aim of the study is to control new viral production, block the infection of new cells, and maintain the number of uninfected cells within a given range. The optimal controls represent the efficiency of antiviral therapy in inhibiting viral production and in preventing new infections. By defining a cost functional, the optimal control problem is converted into a constrained optimization problem and the first-order optimality system is derived. For the numerical solution, we propose a steepest descent algorithm based on the adjoint variable method. A computer program in MATLAB is developed for the numerical simulations.
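A minimal sketch of the forward-backward structure such an adjoint-based steepest descent takes, assuming a basic three-compartment virus-dynamics model (uninfected cells T, infected cells I, free virus V) with controls eps (inhibiting viral production) and eta (preventing new infections) and a quadratic control cost; the model form, parameter values, and weights below are illustrative assumptions, not the paper's actual system or data.

```python
import numpy as np

# illustrative basic virus-dynamics model:
#   T' = s - d*T - (1-eta)*beta*V*T
#   I' = (1-eta)*beta*V*T - delta*I
#   V' = (1-eps)*p*I - c*V
s, d, beta, delta, p, c = 10.0, 0.01, 5e-5, 0.5, 100.0, 3.0
A, B1, B2 = 1.0, 50.0, 50.0        # weights in J = integral of A*V + B1/2*eps^2 + B2/2*eta^2
Tf, N = 50.0, 5000                 # time horizon and grid size
dt = Tf / N
eps = np.full(N, 0.5); eta = np.full(N, 0.5)

def forward(eps, eta):
    x = np.zeros((N + 1, 3)); x[0] = [1000.0, 10.0, 1000.0]
    for k in range(N):
        T, I, V = x[k]
        dT = s - d*T - (1 - eta[k])*beta*V*T
        dI = (1 - eta[k])*beta*V*T - delta*I
        dV = (1 - eps[k])*p*I - c*V
        x[k + 1] = x[k] + dt*np.array([dT, dI, dV])
    return x

def backward(x, eps, eta):
    lam = np.zeros((N + 1, 3))     # adjoint variables, terminal condition lam(Tf) = 0
    for k in range(N - 1, -1, -1):
        T, I, V = x[k + 1]
        lT, lI, lV = lam[k + 1]
        dlT = -(lT*(-d - (1 - eta[k])*beta*V) + lI*(1 - eta[k])*beta*V)
        dlI = -(-lI*delta + lV*(1 - eps[k])*p)
        dlV = -(A + (lI - lT)*(1 - eta[k])*beta*T - lV*c)
        lam[k] = lam[k + 1] - dt*np.array([dlT, dlI, dlV])
    return lam

step = 1e-3
for it in range(200):              # steepest descent on the discretized controls
    x = forward(eps, eta)
    lam = backward(x, eps, eta)
    T, I, V = x[:N, 0], x[:N, 1], x[:N, 2]
    g_eps = B1*eps - lam[:N, 2]*p*I                       # dH/d(eps)
    g_eta = B2*eta + (lam[:N, 0] - lam[:N, 1])*beta*V*T   # dH/d(eta)
    eps = np.clip(eps - step*g_eps, 0.0, 1.0)             # project onto [0, 1]
    eta = np.clip(eta - step*g_eta, 0.0, 1.0)
```

Each iteration solves the state equations forward in time, solves the adjoint equations backward from a zero terminal condition, forms the gradient of the Hamiltonian with respect to the controls, and takes a projected descent step onto the admissible range [0, 1].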

Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function

Image compression using artificial neural networks is a topic where research is being carried out in various directions toward achieving a generalized and economical network. Feedforward networks trained with the backpropagation algorithm, which uses steepest descent for error minimization, are popular, widely adopted, and directly applied to image compression. Various research efforts are directed toward achieving quick convergence of the network without loss of quality in the restored image. In general, the images to be compressed are of different types, such as dark images and high-intensity images. When these images are compressed using a backpropagation network, the network takes longer to converge. The reason is that the given image may contain many distinct gray levels that differ only slightly from their neighboring pixels. If the gray levels of the pixels and their neighbors are mapped so that the difference between each pixel and its neighbors is minimized, both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the backpropagation neural network yields a high compression ratio and converges quickly.
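A small sketch of the preprocessing step described above: the empirical cumulative distribution function of the gray levels is estimated from the image histogram and used to remap the pixels before they are fed, block by block, to the compression network. The 8-bit assumption and the function name are illustrative.

```python
import numpy as np

def cdf_map(image):
    """Map 8-bit gray levels through the image's empirical CDF before compression."""
    hist = np.bincount(image.ravel(), minlength=256)   # gray-level histogram
    cdf = np.cumsum(hist) / image.size                 # empirical CDF in [0, 1]
    mapped = np.round(cdf[image] * 255).astype(np.uint8)
    return mapped, cdf

# e.g. on a synthetic 8-bit test image with a skewed gray-level distribution
img = (np.random.rand(64, 64) ** 2 * 255).astype(np.uint8)
mapped, cdf = cdf_map(img)
# the mapped image (scaled to [0, 1]) would then be split into blocks and fed
# to the backpropagation compression network
```

The mapping (or its lookup table) would have to be kept alongside the compressed data if the original gray levels are to be recovered approximately after reconstruction.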

Optimal Control of Viscoelastic Melt Spinning Processes

An optimal control problem for the viscoelastic melt spinning process has not yet been reported in the literature. In this study, such a problem is considered for a mathematical model of a viscoelastic melt spinning process. The Maxwell-Oldroyd model is used to describe the rheology of the polymeric material the fiber is made of. The extrusion velocity of the polymer at the spinneret, the velocity and temperature of the quench air, and the fiber length serve as control variables. A constrained optimization problem is derived and the first-order optimality system is set up to obtain the adjoint equations. Numerical solutions are computed using a steepest descent algorithm. A computer program in MATLAB is developed for the simulations.
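The abstract does not spell out the discretization or step-size rule, so the sketch below only shows the shape of the descent loop over a vector of controls: fiber_cost and fiber_gradient are hypothetical placeholders for a forward solve of the spinning equations and a backward solve of the adjoint system, replaced here by a toy quadratic so the loop runs stand-alone, and the step is chosen by Armijo backtracking.

```python
import numpy as np

# u collects the controls (extrusion velocity, quench-air velocity and temperature,
# fiber length); all numbers are illustrative only.
u_target = np.array([1.5, 0.4, 310.0, 1.2])
u = np.array([1.0, 0.3, 300.0, 1.0])          # initial guess for the controls

def fiber_cost(u):
    # placeholder for evaluating the objective via a forward solve
    return 0.5 * np.sum((u - u_target) ** 2)

def fiber_gradient(u):
    # placeholder for the gradient obtained from the adjoint variables
    return u - u_target

for it in range(100):
    g = fiber_gradient(u)
    if np.linalg.norm(g) < 1e-8:
        break
    step, J = 1.0, fiber_cost(u)
    # Armijo backtracking: shrink the step until a sufficient decrease is achieved
    while fiber_cost(u - step * g) > J - 1e-4 * step * np.dot(g, g):
        step *= 0.5
    u = u - step * g
```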

Determination of the Proper Quality Costs Parameters via Variable Step Size Steepest Descent Algorithm

This paper presents the determination of the proper quality cost parameters that provide the optimum return. A system dynamics simulation was applied, with the simulation model constructed from real data from an electronic devices manufacturer in Thailand. The steepest descent algorithm was employed for the optimization. The experimental results show that the company should spend 850 and 10 Baht/day on prevention and appraisal activities, respectively, which yields the minimum cumulative total quality cost of 258,000 Baht over twelve months. The effect of the step size used when moving the variables toward the optimum was also investigated. A smaller step size produced a better result at the cost of more experimental runs; however, the difference is not significant in practice. Therefore, the larger step size is recommended, because the region of the optimum can be reached more easily and rapidly.
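Since the objective here comes from a system dynamics simulation rather than a closed-form function, a steepest descent search of this kind would estimate gradients by perturbing the two decision variables and shrink the step size once a move no longer improves the result, which is the step-size effect the study examines. In the sketch below, cumulative_quality_cost is a hypothetical quadratic stand-in for the twelve-month simulation; its shape and numbers are illustrative only.

```python
import numpy as np

def cumulative_quality_cost(prevention, appraisal):
    # stand-in for running the system dynamics simulation over twelve months
    return 2.5e5 + 0.4 * (prevention - 850.0) ** 2 + 3.0 * (appraisal - 10.0) ** 2

x = np.array([100.0, 100.0])   # initial spend (Baht/day) on prevention, appraisal
step = 100.0                   # initial step size of the search
h = 1.0                        # perturbation for finite-difference gradients

while step > 0.1:
    # estimate the gradient by perturbing each decision variable in the simulation
    g = np.array([
        (cumulative_quality_cost(x[0] + h, x[1]) - cumulative_quality_cost(x[0] - h, x[1])) / (2 * h),
        (cumulative_quality_cost(x[0], x[1] + h) - cumulative_quality_cost(x[0], x[1] - h)) / (2 * h),
    ])
    candidate = np.maximum(x - step * g / np.linalg.norm(g), 0.0)
    if cumulative_quality_cost(*candidate) < cumulative_quality_cost(*x):
        x = candidate          # accept the move at the current step size
    else:
        step /= 2.0            # otherwise shrink the step and try again

print("recommended spend (prevention, appraisal):", x)
```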

Artificial Neural Network with Steepest Descent Backpropagation Training Algorithm for Modeling Inverse Kinematics of Manipulator

Inverse kinematics analysis plays an important role in developing a robot manipulator, but it is not easy to derive the inverse kinematic equations of a manipulator, especially one with many degrees of freedom. This paper describes an application of an artificial neural network for modeling the inverse kinematics of a robot manipulator. In this case, the robot has three degrees of freedom and is used for drilling printed circuit boards. The architecture used for the modeling is a multilayer perceptron trained with the steepest descent backpropagation algorithm. The designed network has 2 inputs and 2 outputs, with a varying hidden layer configuration. Experiments were carried out with different numbers of hidden layers and learning rates. The results show that the best architecture for modeling the inverse kinematics is a multilayer perceptron with one hidden layer of 38 neurons, which achieved an RMSE of 0.01474.
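A compact sketch of such a network, assuming the two inputs are a target tool position (x, y) and the two outputs are joint angles: a 2-38-2 multilayer perceptron trained with plain steepest-descent backpropagation on data generated from a planar two-link arm, which stands in for the paper's drilling manipulator and its data set (link lengths, sample counts, and learning rate are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
l1, l2 = 0.3, 0.2
theta = rng.uniform(0.0, np.pi / 2, size=(2000, 2))           # sampled joint angles
X = np.column_stack([l1*np.cos(theta[:, 0]) + l2*np.cos(theta[:, 0] + theta[:, 1]),
                     l1*np.sin(theta[:, 0]) + l2*np.sin(theta[:, 0] + theta[:, 1])])
Y = theta                                                     # targets: joint angles

W1 = rng.normal(0, 0.5, (2, 38)); b1 = np.zeros(38)           # 2 inputs -> 38 hidden
W2 = rng.normal(0, 0.5, (38, 2)); b2 = np.zeros(2)            # 38 hidden -> 2 outputs
lr = 0.05                                                     # steepest-descent learning rate

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                                  # hidden layer
    P = H @ W2 + b2                                           # linear output layer
    E = P - Y                                                 # prediction error
    # backpropagate the mean-squared error and take a steepest-descent step
    dW2 = H.T @ E / len(X);  db2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1 - H**2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print("training RMSE:", rmse)
```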