Trajectory-Based Modified Policy Iteration

This paper presents a new approach for computing optimal policies for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model, together with approximate cost-to-go values, by generating finite-length trajectories through the state space. The approach creates a synergy between the evolving approximate model and the approximate cost-to-go values to produce a sequence of improving policies that converges to the optimal policy through an intelligent, structured search of the policy space. The policy update step of policy iteration is modified to yield fast and stable convergence to the optimal policy. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches, namely (a) Q-learning, (b) Watkins' Q(λ), and (c) SARSA(λ).
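To make the interplay between trajectory sampling, model building, and policy updates concrete, the following minimal Python/NumPy sketch interleaves the three steps on a tabular problem. It is not the paper's implementation; the simulator interface `simulate_step(s, a)`, the exploration rate, the number of evaluation sweeps, and all other parameters are illustrative assumptions.

```python
import numpy as np

def trajectory_based_mpi(simulate_step, n_states, n_actions, start_state,
                         gamma=0.95, n_trajectories=200, horizon=50,
                         eval_sweeps=5, seed=0):
    """Sketch: build an empirical MDP model from finite-length trajectories and
    interleave partial (modified) policy evaluation with greedy policy improvement.
    `simulate_step(s, a)` is assumed to return (next_state, cost)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_states, n_actions, n_states))   # transition counts
    cost_sum = np.zeros((n_states, n_actions))            # accumulated stage costs
    J = np.zeros(n_states)                                # cost-to-go estimate
    policy = rng.integers(n_actions, size=n_states)       # arbitrary initial policy

    for _ in range(n_trajectories):
        # Generate one finite-length trajectory, exploring around the current policy.
        s = start_state
        for _ in range(horizon):
            a = policy[s] if rng.random() > 0.1 else rng.integers(n_actions)
            s_next, cost = simulate_step(s, a)
            counts[s, a, s_next] += 1
            cost_sum[s, a] += cost
            s = s_next

        # Empirical model from visit counts (uniform where a pair is unvisited).
        visits = counts.sum(axis=2)
        P = np.where(visits[:, :, None] > 0,
                     counts / np.maximum(visits[:, :, None], 1),
                     1.0 / n_states)
        g = cost_sum / np.maximum(visits, 1)

        # Modified policy iteration: a few evaluation sweeps, then a greedy update.
        for _ in range(eval_sweeps):
            J = g[np.arange(n_states), policy] + \
                gamma * np.einsum('sj,j->s', P[np.arange(n_states), policy], J)
        Q = g + gamma * np.einsum('saj,j->sa', P, J)
        policy = Q.argmin(axis=1)   # costs are minimized

    return policy, J
```

In this sketch the truncated evaluation sweeps play the role of the modified policy update: the cost-to-go estimate is only partially evaluated under the current policy before the next improvement step, trading exactness per iteration for faster overall progress.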

Design of Multiplier-free State-Space Digital Filters

In this paper, a novel approach is presented for designing multiplier-free state-space digital filters. The multiplier-free design is obtained by restricting the filter coefficients to powers of two and quantizing the state variables to power-of-two numbers. Expressions for the noise variance of the quantized state vector and of the filter output are derived, and a "structure-transformation matrix" is incorporated in these expressions. It is shown that quantization effects can be minimized by properly designing the structure-transformation matrix. Simulation results are promising and illustrate the design algorithm.
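The core operation, rounding values to signed powers of two so that multiplications reduce to shifts, can be illustrated with the following Python/NumPy sketch. The exponent range, the re-quantization of the state after every update, and the SISO state-space form are assumptions for illustration and do not reproduce the paper's design procedure or its structure-transformation matrix.

```python
import numpy as np

def nearest_power_of_two(x, min_exp=-8, max_exp=2):
    """Round each entry of x to the nearest signed power of two (zero stays zero).
    The exponent range is an illustrative assumption."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0
    exp = np.clip(np.round(np.log2(np.abs(x[nz]))), min_exp, max_exp)
    out[nz] = np.sign(x[nz]) * 2.0 ** exp
    return out

def run_multiplier_free_filter(A, B, C, d, u, min_exp=-8):
    """Simulate x[n+1] = A x[n] + B u[n], y[n] = C x[n] + d u[n] with
    power-of-two coefficients and the state vector re-quantized to powers of
    two after every update (a rough model of the multiplier-free scheme)."""
    Aq = nearest_power_of_two(A)
    Bq = nearest_power_of_two(B)
    Cq = nearest_power_of_two(C)
    dq = nearest_power_of_two(np.atleast_1d(d))[0]
    x = np.zeros(Aq.shape[0])
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = Cq @ x + dq * un                                  # output equation
        x = nearest_power_of_two(Aq @ x + Bq * un, min_exp=min_exp)  # state update
    return y
```

A usage example might pass the A, B, C, d matrices of a second-order section and a unit impulse as u, then compare the resulting impulse response against the unquantized filter to gauge the quantization noise.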