Influence of Modeling Parameters and Data Distribution on the Optimal Performance of the Locally Weighted Projection Regression Method

Recent research in neural network science, neuroscience, and statistical learning for modeling complex time series has focused largely on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in a data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear function approximation in high-dimensional signal spaces.
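
For orientation, LWPR covers the input space with local linear models, each valid inside a Gaussian receptive field. In the standard formulation from the locally weighted learning literature (sketched here for reference; the exact parameterization in the experiments may differ), a query point $x$ receives from the $k$-th receptive field the weight

    $w_k(x) = \exp\left(-\tfrac{1}{2}\,(x - c_k)^\top D_k\,(x - c_k)\right), \qquad \hat{y}(x) = \dfrac{\sum_k w_k(x)\,\hat{y}_k(x)}{\sum_k w_k(x)},$

where $c_k$ is the center of the receptive field, $\hat{y}_k(x)$ is the prediction of its local linear model, and the positive definite distance metric $D_k$ sets the region of local validity. This distance metric is the local validity parameter whose influence is examined below.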
In this paper, different learning scenarios with one- and two-dimensional data series of different distributions are investigated in simulation. Noise is then added to the data to produce differently disordered distributions in the time series and to evaluate how well the algorithm localizes and predicts the nonlinearity. The performance of the algorithm is simulated under these conditions, and its sensitivity to the data distribution, in particular when the data are widely spread or when few samples are available, is explained together with the influence of the local validity parameter under different data distributions.
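
To make the role of the local validity parameter concrete, the sketch below fits plain locally weighted linear regression, a simplified relative of LWPR without the partial-least-squares projections or incremental receptive-field allocation, to noisy one-dimensional data and varies a scalar distance metric D. The test function, noise level, and values of D are illustrative assumptions, not the settings of the experiments reported here:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 1-D target with a local nonlinearity plus noise
    # (an assumed test function, not necessarily the paper's own data).
    x_train = np.sort(rng.uniform(-2.0, 2.0, 200))
    y_train = np.sin(3.0 * x_train) * np.exp(-x_train ** 2) \
              + rng.normal(0.0, 0.1, x_train.shape)

    def lwr_predict(xq, X, Y, D):
        # Gaussian receptive-field weights: w_i = exp(-D * (x_i - xq)^2 / 2)
        w = np.exp(-0.5 * D * (X - xq) ** 2)
        # Local linear model around the query: y ~ b0 + b1 * (x - xq)
        A = np.stack([np.ones_like(X), X - xq], axis=1)
        WA = A * w[:, None]
        # Weighted least squares; a tiny ridge term keeps the solve stable
        # in sparse regions where almost no points carry weight.
        beta = np.linalg.solve(A.T @ WA + 1e-8 * np.eye(2), WA.T @ Y)
        return beta[0]  # fitted value at x = xq

    x_test = np.linspace(-2.0, 2.0, 101)
    y_true = np.sin(3.0 * x_test) * np.exp(-x_test ** 2)
    for D in (1.0, 25.0, 400.0):  # larger D -> narrower receptive fields
        y_hat = np.array([lwr_predict(x, x_train, y_train, D) for x in x_test])
        print(f"D = {D:6.1f}   test MSE = {np.mean((y_hat - y_true) ** 2):.4f}")

A very small D over-smooths the local nonlinearity, while a very large D leaves each local fit supported by too few noisy points. The same trade-off governs the initial distance metric in full LWPR, and it is this sensitivity, especially under sparse or strongly disordered data distributions, that the simulations in this paper examine.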