Abstract: This paper presents an optimization method for
reducing the number of input channels and the complexity of a
feed-forward NARX neural network (NN) without compromising the
accuracy of the NN model. Using correlation analysis, the most
significant regressors are selected to form the input layer of the NN
structure. An application to vehicle dynamic model identification is
also presented to demonstrate the optimization technique, and the
optimal input-layer structure and optimal number of neurons for the
neural network are investigated.
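The correlation-based selection step can be illustrated with a minimal sketch: rank candidate lagged regressors by the magnitude of their correlation with the output and keep the most significant ones. The toy NARX data below (a first-order system driven by white noise) is an assumption for illustration, not the paper's vehicle model.

```python
import numpy as np

def select_regressors(candidates, y, k):
    """Rank candidate regressor columns by |correlation| with the output
    and keep the k most significant ones."""
    scores = np.array([abs(np.corrcoef(c, y)[0, 1]) for c in candidates.T])
    order = np.argsort(scores)[::-1]          # most correlated first
    return order[:k], scores

# Toy NARX-style candidates: lagged inputs u(t-1..3) and outputs y(t-1..2).
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(3, 500):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.standard_normal()

cands = np.column_stack([u[2:-1], u[1:-2], u[0:-3], y[2:-1], y[1:-2]])
target = y[3:]
keep, scores = select_regressors(cands, target, k=2)   # indices of chosen regressors
```

The selected indices then define the NN input layer, so uninformative lags never enter the network.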
Abstract: This paper proposes a complementary combination scheme for affine projection algorithm (APA) filters with different input-regressor orders. A convex combination preserves the complementary advantages of APAs with different orders: fast convergence from the higher-order filter and low steady-state error from the lower-order one. Consequently, a novel APA with rapid convergence and reduced steady-state error is derived. Experimental results confirm the good properties of the proposed algorithm.
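A convex combination of two adaptive filters can be sketched as follows. This is a generic illustration, not the paper's exact scheme: an order-1 APA and an order-4 APA are mixed through lambda = sigmoid(a), with a adapted by a stochastic gradient step on the combined error; the step sizes, clipping range, and coloured-input model are assumptions.

```python
import numpy as np

def apa_step(w, X, d, mu=0.5, delta=1e-4):
    """One affine projection update; X is (K, M) holding the K most recent
    input regressors, d the K corresponding desired samples."""
    e = d - X @ w
    return w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(len(d)), e)

rng = np.random.default_rng(1)
M, N, K = 8, 4000, 4
w_true = rng.standard_normal(M)
x = rng.standard_normal(N)
for t in range(1, N):            # coloured input: higher order converges faster
    x[t] += 0.9 * x[t - 1]
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

w1, wK, a = np.zeros(M), np.zeros(M), 0.0   # lambda = sigmoid(a) mixes outputs
for t in range(M + K, N):
    X1 = x[t - M + 1:t + 1][::-1].reshape(1, -1)
    XK = np.array([x[t - k - M + 1:t - k + 1][::-1] for k in range(K)])
    y1, yK = X1[0] @ w1, XK[0] @ wK
    lam = 1.0 / (1.0 + np.exp(-a))
    e = d[t] - (lam * yK + (1.0 - lam) * y1)          # combined a priori error
    a = np.clip(a + 10.0 * e * (yK - y1) * lam * (1.0 - lam), -4.0, 4.0)
    w1 = apa_step(w1, X1, d[t:t + 1])                 # order-1 APA (NLMS-like)
    wK = apa_step(wK, XK, d[t - K + 1:t + 1][::-1])   # order-K APA
```

The mixing parameter drifts toward whichever component filter currently produces the smaller error, which is how the combination retains both fast convergence and low steady-state error.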
Abstract: This paper examines the influence of economic and Information and Communication Technology (ICT) development on the recent increase in Internet purchases by individuals in European Union member states. After a growing trend in Internet purchases across the EU27 was observed, an all-possible-regressions analysis was applied using nine independent variables for 2011. Two linear regression models were then studied in detail. The simple linear regression analysis confirmed the research hypothesis that Internet purchases in the analyzed EU countries are positively correlated with the statistically significant variable Gross Domestic Product per capita (GDPpc). The multiple linear regression model with four regressors capturing ICT development level likewise indicates that ICT development is crucial for explaining Internet purchases by individuals, confirming the research hypothesis.
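The all-possible-regressions procedure mentioned above can be sketched generically: fit an OLS model for every subset of candidate regressors and keep the best one under adjusted R-squared. The synthetic data below is purely illustrative, not the EU dataset.

```python
import itertools
import numpy as np

def best_subset(X, y):
    """Fit OLS for every non-empty subset of columns and return the subset
    with the highest adjusted R^2 (an all-possible-regressions search)."""
    n, p = X.shape
    best = (-np.inf, None)
    tss = (y - y.mean()) @ (y - y.mean())
    for r in range(1, p + 1):
        for cols in itertools.combinations(range(p), r):
            Xs = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            r2 = 1 - (resid @ resid) / tss
            adj = 1 - (1 - r2) * (n - 1) / (n - r - 1)
            if adj > best[0]:
                best = (adj, cols)
    return best

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 4))
y = 3.0 * X[:, 1] + 0.1 * rng.standard_normal(40)   # only column 1 matters
adj, cols = best_subset(X, y)
```

With nine candidates, as in the paper, this search fits at most 511 models, which is still cheap for country-level data.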
Abstract: Employing a recently introduced unified adaptive filter
theory, we show how the performance of a large number of important
adaptive filter algorithms can be predicted within a general framework
in nonstationary environments. This approach is based on energy-conservation
arguments and does not need to assume a Gaussian or white
distribution for the regressors. This general performance analysis can
be used to evaluate the mean-square performance of the Least Mean
Square (LMS) algorithm, its normalized version (NLMS), the family
of Affine Projection Algorithms (APA), Recursive Least Squares
(RLS), the Data-Reusing LMS (DR-LMS), its normalized version
(NDR-LMS), the Block Least Mean Squares (BLMS), the Block
Normalized LMS (BNLMS), Transform-Domain Adaptive Filters
(TDAF), and Subband Adaptive Filters (SAF) in nonstationary
environments. We also establish general expressions for the
steady-state excess mean-square error in this environment for all of
these adaptive algorithms. Finally, we demonstrate through simulations that
these results are useful in predicting adaptive filter performance.
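Two of the algorithms covered by the analysis, LMS and its normalized version, can be sketched in a simple stationary system-identification setting. This toy setup (white input, a fixed unknown FIR channel, assumed step sizes) is only a baseline illustration, not the paper's nonstationary analysis.

```python
import numpy as np

def lms(x, d, M, mu):
    """Standard LMS: w <- w + mu * e * u, with u the current regressor."""
    w = np.zeros(M)
    for t in range(M, len(x)):
        u = x[t - M + 1:t + 1][::-1]
        e = d[t] - u @ w
        w += mu * e * u
    return w

def nlms(x, d, M, mu, eps=1e-6):
    """Normalized LMS: the step is divided by the regressor energy."""
    w = np.zeros(M)
    for t in range(M, len(x)):
        u = x[t - M + 1:t + 1][::-1]
        e = d[t] - u @ w
        w += mu * e * u / (u @ u + eps)
    return w

rng = np.random.default_rng(3)
M, N = 4, 5000
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)
w_lms = lms(x, d, M, mu=0.05)
w_nlms = nlms(x, d, M, mu=0.5)
```

Both estimates converge to the true channel; the framework in the abstract predicts the steady-state excess mean-square error of exactly such recursions without Gaussian or whiteness assumptions on the regressors.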
Abstract: Bagging and boosting are among the most popular resampling ensemble methods that generate and combine a diversity of regression models using the same learning algorithm as base learner. Boosting algorithms are considered stronger than bagging on noise-free data; however, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble by averaging a bagging ensemble and a boosting ensemble, each with 10 sub-learners. We compared it with simple bagging and boosting ensembles of 25 sub-learners on standard benchmark datasets, and the proposed ensemble gave better accuracy.
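The averaging scheme can be sketched with regression stumps as base learners. The stump learner, least-squares boosting variant, learning rate, and synthetic data below are all assumptions for illustration; the paper's base learner and benchmarks may differ.

```python
import numpy as np

def fit_stump(X, y):
    """Depth-1 regression tree: best (feature, threshold, left mean, right mean)."""
    best = (np.inf, 0, 0.0, float(y.mean()), float(y.mean()))
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= thr
            if left.all() or not left.any():
                continue
            ml, mr = y[left].mean(), y[~left].mean()
            sse = ((y[left] - ml) ** 2).sum() + ((y[~left] - mr) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, thr, ml, mr)
    return best[1:]

def predict_stump(s, X):
    j, thr, ml, mr = s
    return np.where(X[:, j] <= thr, ml, mr)

def bagging(X, y, n, rng):
    """Fit n stumps on bootstrap resamples; predict by averaging."""
    stumps = [fit_stump(X[idx], y[idx])
              for idx in (rng.integers(0, len(y), len(y)) for _ in range(n))]
    return lambda Z: np.mean([predict_stump(s, Z) for s in stumps], axis=0)

def boosting(X, y, n, lr=0.5):
    """Least-squares boosting: each stump fits the current residual."""
    stumps, resid = [], y.astype(float).copy()
    for _ in range(n):
        s = fit_stump(X, resid)
        stumps.append(s)
        resid = resid - lr * predict_stump(s, X)
    return lambda Z: lr * np.sum([predict_stump(s, Z) for s in stumps], axis=0)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 2))
y = np.sign(X[:, 0]) + 0.3 * rng.standard_normal(200)
bag, boost = bagging(X, y, 10, rng), boosting(X, y, 10)
combined = lambda Z: 0.5 * (bag(Z) + boost(Z))    # average the two ensembles
```

Averaging hedges between the two: on noisy data the bagging half dampens boosting's tendency to chase noise, while the boosting half keeps the bias low.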
Abstract: The problem of estimating a time-varying regression
inevitably involves choosing an appropriate level of model
volatility, ranging from full stationarity of the instantaneous
regression models to their complete independence from one another. In the
stationary case the number of regression coefficients to be estimated
equals the number of regressors, whereas the absence of any smoothness
assumptions multiplies the dimension of the unknown vector by the
length of the time series. The Akaike Information Criterion (AIC)
is a commonly adopted means of adjusting a model to a given
data set within a succession of nested parametric model classes,
but its crucial restriction is that the classes are rigidly defined by
the growing integer-valued dimension of the unknown vector. To
make the Kullback information maximization principle underlying the
classical AIC applicable to the problem of time-varying regression
estimation, we extend it to a wider class of data models in which
the dimension of the parameter vector is fixed, but the freedom of its values
is softly constrained by a family of continuously nested a priori
probability distributions.
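For context, the classical criterion in question trades the maximized likelihood $\hat{L}$ against the integer parameter count $k$, which is exactly the rigidly growing dimension the abstract refers to:

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L},
\qquad \text{select the model class minimizing } \mathrm{AIC}.
```

The extension described above replaces the discrete index $k$ with a continuously nested family of priors, so model volatility can be tuned smoothly rather than in integer steps.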
Abstract: In this paper we present a general formalism for
establishing the family of selective regressor affine projection
algorithms (SR-APA). The SR-APA, the SR regularized APA (SR-RAPA),
the SR partial rank algorithm (SR-PRA), the SR binormalized
data-reusing least mean squares (SR-BNDR-LMS), and the SR normalized
LMS with orthogonal correction factors (SR-NLMS-OCF)
algorithms are established by this general formalism. We demonstrate
the performance of the presented algorithms through simulations in
an acoustic echo cancellation scenario.
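As background, the selective regressor idea can be sketched on top of the standard APA update: at each iteration only a subset of the available input regressors enters the projection. The selection rule used below, keeping the P regressors with the largest a priori errors, is an illustrative assumption and not necessarily the criterion derived in the paper; the coloured input stands in for a speech-like echo-cancellation signal.

```python
import numpy as np

def sr_apa_update(w, X, d, P, mu=1.0, delta=1e-4):
    """Affine projection update restricted to a selected subset of regressors.
    Selection heuristic (assumed): keep the P regressors with the largest
    a priori errors."""
    e = d - X @ w                       # a priori errors for all K regressors
    sel = np.argsort(np.abs(e))[-P:]    # indices of the P largest errors
    Xs, es = X[sel], e[sel]
    return w + mu * Xs.T @ np.linalg.solve(Xs @ Xs.T + delta * np.eye(P), es)

rng = np.random.default_rng(5)
M, K, P, N = 16, 8, 4, 6000
w_true = rng.standard_normal(M) / np.sqrt(M)   # toy "echo path"
x = rng.standard_normal(N)
for t in range(1, N):                           # coloured input
    x[t] += 0.8 * x[t - 1]
d = np.convolve(x, w_true)[:N] + 0.005 * rng.standard_normal(N)

w = np.zeros(M)
for t in range(M + K, N):
    X = np.array([x[t - k - M + 1:t - k + 1][::-1] for k in range(K)])
    dk = d[t - K + 1:t + 1][::-1]
    w = sr_apa_update(w, X, dk, P)
```

Updating with P of the K regressors cuts the cost of the matrix inversion from O(K^3) to O(P^3) per iteration, which is the practical motivation for the selective variants.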