Abstract: This paper presents an exact pruning algorithm with
adaptive pruning interval for general dynamic neural networks
(GDNN). GDNNs are artificial neural networks with internal dynamics.
All layers have feedback connections with time delays to the
same and to all other layers. The structure of the plant is unknown, so
the identification process is started with a larger network architecture
than necessary. During parameter optimization with the
Levenberg-Marquardt (LM) algorithm, irrelevant weights of the
dynamic neural network are deleted in order to find as simple a
model of the plant as possible. The weights to be pruned are found by direct
evaluation of the training data within a sliding time window. The
influence of pruning on the identification system depends on the
network architecture at pruning time and the selected weight to be
deleted. As the architecture of the model is changed drastically during
the identification and pruning process, it is suggested to adapt the
pruning interval online. Two system identification examples show
the architecture selection ability of the proposed pruning approach.
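The pruning criterion described in the abstract, deleting the weight whose removal least degrades performance on a sliding time window of training data, while adapting the pruning interval as the architecture shrinks, can be sketched roughly as follows. This is a minimal NumPy illustration under assumed simplifications (a flat weight vector, a linear error measure, and a hypothetical interval heuristic), not the paper's actual GDNN/LM implementation:

```python
import numpy as np

def prune_by_direct_evaluation(W, X, Y, window, threshold):
    """Rank each active weight by the error increase caused by zeroing
    it, measured directly on the most recent `window` training samples
    (the sliding time window), and delete the least relevant one if its
    error increase stays below `threshold`."""
    Xw, Yw = X[-window:], Y[-window:]
    base_err = np.mean((Xw @ W - Yw) ** 2)
    best_idx, best_increase = None, np.inf
    for i in range(len(W)):
        if W[i] == 0.0:                      # already pruned
            continue
        W_try = W.copy()
        W_try[i] = 0.0
        inc = np.mean((Xw @ W_try - Yw) ** 2) - base_err
        if inc < best_increase:
            best_idx, best_increase = i, inc
    if best_idx is not None and best_increase < threshold:
        W = W.copy()
        W[best_idx] = 0.0                    # delete the weight
    return W

def adaptive_interval(base_interval, n_active, n_total):
    """Hypothetical heuristic: as the network shrinks, each deletion
    perturbs the model more strongly, so pruning steps are spaced
    further apart (the interval grows with the fraction pruned)."""
    return max(1, int(base_interval * n_total / max(n_active, 1)))
```

In a full identification loop, `prune_by_direct_evaluation` would be called between LM parameter updates, and `adaptive_interval` would set the number of optimization steps until the next pruning attempt.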
Abstract: Embedded systems must respect stringent real-time
constraints. Various hardware components included in such
systems, such as cache memories, exhibit variability and therefore
affect execution time. Indeed, a cache memory access from an
embedded microprocessor may result in a cache hit, where the
data is immediately available, or a cache miss, where the data must be
fetched from an external memory with an additional delay. It is therefore
highly desirable to predict future memory accesses during
execution in order to appropriately prefetch data without incurring
delays. In this paper, we evaluate the potential of several artificial
neural networks for the prediction of instruction memory
addresses. Neural networks have the potential to tackle the nonlinear
behavior observed in memory accesses during program
execution, and their numerous demonstrated hardware
implementations favor this choice over traditional forecasting
techniques for inclusion in embedded systems. However,
embedded applications execute millions of instructions and
therefore generate millions of addresses to be predicted. This very
challenging problem of neural network based prediction of large
time series is approached in this paper by evaluating various neural
network architectures based on the recurrent neural network
paradigm with pre-processing based on the Self-Organizing Map
(SOM) classification technique.
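The SOM pre-processing step can be sketched as follows: a small one-dimensional map quantizes the enormous address space into a handful of classes, so the recurrent predictor only has to forecast a short class sequence instead of raw addresses. This is an illustrative sketch under assumed parameters (1-D map, scalar addresses, fixed learning-rate and neighborhood schedules), not the architecture evaluated in the paper:

```python
import numpy as np

def train_som_1d(addresses, n_units=8, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D Self-Organizing Map on scalar memory addresses.
    Each codebook vector ends up representing one region of the
    address space; the sorted codebook defines the address classes."""
    rng = np.random.default_rng(seed)
    lo, hi = addresses.min(), addresses.max()
    w = rng.uniform(lo, hi, n_units)              # codebook vectors
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)           # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for a in addresses:
            bmu = np.argmin(np.abs(w - a))        # best-matching unit
            d = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))   # neighborhood weights
            w += lr * h * (a - w)                 # pull units toward sample
    return np.sort(w)

def classify(addresses, w):
    """Map each address to the index of its nearest SOM unit; the
    resulting class sequence is what the recurrent network predicts."""
    return np.array([np.argmin(np.abs(w - a)) for a in addresses])
```

A recurrent network (e.g. an Elman-style RNN) would then be trained on the class index sequence produced by `classify`, and a predicted class would be mapped back to its codebook address for prefetching.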