A Compact Pi Network for Reducing Bit Error Rate in Dispersive FIR Channel Noise Model

During signal transmission, the combined effect of the transmitter filter, the transmission medium, and additive white Gaussian noise (AWGN) is captured by the channel, which distorts the signal and adds noise to it. This spreads the well-defined signal constellation and causes errors in bit detection. A compact pi neural network with a minimum number of nodes is proposed. Replacing the summation at each node with multiplication results in a more powerful mapping; a sketch of such a node follows. The resulting pi network is tested on six different channels.
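To make the replacement concrete, the following is a minimal sketch of a pi (product) node in which the weighted inputs are multiplied rather than summed before the activation. The function name, the exact form of the product term, and the sigmoid activation are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def pi_node(x, w, b):
    """Pi (product) node: multiplies weighted inputs instead of summing them.

    Illustrative form: each input x_i is scaled as (w_i * x_i + b_i), the
    scaled terms are multiplied together, and the product is passed through
    a sigmoid activation.
    """
    net = np.prod(w * x + b)            # product replaces the usual weighted sum
    return 1.0 / (1.0 + np.exp(-net))   # sigmoid activation

# Example: one pi node applied to a short vector of received channel samples
x = np.array([0.9, -1.1, 1.05])   # hypothetical current and delayed channel outputs
w = np.array([0.5, -0.3, 0.8])
b = np.array([1.0, 1.0, 1.0])
print(pi_node(x, w, b))
```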

Improved Back Propagation Algorithm to Avoid Local Minima in Multiplicative Neuron Model

The back propagation algorithm calculates the weight changes of artificial neural networks, and a common approach is to use a training rule consisting of a learning rate and a momentum factor. The major drawbacks of this learning algorithm are local minima and slow convergence. The addition of an extra term, called a proportional factor, reduces the convergence time of the back propagation algorithm. We have applied this three-term back propagation rule to multiplicative neural network learning; a sketch of the update appears below. The algorithm is tested on the XOR and parity problems and compared with the standard back propagation training algorithm.
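As a rough illustration of the three-term rule, the sketch below adds a proportional term to the usual learning-rate and momentum terms of the weight update. The exact form of the proportional term (a constant times the current output error), the function name, and the default coefficient values are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def three_term_update(w, grad, prev_delta, error, lr=0.5, momentum=0.9, prop=0.05):
    """One weight update of a three-term back propagation rule (sketch).

    delta_w = -lr * grad + momentum * prev_delta + prop * error

    The learning-rate and momentum terms form the standard two-term rule;
    the third, proportional term is the extra factor described in the text,
    here assumed to be a constant times the current output error.
    """
    delta = -lr * grad + momentum * prev_delta + prop * error
    return w + delta, delta

# Example: one update step for a small weight vector
w = np.array([0.1, -0.2, 0.05])
grad = np.array([0.3, -0.1, 0.2])          # hypothetical error gradient
prev_delta = np.zeros_like(w)              # no previous step yet
error = 0.4                                 # hypothetical output error
w, prev_delta = three_term_update(w, grad, prev_delta, error)
print(w)
```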