On Diffusion Approximation of Discrete Markov Dynamical Systems

The paper is devoted to stochastic analysis of a finite-dimensional difference equation whose increments depend on an ergodic Markov chain and are proportional to a small parameter ε. The pointwise solution of this difference equation may be represented by the vertices of a time-dependent continuous broken line defined on the segment [0,1], with ε-dependent scaling of the intervals between vertices. Letting ε tend to zero, one may apply stochastic averaging and diffusion approximation procedures and construct a continuous approximation of the initial stochastic iterations as an ordinary or stochastic Ito differential equation. The paper proves that for sufficiently small ε these equations may be successfully applied not only to approximate a finite number of iterations but also for asymptotic analysis of the iterations as their number tends to infinity.
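
A minimal sketch of the setup described above, with notation assumed rather than taken from the paper: the iterations, the averaged limit and the Ito equation of the diffusion approximation may be written as

    x_{k+1} = x_k + \varepsilon\, f(x_k, y_k), \qquad k = 0, 1, \dots, \quad (y_k) \text{ an ergodic Markov chain},
    \dot{\bar{x}}(t) = \bar{f}(\bar{x}(t)), \qquad \bar{f}(x) = \int f(x, y)\, \rho(\mathrm{d}y) \qquad (\text{stochastic averaging as } \varepsilon \to 0),
    \mathrm{d}z(t) = A(\bar{x}(t))\, z(t)\, \mathrm{d}t + \sigma(\bar{x}(t))\, \mathrm{d}W(t) \qquad (\text{diffusion approximation of the fluctuations}),

where ρ is the stationary distribution of the chain and W is a standard Wiener process.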

Performance Analysis of HSDPA Systems using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

HSDPA is a new feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. The HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release 99 DSCH. Until now the HSDPA system has used turbo coding, which is among the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the transmitter complexity increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate. In this paper the LDPC encoder is implemented using a sparse parity-check matrix H to generate a codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases for a small increase in Eb/No, which is not possible with turbo coding. The same BER was also achieved using fewer iterations, so the latency and receiver complexity are reduced with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and robust data services. In other words, while realistic data rates are only a few Mbps, the quality actually achieved and the number of users supported will improve significantly.

Enhanced Shell Sorting Algorithm

Many algorithms are available for sorting unordered elements; the most important of them are bubble sort, heap sort, insertion sort and Shell sort. These algorithms have their own pros and cons. Shell sort, which is an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted to minimize complexity and time as compared to insertion sort. Shell sort improves the efficiency of insertion sort by quickly shifting values to their destination. Its average sort time is O(n^1.25), while the worst-case time is O(n^1.5). It performs a number of passes; in each pass it swaps some elements of the array in such a way that in the last pass, when the value of h is one, the number of swaps is reduced. Donald L. Shell devised a formula to calculate the value of h. This work focuses on identifying improvements to the conventional Shell sort algorithm. The "Enhanced Shell Sort algorithm" is an improvement in the way the value of h is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent as compared to the existing algorithm, and in some cases this enhancement was found to be faster than the existing algorithms. A sketch of the conventional algorithm is given below.
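
For reference, the sketch below shows conventional Shell sort with the classic halving gap sequence h = n/2, n/4, ..., 1; the improved formula for h proposed in this work is not reproduced here, and variable names are illustrative.

    def shell_sort(a):
        # Conventional Shell sort with the classic halving gap sequence.
        n = len(a)
        h = n // 2
        swaps = 0
        while h > 0:
            # h-sorted insertion pass: each element is shifted within its own h-subsequence.
            for i in range(h, n):
                key = a[i]
                j = i
                while j >= h and a[j - h] > key:
                    a[j] = a[j - h]
                    swaps += 1
                    j -= h
                a[j] = key
            h //= 2
        return swaps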

Near Perfect Reconstruction Quadrature Mirror Filter

In this paper, various algorithms for designing quadrature mirror filters are reviewed and a new algorithm is presented for the design of a near perfect reconstruction quadrature mirror filter bank. In the proposed algorithm, the objective function is formulated using the perfect reconstruction condition, or the magnitude response condition of the prototype filter at the quadrature frequency (ω = 0.5π) under ideal conditions. The cutoff frequency is iteratively changed to adjust the filter coefficients using an optimization algorithm. The performance of the proposed algorithm is evaluated in terms of computation time, reconstruction error and number of iterations. The design examples illustrate that the proposed algorithm is superior in terms of peak reconstruction error, computation time, and number of iterations. The proposed algorithm is simple, easy to implement, and linear in nature.
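
The sketch below illustrates the general iterative cutoff-adjustment idea (not the exact objective function or optimizer proposed in the paper): the prototype lowpass filter is redesigned until its magnitude response at ω = 0.5π meets the ideal value 1/√2. Parameter values and names are assumed.

    import numpy as np
    from scipy.signal import firwin, freqz

    def design_qmf_prototype(num_taps=32, tol=1e-6, max_iter=100):
        # Bisection on the normalized cutoff so that |H(0.5*pi)| ~ 1/sqrt(2),
        # the ideal magnitude condition at the quadrature frequency.
        lo, hi = 0.4, 0.6                        # cutoff search interval (x Nyquist)
        h, iters = None, 0
        for iters in range(1, max_iter + 1):
            wc = 0.5 * (lo + hi)
            h = firwin(num_taps, wc)             # linear-phase lowpass prototype
            _, H = freqz(h, worN=[0.5 * np.pi])  # response at omega = 0.5*pi rad/sample
            err = np.abs(H[0]) - 1.0 / np.sqrt(2.0)
            if abs(err) < tol:
                break
            if err > 0:
                hi = wc                          # response too high -> lower the cutoff
            else:
                lo = wc                          # response too low  -> raise the cutoff
        return h, iters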

Modified Montgomery for RSA Cryptosystem

Encryption and decryption in RSA are done by modular exponentiation, which is achieved by repeated modular multiplication. Hence the efficiency of modular multiplication directly determines the efficiency of the RSA cryptosystem. This paper designs a Modified Montgomery Modular Multiplication in which the addition of operands is computed by a 4:2 compressor. The basic logic operations in the addition are partitioned over two iterations so that the computations are performed in parallel. This reduces the critical path delay of the proposed Montgomery design. The proposed design and RSA are implemented on Virtex 2 and Virtex 5 FPGAs. The two factors, partitioning and parallelism, improve the frequency and throughput of the proposed design.
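
As background, a minimal software sketch of Montgomery reduction and Montgomery-based modular exponentiation is given below; it illustrates the arithmetic that the hardware accelerates but does not model the 4:2-compressor datapath of the proposed design, and the function names are illustrative.

    def redc(T, N, R, N_prime):
        # Montgomery reduction: returns T * R^{-1} mod N, assuming 0 <= T < R*N.
        m = ((T % R) * N_prime) % R
        t = (T + m * N) // R
        return t - N if t >= N else t

    def mont_exp(base, exp, N):
        # Modular exponentiation by repeated Montgomery multiplication (N must be odd,
        # as it is for RSA moduli). Agrees with Python's built-in pow(base, exp, N).
        k = N.bit_length()
        R = 1 << k                               # R = 2^k > N
        N_prime = (-pow(N, -1, R)) % R           # N' = -N^{-1} mod R (Python 3.8+)
        x = (base * R) % N                       # operand in Montgomery form
        acc = R % N                              # Montgomery form of 1
        for bit in bin(exp)[2:]:
            acc = redc(acc * acc, N, R, N_prime)      # square
            if bit == '1':
                acc = redc(acc * x, N, R, N_prime)    # multiply
        return redc(acc, N, R, N_prime)               # convert back from Montgomery form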

Effect of Size of the Step in the Response Surface Methodology using Nonlinear Test Functions

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful for modeling and analyzing problems in which a dependent variable is influenced by several independent variables, in order to determine the conditions under which these variables should operate to optimize a production process. The RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope ascent/descent (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of the methodology. The numerical examples are carried out through Monte Carlo experiments with nonlinear test functions, evaluating three response variables: the efficiency of the gain function, the distance to the optimum, and the number of iterations. The simulation results show that the gain-function efficiency and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
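
A minimal sketch of one steepest-ascent step of the kind whose step size is studied here; the design matrix, responses and step_size names are illustrative and not taken from the paper.

    import numpy as np

    def steepest_ascent_step(design, responses, step_size):
        # Fit the first-order model y = b0 + b1*x1 + ... + bk*xk by least squares.
        X = np.column_stack([np.ones(len(design)), np.asarray(design)])
        coef, *_ = np.linalg.lstsq(X, np.asarray(responses), rcond=None)
        grad = coef[1:]                           # estimated slope in each factor
        direction = grad / np.linalg.norm(grad)
        return step_size * direction              # displacement of the design centre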

Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

A complex-valued neural network is a neural network whose inputs and/or weights and/or thresholds and/or activation functions are complex valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in signal processing. In neural networks, the generalized mean neuron model (GMN) is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called "Complex-BP") for learning. Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing over a real-valued neural network. We have studied and report various observations, such as the effect of the learning rate, the range of the randomly selected initial weights, the error functions used, and the number of iterations required for convergence of the error in the generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
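
As an illustration of the aggregation idea only (shown here for the real-valued case; the exact complex-valued formulation used in the paper may differ), a generalized mean neuron combines its weighted inputs as follows.

    import numpy as np

    def generalized_mean_neuron(x, w, r=2.0, activation=np.tanh):
        # Generalized mean aggregation of the weighted inputs:
        #   agg = ( (1/n) * sum_i (w_i * x_i)^r )^(1/r)
        # r = 1 gives the ordinary (arithmetic-mean) neuron; other values of r change
        # the aggregation. Non-integer r assumes non-negative weighted inputs.
        z = np.mean((w * x) ** r) ** (1.0 / r)
        return activation(z)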

A New Iterative Method for Solving Nonlinear Equations

In this study, a new root-finding method for solving nonlinear equations is proposed. This method requires two starting values that do not necessarily bracket a root. However, when the starting values are selected close to a root, the proposed method converges to the root more quickly than the secant method. Another advantage over other iterative methods is that the proposed method usually converges to two distinct roots when the given function has more than one root; that is, the odd iterations of this new technique converge to one root and the even iterations converge to another. Some numerical examples, including a sine-polynomial equation, are solved using the proposed method and compared with results obtained by the secant method; perfect agreement is found.
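
For reference, the secant baseline used for the comparison is sketched below; the proposed method itself is not reproduced here.

    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        # Classical secant iteration from two starting values (no bracketing required).
        for k in range(max_iter):
            f0, f1 = f(x0), f(x1)
            if f1 == f0:                          # flat secant line; cannot proceed
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2, k + 1                  # root estimate and iterations used
            x0, x1 = x1, x2
        return x1, max_iter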

Transmission Pricing based on Voltage Angle Decomposition

In this paper a new approach for transmission pricing is presented. The main idea is voltage angle allocation, i.e. determining the contribution of each contract to the voltage angle of each bus. DC power flow is used to compute a primary solution for the angle decomposition. To account for the impact of system non-linearity on the angle decomposition, the primary solution is corrected in successive iterations of the decoupled Newton-Raphson power flow. Then, the contribution of each contract to the power flow of each transmission line is computed based on the angle decomposition. These contract-related flows are used as a measure of the "extent of use" of transmission network capacity and consequently for transmission pricing. The presented approach is applied to a 4-bus test system and the IEEE 30-bus test system.
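
A minimal sketch of the DC power flow step that yields the primary angle solution is given below. Because the DC model is linear, the angles produced by each contract's injection vector can be computed separately and superposed, which is the basis of the angle decomposition; B is the bus susceptance matrix and the names are illustrative.

    import numpy as np

    def dc_power_flow(B, P, slack=0):
        # Solve B * theta = P for the bus voltage angles with the slack angle fixed at zero.
        n = B.shape[0]
        keep = [i for i in range(n) if i != slack]
        theta = np.zeros(n)
        theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])
        return theta

    def line_flow(theta, i, j, x_ij):
        # DC flow on line i-j: proportional to the angle difference across the line.
        return (theta[i] - theta[j]) / x_ij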

Transient Combined Conduction and Radiation in a Two-Dimensional Participating Cylinder in Presence of Heat Generation

Simultaneous transient conduction and radiation heat transfer with heat generation is investigated. The analysis is carried out for both steady and unsteady situations. A two-dimensional gray cylindrical enclosure with an absorbing, emitting, and isotropically scattering medium is considered. The enclosure boundaries are assumed to be at specified temperatures, and the heat generation rate is taken as uniform and constant throughout the medium. The lattice Boltzmann method (LBM) is used to solve the energy equation of the transient conduction-radiation heat transfer problem, while the control volume finite element method (CVFEM) is used to compute the radiative information. To study the compatibility of the LBM for the energy equation with the CVFEM for the radiative transfer equation, transient conduction and radiation heat transfer problems in 2-D cylindrical geometries are considered. In order to establish the suitability of the LBM, the energy equation of the present problem is also solved using the finite difference method (FDM) of computational fluid dynamics; in both cases the CVFEM supplies the radiative information required for the solution of the energy equation. Results are analyzed for the effects of various parameters such as the boundary emissivity. The results of the LBM-CVFEM combination are found to be in excellent agreement with those of the FDM-CVFEM combination, and the number of iterations and the steady-state temperature of the two combinations are comparable. Results are presented for situations with and without heat generation; heat generation is found to have a significant bearing on the temperature distribution.
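
For context, the LBM evolution equation commonly used for such conduction-radiation problems has the following form, in which the radiative source supplied by the CVFEM enters through the divergence of the radiative flux; the notation below is a standard one and is assumed rather than taken from the paper.

    f_i(\mathbf{r} + \mathbf{e}_i \Delta t,\, t + \Delta t)
      = f_i(\mathbf{r}, t)
      - \frac{\Delta t}{\tau}\left[ f_i(\mathbf{r}, t) - f_i^{(0)}(\mathbf{r}, t) \right]
      + \frac{\Delta t}{\rho c_p}\, w_i \left( -\nabla \cdot \mathbf{q}_R + \dot{q} \right),
    \qquad
    T(\mathbf{r}, t) = \sum_i f_i(\mathbf{r}, t),

where f_i are the particle distribution functions along the lattice directions e_i, w_i the corresponding weights, τ the relaxation time, ∇·q_R the divergence of the radiative flux and q̇ the volumetric heat generation rate.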

Some Third Order Methods for Solving Systems of Nonlinear Equations

Based on Traub's methods for solving the nonlinear equation f(x) = 0, we develop two families of third-order methods for solving systems of nonlinear equations F(x) = 0. The families include well-known existing methods as special cases. The stability is corroborated by numerical results, and comparison with well-known methods shows that the present methods are robust. These higher-order methods may be very useful in numerical applications requiring high precision, because they yield a clear reduction in the number of iterations.
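
A minimal sketch of one representative third-order Traub-type two-step scheme for systems (a Newton predictor followed by a corrector that reuses the same Jacobian) is given below; the families in the paper generalize schemes of this kind.

    import numpy as np

    def traub_step(F, J, x):
        # y      = x - J(x)^{-1} F(x)    (Newton predictor)
        # x_next = y - J(x)^{-1} F(y)    (corrector with the frozen Jacobian)
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))
        return y - np.linalg.solve(Jx, F(y))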

Two Fourth-order Iterative Methods Based on Continued Fraction for Root-finding Problems

In this paper, we present two new one-step iterative methods based on Thiele's continued fraction for solving nonlinear equations. By applying the truncated Thiele's continued fraction twice, the two iterative methods are obtained. Analysis of convergence shows that the new methods are fourth-order convergent. Numerical tests verifying the theory are given, and two further one-step iterations are developed based on these methods.

Numerical Algorithms for Solving a Type of Nonlinear Integro-Differential Equations

In this article, two algorithms, one based on the variational iteration method and the other on Adomian's decomposition method, are developed to find the numerical solution of an initial value problem involving a nonlinear integro-differential equation in which R is a nonlinear operator containing partial derivatives with respect to x. Special cases of the integro-differential equation are solved using the two algorithms. The numerical solutions are compared with analytical solutions, and the results show that the two methods are efficient and accurate, requiring only two or three iterations.

Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations

The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this observation and showed that the error-correcting performance of their codes exceeded that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes while the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps increasing with increasing number of iterations.
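
To make the role of the iteration count concrete, a minimal hard-decision bit-flipping decoder over the bipartite graph of H is sketched below; it is a simpler stand-in for the message-passing decoders studied in the paper, with H a binary parity-check matrix and r the received hard decisions.

    import numpy as np

    def bit_flip_decode(H, r, max_iter=2000):
        # Iterative decoding: flip the bits involved in the most unsatisfied checks.
        x = r.copy()
        for it in range(1, max_iter + 1):
            syndrome = H.dot(x) % 2
            if not syndrome.any():
                return x, it                  # all parity checks satisfied
            votes = H.T.dot(syndrome)         # unsatisfied checks touching each bit
            x[votes == votes.max()] ^= 1      # flip the most suspicious bit(s)
        return x, max_iter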

An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

The conjugate gradient optimization algorithm usually used for nonlinear least squares is presented and combined with the modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLP), called CGFR/AG. The approach presented in the paper consists of three steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function; (2) calculation of the gradient descent of the error with respect to the weights and gain values; and (3) determination of the new search direction by exploiting the information calculated by gradient descent in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. Performance of the proposed method is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
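
The conjugate gradient direction update at the core of such a scheme is sketched below in its Fletcher-Reeves form, for flattened gradient vectors over the weights and gains; the gain-variation details of the proposed CGFR/AG method are not reproduced here.

    import numpy as np

    def fletcher_reeves_direction(grad_new, grad_old, dir_old):
        # d_new = -g_new + beta * d_old, with beta = ||g_new||^2 / ||g_old||^2.
        beta = grad_new.dot(grad_new) / grad_old.dot(grad_old)
        return -grad_new + beta * dir_old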

Typical Day Prediction Model for Output Power and Energy Efficiency of a Grid-Connected Solar Photovoltaic System

A novel typical day prediction model has been built and validated against measured data from a grid-connected solar photovoltaic (PV) system in Macau. Unlike the conventional statistical method used in previous studies of PV systems, which obtains results by averaging nearby continuous points, the present typical day statistical method obtains the value at every minute of a typical day by averaging discontinuous points at the same minute in different days. This typical day statistical method based on discontinuous point averaging makes it possible to obtain Gaussian-shaped dynamical distributions of solar irradiance and output power in a yearly or monthly typical day. Based on the yearly typical day statistical analysis, the maximum possible accumulated output energy in a year under on-site climate conditions and the corresponding optimal PV system running time are obtained. Periodic Gaussian-shaped prediction models for solar irradiance, output energy and system energy efficiency have been built, and their coefficients have been determined from the yearly and the maximum and minimum monthly typical day Gaussian distribution parameters, which are obtained by iterating to minimize the Root Mean Squared Deviation (RMSD). With the present model, the dynamical effects due to the time of day are retained, while the day-to-day uncertainty due to changing weather is smoothed but still included. The periodic Gaussian-shaped correlations for solar irradiance, output power and system energy efficiency compare favorably with the data of the PV system in Macau and prove to be an improvement over previous models.
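
A minimal sketch of fitting a Gaussian-shaped typical-day profile by least squares (equivalent to minimizing the RMSD) is given below; the function and variable names are illustrative, and the exact periodic form used in the paper may differ.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_day(t, peak, t_peak, width):
        # Gaussian-shaped profile: value at minute t of the typical day.
        return peak * np.exp(-((t - t_peak) ** 2) / (2.0 * width ** 2))

    def fit_typical_day(t, y):
        # t: minute of day (0..1439); y: typical-day value averaged minute by minute.
        p0 = [y.max(), t[np.argmax(y)], 120.0]            # rough initial guess
        params, _ = curve_fit(gaussian_day, t, y, p0=p0)  # least squares == minimum RMSD
        return params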