A Performance Comparison of Golay and Reed-Muller Coded OFDM Signal for Peak-to-Average Power Ratio Reduction

Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high speed data transmission over multipath fading channels without the need for powerful equalization techniques. A major drawback of OFDM is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal, which can significantly degrade the performance of the power amplifier. In this paper we compare the PAPR reduction performance of Golay and Reed-Muller coded OFDM signals. Our simulations show that the PAPR reduction performance of Golay coded OFDM is better than that of Reed-Muller coded OFDM. Moreover, the code configurations of the Golay and Reed-Muller codes that give the optimum PAPR reduction performance have been identified.
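As a reference for how the reduction is measured, the sketch below computes the PAPR of a single OFDM symbol from an oversampled IFFT of the codeword placed on the subcarriers. The random BPSK codeword is only a placeholder; in the paper the subcarriers would carry a Golay or Reed-Muller codeword.

```python
import numpy as np

def papr_db(codeword, oversample=4):
    """PAPR (in dB) of one OFDM symbol whose subcarriers carry `codeword`.

    The time-domain signal is obtained with an oversampled (zero-padded) IFFT
    so that peaks between the Nyquist-rate samples are not missed.
    """
    n = len(codeword)
    spectrum = np.concatenate([codeword, np.zeros((oversample - 1) * n)])
    x = np.fft.ifft(spectrum)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Placeholder codeword: random BPSK on 16 subcarriers.
rng = np.random.default_rng(0)
codeword = 2 * rng.integers(0, 2, 16) - 1
print(f"PAPR = {papr_db(codeword):.2f} dB")
```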

Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

In order to enhance the contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize such regions, producing overly bright or dark pixels, while local equalization schemes introduce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram over a limited extent determined by its mean and variance. The final image is obtained as the weighted sum of the images produced by the individual sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization applied to each sub-histogram, the over-equalization effect is eliminated. In addition, the resulting image does not lose feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how the segmentation points in the histogram are determined. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared with conventional approaches to demonstrate its superiority.
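A minimal sketch of the segmentation-and-equalization idea is given below: the histogram is split at fixed brightness levels and each segment is equalized only within its own range, so no pixel is pushed outside its segment. The fixed split points, and the omission of the paper's mean/variance-based limits and weighted blending, are simplifications for illustration.

```python
import numpy as np

def sub_histogram_equalize(img, split_points=(85, 170)):
    """Equalize each brightness segment of an 8-bit image within its own range.

    Pixels in [lo, hi) are remapped by the CDF of that segment only, so their
    output values stay inside the segment; this limits the over-equalization
    that a single global CDF would cause.  The paper additionally limits the
    extent of each equalization using the segment mean and variance and blends
    the equalized images with weights; those refinements are omitted here.
    """
    bounds = [0, *split_points, 256]
    out = np.zeros_like(img)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        vals = img[mask].astype(int)
        hist, _ = np.histogram(vals, bins=hi - lo, range=(lo, hi))
        cdf = np.cumsum(hist) / hist.sum()
        out[mask] = (lo + np.round(cdf[vals - lo] * (hi - 1 - lo))).astype(img.dtype)
    return out

# Example on a synthetic low-contrast image.
rng = np.random.default_rng(1)
img = rng.integers(90, 130, size=(64, 64), dtype=np.uint8)
eq = sub_histogram_equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```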

Novel Adaptive Channel Equalization Algorithms by Statistical Sampling

In this paper, novel statistical sampling based equalization techniques and Cellular Neural Network (CNN) based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interference which severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods that combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients, which provides higher performance than traditional Minimum Mean Square Error equalization. Since calculating the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate its gradient, which yields fast equalization and performance superior to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds, and a simple mechanism is derived to identify the dominant samples in real time. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is demonstrated by extensive simulations. The paper also develops a CNN based approach to detection, in which fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of this method has also been analyzed by simulations.

Illumination Invariant Face Recognition using Supervised and Unsupervised Learning Algorithms

In this paper, a comparative study of supervised and unsupervised learning algorithms applied to illumination invariant face recognition has been carried out. The supervised learning has been carried out using a bi-layered artificial neural network having one input, two hidden and one output layer. The gradient descent with momentum and adaptive learning rate back-propagation algorithm has been used to implement the supervised learning, in which both the inputs and the corresponding outputs are provided during training; the network thus performs an inherent clustering and optimized learning of the weights, which provides efficient results. The unsupervised learning has been implemented with a modified Counterpropagation network, which involves clustering followed by application of the Outstar rule to obtain the recognized face. The face recognition system has been developed for recognizing faces under varying illumination intensities, where the database images vary in the angle of illumination with respect to the horizontal and vertical planes. The supervised and unsupervised learning algorithms have been implemented and tested exhaustively, with and without histogram equalization, to obtain efficient results.

A Novel Receiver Algorithm for Coherent Underwater Acoustic Communications

In this paper, we propose a novel receiver algorithm for coherent underwater acoustic communications. The proposed receiver is composed of three parts: (1) Doppler tracking and correction, (2) time reversal channel estimation and combining, and (3) joint iterative equalization and decoding (JIED). To reduce computational complexity and optimize the equalization algorithm, time reversal (TR) channel estimation and combining is adopted to simplify the multi-channel adaptive decision feedback equalizer (ADFE) into a single-channel ADFE without reducing system performance. In addition, turbo principles are used to form the joint iterative ADFE and convolutional decoding (JIED) scheme, in which the ADFE and the decoder exchange soft information iteratively so that the decoding gain enhances the equalizer performance. The simulation results show that the proposed algorithm reduces computational complexity and improves equalizer performance, and therefore the performance of coherent underwater acoustic communications can be improved greatly.
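A minimal sketch of the time reversal combining step, under the usual passive time reversal assumption: each received stream is matched-filtered with the time-reversed, conjugated estimate of its own channel and the outputs are summed, collapsing the multichannel problem into a single equivalent channel for the single-channel ADFE.

```python
import numpy as np

def time_reversal_combine(rx_channels, channel_estimates):
    """Passive time reversal combining of a multichannel receiver.

    Each received stream is matched-filtered with the time-reversed, conjugated
    estimate of its own channel, and the outputs are summed.  The sum behaves
    like the output of a single equivalent channel (the sum of the channels'
    autocorrelations), which a single-channel ADFE can then equalize.
    """
    combined = None
    for rx, h in zip(rx_channels, channel_estimates):
        mf = np.convolve(rx, np.conj(h[::-1]))     # time-reversed matched filter
        combined = mf if combined is None else combined + mf
    return combined
```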

Single Image Defogging Method Using Variational Approach for Edge-Preserving Regularization

In this paper, we propose a variational approach to the single-image defogging problem. In the inference of the atmospheric veil, we define a new functional for the atmospheric veil that satisfies an edge-preserving regularization property. Using the fundamental lemma of the calculus of variations, we derive the Euler-Lagrange equation for the atmospheric veil, which locates the extrema of the given functional. This equation is solved by a gradient descent method with an artificial time parameter. The estimated atmospheric veil is then used to restore the image, and the contrast of the restored image is finally improved by various histogram equalization methods. The experimental results show that the proposed method achieves good defogging results.
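A rough sketch of the gradient-descent solution of an Euler-Lagrange equation for the veil, under assumed model choices that are not taken from the paper: the data term pulls the veil toward the per-pixel minimum colour channel, the regularizer is a smoothed total-variation term, and the fog model I = J(1 - V/A) + V with airlight A is inverted for restoration.

```python
import numpy as np

def estimate_veil(img, lam=0.05, steps=200, dt=0.1, eps=1e-2):
    """Gradient-descent sketch of an edge-preserving veil estimate.

    img : hazy image as a float array in [0, 1] with shape (H, W, 3).
    The descent follows the Euler-Lagrange equation of a data term (V - W)^2
    plus a smoothed total-variation regularizer, where W is the per-pixel
    minimum channel (an assumed choice of data term, not the paper's).
    """
    w = img.min(axis=2)                 # target / upper bound for the veil
    v = w.copy()
    for _ in range(steps):
        vx = np.diff(v, axis=1, append=v[:, -1:])   # forward differences
        vy = np.diff(v, axis=0, append=v[-1:, :])
        mag = np.sqrt(vx ** 2 + vy ** 2 + eps)
        px, py = vx / mag, vy / mag
        div = (np.diff(px, axis=1, prepend=px[:, :1])    # backward differences
               + np.diff(py, axis=0, prepend=py[:1, :]))
        v += dt * (-(v - w) + lam * div)            # descent step
        v = np.clip(v, 0.0, w)                      # keep 0 <= V <= W
    return v

def restore(img, veil, airlight=1.0):
    """Invert the fog model I = J*(1 - V/A) + V to recover the scene radiance J."""
    t = np.maximum(1.0 - veil[..., None] / airlight, 0.1)
    return np.clip((img - veil[..., None]) / t, 0.0, 1.0)
```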

A Cost Function for Joint Blind Equalization and Phase Recovery

In this paper a new cost function for blind equalization is proposed. The proposed cost function, referred to as the modified maximum normalized cumulant criterion (MMNC), is an extension of the previously proposed maximum normalized cumulant criterion (MNC). While the MNC requires a separate phase recovery system after blind equalization, the MMNC performs joint blind equalization and phase recovery. To achieve this, the proposed algorithm maximizes a cost function that considers both the amplitude and the phase of the equalizer output. The simulation results show that the proposed algorithm achieves better channel equalization than the MNC algorithm and can simultaneously correct the phase error, which the MNC algorithm cannot do. The simulation results also show that the MMNC algorithm has lower complexity than the MNC algorithm. Moreover, the MMNC algorithm outperforms the MNC algorithm particularly when the symbol block size is small.

Post Mining: Discovering Valid Rules from Different-Sized Data Sources

A big organization may have multiple branches spread across different locations. Processing the data from these branches becomes a huge task when innumerable transactions take place. Moreover, branches may be reluctant to forward their raw data for centralized processing but are ready to pass on their association rules. Local mining may also generate a large number of rules, and it is not practically possible for all local data sources to be of the same size. A model is proposed for discovering valid rules from different-sized data sources, where the valid rules are the highly weighted rules. These rules are obtained from the high-frequency rules generated by each of the data sources, and a data source selection procedure is used so that rules are synthesized efficiently. Support Equalization is another proposed method, which eliminates low-frequency rules at the local sites themselves, thus reducing the number of rules by a significant amount.
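A minimal sketch of one simple way such a synthesis could work, assuming weights proportional to data source size (the paper's exact weighting and source selection procedures are not reproduced): each branch forwards only its high-frequency rules, and the synthesized support decides which rules are valid.

```python
def synthesize_rules(sources, min_support=0.2):
    """Synthesize rules from several branches, weighting each by its size.

    sources : list of (num_transactions, {rule: local_support}) pairs, where
              each dictionary already holds only that branch's high-frequency
              rules (low-frequency rules were dropped locally, in the spirit
              of Support Equalization).
    Returns the rules whose weighted (synthesized) support clears the threshold.
    """
    total = sum(n for n, _ in sources)
    synthesized = {}
    for n, rules in sources:
        weight = n / total                         # larger sources count more
        for rule, support in rules.items():
            synthesized[rule] = synthesized.get(rule, 0.0) + weight * support
    return {rule: s for rule, s in synthesized.items() if s >= min_support}

branch_a = (10000, {"bread->butter": 0.35, "milk->cereal": 0.22})
branch_b = (4000, {"bread->butter": 0.30, "tea->sugar": 0.40})
print(synthesize_rules([branch_a, branch_b]))
```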

Mitigation of ISI for Next Generation Wireless Channels in Outdoor Vehicular Environments

In order to accommodate various multimedia services, next generation wireless networks are characterized by very high transmission bit rates. In such systems the received signal is limited not only by noise but, especially with increasing symbol rates, often more significantly by the intersymbol interference (ISI) caused by time-dispersive radio channels such as those used in this work. This paper studies the performance of detectors for high bit rate transmission over some worst-case models of frequency-selective fading channels for outdoor mobile radio environments. A number of wireless channels with different power profiles and different numbers of resolvable paths are considered. All radio channels generated in this paper are for outdoor vehicular environments with a Doppler spread of 100 Hz and a carrier frequency of 1800 MHz, and all are relevant to next generation wireless systems. Schemes for mitigation of ISI with adaptive equalizers of different types have been investigated, and their performance has been evaluated in terms of BER measured as a function of SNR.
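As a baseline for the kind of adaptive equalizer investigated, the sketch below trains a linear transversal equalizer with the LMS algorithm on a simple two-ray channel; the channel profile, tap count and step size are illustrative and are not the channel models used in the paper.

```python
import numpy as np

def lms_equalizer(received, training, num_taps=11, mu=0.01):
    """Train a linear transversal equalizer with the LMS algorithm.

    received : baseband samples after the dispersive channel.
    training : known transmitted symbols used for adaptation.
    Returns the adapted tap vector (w^H x convention).
    """
    w = np.zeros(num_taps, dtype=complex)
    buf = np.zeros(num_taps, dtype=complex)
    for n, d in enumerate(training):
        buf = np.roll(buf, 1)
        buf[0] = received[n]
        y = np.vdot(w, buf)            # equalizer output
        e = d - y                      # error against the training symbol
        w += mu * np.conj(e) * buf     # LMS tap update
    return w

# Two-ray style example: a delayed echo causes ISI, which the equalizer mitigates.
rng = np.random.default_rng(2)
symbols = 2 * rng.integers(0, 2, 2000) - 1           # BPSK training sequence
channel = np.array([1.0, 0.0, 0.6])                  # direct path plus echo
rx = np.convolve(symbols, channel)[:len(symbols)]
rx = rx + 0.05 * rng.standard_normal(len(rx))
taps = lms_equalizer(rx, symbols)
equalized = np.convolve(rx, np.conj(taps))[:len(symbols)]
ber = np.mean(np.sign(equalized.real) != symbols)
print(f"BER after equalization: {ber:.4f}")
```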

Image Transmission via Iterative Cellular-Turbo System

To compress 2-D images, improve their bit error performance and also enhance them, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationships of pixels in 2-D images. Each noisy bit-plane value of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder, with an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values in each pixel neighborhood. The scheme yields highly satisfactory results in terms of both Bit Error Rate (BER) and image enhancement for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing a subset of the bit slices, at the cost of resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
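The bit-plane decomposition and re-assembly at the heart of IC-TS can be sketched as follows; the Turbo coding, ICIPA and the iterative feedback are not reproduced, and dropping the lower planes only illustrates the resolution-for-rate trade-off mentioned above.

```python
import numpy as np

def to_bit_planes(img, n_bits=8):
    """Split an n_bits-deep grayscale image into binary bit planes (LSB first)."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(n_bits)]

def from_bit_planes(planes):
    """Re-assemble an image from its bit planes (LSB first)."""
    return sum(plane.astype(np.uint16) << b for b, plane in enumerate(planes))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
planes = to_bit_planes(img)
assert np.array_equal(from_bit_planes(planes), img)

# Keeping only the 4 most significant planes trades resolution for rate.
coarse = from_bit_planes([np.zeros_like(planes[0])] * 4 + planes[4:])
print("max quantization error:", int(np.abs(coarse.astype(int) - img.astype(int)).max()))
```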

Effect of Peak-to-Average Power Ratio Reduction on the Multicarrier Communication System Performance Parameters

Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high speed data transmission over multipath fading channels without the need for powerful equalization techniques. However, the price paid for this high spectral efficiency and less intensive equalization is low power efficiency. OFDM signals are very sensitive to nonlinear effects due to their high Peak-to-Average Power Ratio (PAPR), which leads to power inefficiency in the RF section of the transmitter. This paper investigates the effect of PAPR reduction on the performance parameters of a multicarrier communication system. The performance parameters considered are the power consumption of the Power Amplifier (PA) and the Digital-to-Analog Converter (DAC), the power amplifier efficiency, the SNR of the DAC, and the BER performance of the system. From our analysis it is found that, irrespective of the PAPR reduction technique employed, the power consumption of the PA and DAC decreases and the power amplifier efficiency increases as the PAPR is reduced. Moreover, it is shown that for a given BER performance the required Input Back-Off (IBO) decreases with a reduction in PAPR.
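A back-of-the-envelope illustration of the PA-efficiency side of this analysis, using the textbook approximation for an ideal class-A amplifier whose input back-off is set equal to the signal PAPR (an assumption for illustration, not the paper's exact amplifier model):

```python
def class_a_efficiency(papr_db):
    """Ideal class-A PA efficiency when the input back-off equals the signal PAPR.

    With IBO (linear) = 10**(PAPR_dB / 10), the maximum drain efficiency of an
    ideal class-A amplifier is 0.5 / IBO -- a textbook approximation, not the
    exact amplifier model analysed in the paper.
    """
    ibo = 10 ** (papr_db / 10)
    return 0.5 / ibo

# Reducing the PAPR by 3 dB roughly doubles the achievable efficiency.
for papr in (12.0, 9.0, 6.0):
    print(f"PAPR = {papr:4.1f} dB -> efficiency = {100 * class_a_efficiency(papr):5.2f} %")
```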

On the Symbol Based Decision Feedback Equalizer

Decision feedback equalizers (DFEs) usually outperform linear equalizers on channels with intersymbol interference. However, DFE performance is highly dependent on the availability of reliable past decisions. Hence, in coded systems, where reliable decisions are only available after the full block has been decoded, the performance of the DFE is affected. A symbol based DFE is a DFE that uses decisions only after the block has been decoded. In this paper we derive the optimal settings of both the feedforward and feedback taps of the symbol based equalizer. We also present a novel symbol based DFE filter bank, derive the optimal settings of its taps, and show that it outperforms the classic DFE in terms of complexity and/or performance.
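For context, the sketch below runs the classic DFE that the symbol based variants are compared against: the feedback filter cancels post-cursor ISI using immediate slicer decisions, which is exactly the reliance on past decisions discussed above. The tap values are assumed to be supplied (e.g., from the optimization in the paper); they are not computed here.

```python
import numpy as np

def dfe(received, ff_taps, fb_taps, constellation=np.array([-1.0, 1.0])):
    """Classic decision feedback equalizer over a block of received samples.

    ff_taps : feedforward taps applied to the received samples.
    fb_taps : feedback taps applied to past slicer decisions; their output is
              subtracted to cancel post-cursor ISI, so performance hinges on
              those past decisions being reliable.
    """
    ff_buf = np.zeros(len(ff_taps))
    past_decisions = np.zeros(len(fb_taps))
    out = []
    for r in received:
        ff_buf = np.roll(ff_buf, 1)
        ff_buf[0] = r
        z = np.dot(ff_taps, ff_buf) - np.dot(fb_taps, past_decisions)
        d = constellation[np.argmin(np.abs(constellation - z))]   # slicer
        past_decisions = np.roll(past_decisions, 1)
        past_decisions[0] = d
        out.append(d)
    return np.array(out)
```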

Two Different Solutions for Gigabit Ethernet Transmission over POF

Two completely different approaches to Gigabit Ethernet compliant stream transmission over 50 m of 1 mm PMMA SI-POF have been experimentally demonstrated and are compared in this paper. The first solution is based on transmission with a commercial RC-LED and a careful optimization of the physical layer architecture, realized during the POF-PLUS EU Project. The second solution exploits the performance of an edge-emitting laser at the transmitter side in order to avoid any electrical equalization at the receiver side.

A Symbol by Symbol Clustering Based Blind Equalizer

A new blind symbol-by-symbol equalizer is proposed. The operation of the proposed equalizer is based on the geometric properties of the two-dimensional data constellation. An unsupervised clustering technique is used to locate the clusters formed by the received data, and the symmetry properties of the constellation are subsequently utilized to label the clusters. Following this step, the received data are compared to the clusters and decisions are made on a symbol-by-symbol basis by assigning to each data point the label of the nearest cluster. The operation of the equalizer is investigated in both linear and nonlinear channels, and its performance is compared to that of a CMA-based blind equalizer.
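A minimal sketch of the two steps described above, using plain k-means as a stand-in for the paper's unsupervised clustering technique; labelling the clusters from the symmetry of the constellation is omitted, so the decisions below return cluster centres rather than symbol labels.

```python
import numpy as np

def kmeans_clusters(received, k, iters=50, seed=0):
    """Locate the cluster centres formed by the received 2-D constellation."""
    rng = np.random.default_rng(seed)
    pts = np.column_stack([received.real, received.imag])
    centres = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pts[labels == j].mean(axis=0)
    return centres

def nearest_cluster_decisions(received, centres):
    """Symbol-by-symbol decisions: each sample is mapped to its nearest cluster centre."""
    pts = np.column_stack([received.real, received.imag])
    dist = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
    return centres[dist.argmin(axis=1)]
```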

Increasing Convergence Rate of a Fractionally-Spaced Channel Equalizer

In this paper a technique for increasing the convergence rate of a fractionally spaced channel equalizer is proposed. Instead of symbol-spaced updating of the equalizer filter, a mechanism is devised to update the filter at a higher rate. This makes the equalizer filter converge faster and is therefore less time-consuming. The proposed technique has been simulated and tested on two-ray channel models with various delay spreads, including minimum-phase and nonminimum-phase channels. Simulation results suggest that the proposed technique outperforms the conventional technique of symbol-spaced updating of the equalizer filter.
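For reference, the sketch below implements the conventional baseline: a T/2 fractionally spaced LMS equalizer whose taps are updated once per symbol. The paper's contribution, updating the filter at a higher than symbol rate, would replace the once-per-symbol update marked in the code; it is not reproduced here.

```python
import numpy as np

def fse_lms(rx, training, num_taps=16, mu=0.005, sps=2):
    """Baseline T/2 fractionally spaced LMS equalizer with symbol-spaced updates.

    rx       : received signal sampled at sps samples per symbol.
    training : known symbols; exactly one tap update is made per symbol.
    """
    w = np.zeros(num_taps, dtype=complex)
    out = []
    for k, d in enumerate(training):
        n = k * sps
        if n + num_taps > len(rx):
            break
        x = rx[n:n + num_taps][::-1]       # window of fractionally spaced samples
        y = np.vdot(w, x)
        e = d - y
        # Symbol-spaced update; the proposed technique would update the taps
        # more often than once per symbol here.
        w += mu * np.conj(e) * x
        out.append(y)
    return w, np.array(out)
```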

An Adaptive Mammographic Image Enhancement in Orthogonal Polynomials Domain

X-ray mammography is the most effective method for the early detection of breast diseases. However, the typical diagnostic signs, such as microcalcifications and masses, are difficult to detect because mammograms are low-contrast and noisy. In this paper, a new algorithm for image denoising and enhancement in the Orthogonal Polynomials Transformation (OPT) domain is proposed to help radiologists screen mammograms. In this method, a set of OPT edge coefficients is scaled to a new set by a scale factor called the OPT scale factor. The new set of coefficients is then inverse transformed, resulting in a contrast-improved image. Applications of the proposed method to mammograms with subtle lesions are shown. To validate the effectiveness of the proposed method, we compare the results to those obtained by the Histogram Equalization (HE) and Unsharp Masking (UM) methods. Our preliminary results strongly suggest that the proposed method offers considerably improved enhancement capability over the HE and UM methods.

Improved Lung Nodule Visualization on Chest Radiographs using Digital Filtering and Contrast Enhancement

Early detection of lung cancer through chest radiography is a widely used method due to its relatively affordable cost. In this paper, an approach to improve lung nodule visualization on chest radiographs is presented. The approach makes use of a linear phase high-frequency emphasis filter for digital filtering and histogram equalization for contrast enhancement. Results obtained indicate that a filtered image can reveal sharper edges and provide more details. Also, contrast enhancement offers a way to further enhance the global (or local) visualization by equalizing the histogram of the pixel values within the whole image (or a region of interest). The work aims to improve lung nodule visualization on chest radiographs to aid the detection of lung cancer, which is currently the leading cause of cancer deaths worldwide.
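A minimal sketch of the two processing stages, with illustrative filter parameters (the Gaussian high-pass shape, cutoff and gains are assumptions, not the paper's filter design): a zero-phase high-frequency emphasis filter applied in the frequency domain, followed by global histogram equalization.

```python
import numpy as np

def high_freq_emphasis(img, alpha=1.0, beta=1.5, cutoff=0.1):
    """Zero-phase high-frequency emphasis filtering in the frequency domain.

    H(u, v) = alpha + beta * H_hp(u, v), with a Gaussian high-pass H_hp;
    alpha preserves the low-frequency content while beta boosts edges.
    """
    h, w = img.shape
    u = np.fft.fftfreq(h)[:, None]
    v = np.fft.fftfreq(w)[None, :]
    hp = 1.0 - np.exp(-(u ** 2 + v ** 2) / (2 * cutoff ** 2))
    spectrum = np.fft.fft2(img.astype(float))
    filtered = np.real(np.fft.ifft2(spectrum * (alpha + beta * hp)))
    return np.clip(filtered, 0, 255).astype(np.uint8)

def histogram_equalize(img):
    """Global histogram equalization of an 8-bit image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / img.size
    return np.round(cdf[img] * 255).astype(np.uint8)

# Pipeline: filter first, then equalize the contrast of the filtered image.
rng = np.random.default_rng(5)
radiograph = rng.integers(60, 180, size=(128, 128), dtype=np.uint8)
enhanced = histogram_equalize(high_freq_emphasis(radiograph))
```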

Adaptive Equalization Using Controlled Equal Gain Combining for Uplink/Downlink MC-CDMA Systems

In this paper we propose an enhanced equalization technique for multi-carrier code division multiple access (MC-CDMA) systems. The method is based on controlling the Equal Gain Combining (EGC) technique: we introduce a new level changer into the EGC equalizer in order to adapt the equalization parameters to the channel coefficients. The optimal equalization level is first determined by channel training. The new approach drastically reduces the multiuser interference caused by interferers without increasing the noise power. To evaluate the performance of the proposed equalizer, theoretical analysis and numerical results are given.
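A per-subcarrier sketch of the combining being controlled: plain EGC only corrects the channel phase, and the optional level below caps the contribution of deeply faded subcarriers. How the level acts on the gains is an illustrative assumption; the paper's level changer is obtained by channel training and is not reproduced here.

```python
import numpy as np

def egc_equalize(rx_subcarriers, channel, level=None):
    """EGC-style per-subcarrier equalization with an optional control level.

    Plain EGC only corrects the channel phase: g_k = conj(h_k) / |h_k|.
    If a level is given, subcarriers whose gain falls below it are attenuated
    instead of being restored at full strength, so deep fades do not amplify
    the noise (illustrative interpretation of the level control).
    """
    mag = np.abs(channel)
    gain = np.conj(channel) / np.maximum(mag, 1e-12)   # phase correction only
    if level is not None:
        gain = gain * np.where(mag >= level, 1.0, mag / level)
    return rx_subcarriers * gain
```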

Design of Auto Exposure Unit Based On 2-Way Histogram Equalization

Histogram equalization is often used in image enhancement, but it can also be used for auto exposure. However, conventional histogram equalization does not work well when many pixels are concentrated in a narrow luminance range. This paper proposes an auto exposure method based on 2-way histogram equalization, in which two cumulative distribution functions are used: one from dark to bright and the other from bright to dark. The proposed auto exposure method is also designed and implemented for image signal processors with full-HD images.
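A small sketch of an auto exposure decision built from the two CDFs, one accumulated from dark to bright and one from bright to dark; how the two are combined into a brightness estimate (the midpoint of the two crossing levels here) is an illustrative assumption, since the abstract does not give the paper's exact rule.

```python
import numpy as np

def two_way_brightness(img, p=0.5):
    """Estimate scene brightness from two CDFs (illustrative sketch).

    level_lo : first level at which the dark-to-bright CDF reaches p.
    level_hi : last level at which the bright-to-dark CDF still reaches p.
    Their midpoint is used as the brightness estimate (an assumed combination).
    """
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf_fwd = np.cumsum(hist) / img.size              # P(X <= v), dark -> bright
    cdf_rev = np.cumsum(hist[::-1])[::-1] / img.size  # P(X >= v), bright -> dark
    level_lo = int(np.argmax(cdf_fwd >= p))
    level_hi = int(255 - np.argmax(cdf_rev[::-1] >= p))
    return (level_lo + level_hi) / 2

def auto_exposure_gain(img, target=128.0):
    """Exposure gain that brings the estimated scene brightness to the target."""
    return target / max(two_way_brightness(img), 1.0)

rng = np.random.default_rng(4)
dark_scene = rng.integers(20, 60, size=(480, 640), dtype=np.uint8)  # narrow dark band
print(f"suggested exposure gain: {auto_exposure_gain(dark_scene):.2f}")
```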

Efficient Variants of Square Contour Algorithm for Blind Equalization of QAM Signals

A new distance-adjusted approach is proposed in which static square contours are defined around an estimated symbol in a QAM constellation, creating regions that correspond to fixed step sizes and weighting factors. As a result, the equalizer tap adjustment consists of a linearly weighted sum of adaptation criteria scaled by a variable step size. This approach is the basis of two new algorithms: the Variable step size Square Contour Algorithm (VSCA) and the Variable step size Square Contour Decision-Directed Algorithm (VSDA). The proposed schemes are compared with existing blind equalization algorithms of the SCA family in terms of convergence speed, constellation eye opening and residual ISI suppression. Simulation results for 64-QAM signaling over empirically derived microwave radio channels confirm the efficacy of the proposed algorithms. An RTL implementation of the blind adaptive equalizer based on the proposed schemes is presented, and the system is configured to operate in VSCA error-signal mode for square QAM signals up to 64-QAM.
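The region-to-step-size mapping can be sketched as below: the Chebyshev distance of the equalizer output from the nearest constellation point determines which square contour region it falls in, and each region carries its own step size. The radii and step sizes are illustrative placeholders, and the weighting factors and the full VSCA/VSDA tap updates are not reproduced.

```python
import numpy as np

def square_contour_step_size(y, constellation,
                             radii=(0.2, 0.5, 1.0),
                             steps=(1e-4, 5e-4, 2e-3, 5e-3)):
    """Pick a step size from static square contour regions around a symbol.

    y             : complex equalizer output sample.
    constellation : 1-D complex array of QAM symbols.
    The Chebyshev distance max(|Re(e)|, |Im(e)|) to the nearest symbol makes
    the region boundaries square contours; radii and steps are placeholders.
    """
    e = y - constellation[np.argmin(np.abs(constellation - y))]
    chebyshev = max(abs(e.real), abs(e.imag))
    for r, mu in zip(radii, steps[:-1]):
        if chebyshev <= r:
            return mu
    return steps[-1]

# Example: 4-QAM constellation, an output close to a symbol gets a small step.
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
print(square_contour_step_size(0.9 + 1.05j, qam4))
```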