Robust Coherent Noise Suppression by Point Estimation of the Cauchy Location Parameter

This paper introduces a new point estimation algorithm, with particular focus on coherent noise suppression, given several measurements of the device under test, under the assumptions that 1) the noise is first-order stationary and 2) the device under test is linear and time-invariant. The algorithm exploits the robustness of the Pitman estimator of the Cauchy location parameter through the initial scaling of the test signal by a centred Gaussian variable of predetermined variance. Mathematical derivations and simulation results show that the proposed algorithm is more accurate, and more consistently robust to outliers across density functions with different tail behaviours, than the conventional methods of the sample mean (coherent averaging) and the sample median.
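
As a rough numerical illustration of the kind of comparison made above (not the paper's algorithm or signal model), the following sketch contrasts the sample mean, the sample median, and a Cauchy maximum-likelihood location estimate, used here as a simple stand-in for the Pitman estimator, on heavy-tailed repeated measurements.

```python
# Illustrative sketch: compare three location estimators on heavy-tailed noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_signal = 1.0                       # hypothetical repeated measurement value
n_repeats, n_trials = 50, 500

errors = {"mean": [], "median": [], "cauchy_mle": []}
for _ in range(n_trials):
    # heavy-tailed (Student-t, 2 d.o.f.) noise around the true value
    x = true_signal + rng.standard_t(df=2, size=n_repeats)
    errors["mean"].append(np.mean(x) - true_signal)
    errors["median"].append(np.median(x) - true_signal)
    loc, _scale = stats.cauchy.fit(x)   # ML estimate of the Cauchy location
    errors["cauchy_mle"].append(loc - true_signal)

for name, e in errors.items():
    print(f"{name:>10s}: RMSE = {np.sqrt(np.mean(np.square(e))):.3f}")
```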

Confidence Intervals for the Difference of Two Normal Population Variances

Motivated by the recent work of Herbert, Hayen, Macaskill and Walter [Interval estimation for the difference of two independent variances. Communications in Statistics, Simulation and Computation, 40: 744-758, 2011], we investigate new confidence intervals for the difference between two normal population variances based on the generalized confidence interval of Weerahandi [Generalized Confidence Intervals. Journal of the American Statistical Association, 88(423): 899-905, 1993] and the closed-form method of variance estimation of Zou, Huo and Taleban [Simple confidence intervals for lognormal means and their differences with environmental applications. Environmetrics, 20: 172-180, 2009]. Monte Carlo simulation results indicate that our proposed confidence intervals give better coverage probability than the existing confidence interval. The two new confidence intervals also perform similarly in terms of coverage probability and average interval width.
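
The following minimal sketch illustrates the generalized-confidence-interval idea for the variance difference; the sample sizes, true variances, and confidence level are arbitrary assumptions, and the pivots are the standard chi-square-based generalized pivotal quantities rather than anything specific to the proposed intervals.

```python
# Monte Carlo sketch of a generalized confidence interval (in the spirit of
# Weerahandi, 1993) for the difference of two normal variances.
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 20, 25
x = rng.normal(0.0, 2.0, n1)    # sigma1^2 = 4
y = rng.normal(0.0, 1.0, n2)    # sigma2^2 = 1
s1, s2 = x.var(ddof=1), y.var(ddof=1)

# Generalized pivotal quantities: (n - 1) s^2 / chi2_{n-1} for each variance.
B = 100_000
g1 = (n1 - 1) * s1 / rng.chisquare(n1 - 1, B)
g2 = (n2 - 1) * s2 / rng.chisquare(n2 - 1, B)
diff = g1 - g2

lo, hi = np.percentile(diff, [2.5, 97.5])
print(f"95% GCI for sigma1^2 - sigma2^2: ({lo:.2f}, {hi:.2f})")
```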

Image Contrast Enhancement Based on a Sub-histogram Equalization Technique without Over-equalization Noise

This paper presents a new histogram equalization scheme designed to enhance contrast in regions where pixels have similar intensities. Conventional global equalization schemes over-equalize such regions, producing excessively bright or dark pixels, while local equalization schemes introduce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram to a limited extent determined by its mean and variance. The final image is obtained as the weighted sum of the equalized images produced by the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since those regions are equalized separately. The paper also describes how the segmentation points in the histogram are determined. The proposed algorithm has been tested on more than 100 images with various contrast characteristics, and the results are compared with conventional approaches to demonstrate its superiority.
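
The sketch below conveys the basic mechanism of brightness-based sub-histogram equalization in a much simplified form; the single split at the mean grey level and the per-segment range mapping are assumptions for illustration, not the paper's segmentation or weighting rules.

```python
# Simplified sub-histogram equalization: each segment of the histogram is
# equalized only within its own intensity range, which limits how far any
# pixel can be pushed and so avoids global over-equalization.
import numpy as np

def sub_histogram_equalize(img):
    """img: 2-D uint8 array; returns a uint8 array of the same shape."""
    split = int(img.mean())                      # segmentation point (mean brightness)
    out = np.empty_like(img)
    for lo, hi, mask in [(0, split, img <= split),
                         (split + 1, 255, img > split)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum() / vals.size
        # map each sub-range onto itself so pixels never leave their segment
        out[mask] = lo + np.round(cdf[vals - lo] * (hi - lo)).astype(np.uint8)
    return out
```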

Neural Network Imputation in Complex Survey Design

Missing data pose many analysis challenges. In the case of a complex survey design, in addition to dealing with missing data, researchers need to account for the sampling design to achieve useful inferences. Methods for incorporating sampling weights in neural network imputation were investigated to account for complex survey designs. A variance estimate that accounts for both the imputation uncertainty and the sampling design when using neural networks is also provided. A simulation study was conducted to compare estimation results based on complete-case analysis, multiple imputation using Markov chain Monte Carlo, and neural network imputation. Furthermore, a public-use dataset was used as an example to illustrate neural network imputation under a complex survey design.
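
One simple way to let survey weights influence a neural-network imputation model, shown purely as an assumed illustration rather than the investigated method, is to bootstrap the complete cases proportionally to their weights before fitting the network.

```python
# Weight-proportional bootstrap of complete cases, then neural-network imputation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def weighted_nn_impute(X, y, weights, seed=None):
    """X: covariates; y: variable with missing values (np.nan); weights: survey weights."""
    rng = np.random.default_rng(seed)
    observed = ~np.isnan(y)
    idx = np.flatnonzero(observed)
    p = weights[idx] / weights[idx].sum()
    boot = rng.choice(idx, size=idx.size, replace=True, p=p)   # weighted resample
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[boot], y[boot])
    y_imp = y.copy()
    y_imp[~observed] = model.predict(X[~observed])              # fill in the gaps
    return y_imp
```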

Comparing Data Analysis, Communication and Information Technologies Expertise Levels in Undergraduate Psychology Students

This study has two aims: first, to compare the level of expertise in data analysis and in communication and information technologies among undergraduate psychology students; second, to verify the factor structure of the E-ETICA (Escala de Experticia en Tecnologias de la Informacion, la Comunicacion y el Análisis, or Data Analysis, Communication and Information Expertise Scale), which had previously shown excellent internal consistency (α = 0.92) as well as a simple factor structure. Three factors (Complex and Basic Information and Communications Technologies, and E-Searching and Download Abilities) explained 63% of the variance. In the present study, 260 students (119 juniors and 141 seniors) were asked to respond to the E-ETICA (a 16-item, five-point Likert scale from 1, no mastery, to 5, total mastery). The results show that junior and senior students report very similar expertise levels; however, the E-ETICA presents a different factor structure for juniors, in which four factors also explained 63% of the variance: Information E-Searching, Download and Processing; Data Analysis; Organization; and Communication Technologies.

Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis

The wavelet-transform approach has been widely used for image denoising because of its multi-resolution nature, its ability to produce high levels of noise reduction, and the low level of distortion it introduces. However, in removing noise, high-frequency components belonging to edges are also removed, which blurs signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting-wavelet spatial-frequency decomposition and reconstruction is developed, with the advantages of computational efficiency and minimized boundary effects. Adaptive thresholding based on local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to object boundaries. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and that it can be adapted to different classes of images and types of noise. The proposed algorithm provides a potential solution, with parallel computation, for real-time or embedded-system applications.
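
A rough sketch of the block-wise, variance-driven thresholding idea is given below; it uses an ordinary single-level DWT from PyWavelets with a median-based noise estimate rather than the paper's lifting scheme and edge-strength measure.

```python
# Block-wise wavelet soft-thresholding with a per-block noise estimate.
import numpy as np
import pywt

def blockwise_wavelet_denoise(img, block=32, k=1.5):
    out = np.zeros_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            patch = img[r:r + block, c:c + block].astype(float)
            cA, (cH, cV, cD) = pywt.dwt2(patch, "haar")
            # robust noise estimate from the diagonal detail band
            sigma = np.median(np.abs(cD)) / 0.6745
            thr = k * sigma
            details = tuple(pywt.threshold(d, thr, mode="soft") for d in (cH, cV, cD))
            rec = pywt.idwt2((cA, details), "haar")
            out[r:r + block, c:c + block] = rec[:patch.shape[0], :patch.shape[1]]
    return out
```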

Adaptive Block State Update Method for Separating Background

In this paper, we propose a robust moving-object detection method for night street scenes affected by sudden artificial illumination, based on a block-wise update of the reference background model using block-state analysis. The experimental input is a colour video sequence acquired from a stationary camera. When artificial illumination such as street lights or sign lights appears suddenly, the reference background model is updated with this information. Natural illumination generally changes gradually over time, whereas artificial illumination appears abruptly, so the proposed method detects artificial illumination through a two-stage process. The first stage compares the current image with the reference background block by block, identifying the blocks that have changed. The second stage compares the edge map of the current image with the edge map of the reference background, which makes it possible to decide whether a block change is caused by illumination. With this information, objects and artificial illumination can be detected accurately, and the reference background can be generated more cleanly. Each block is classified by block-state analysis into one of four states (transient, stationary, background, artificial illumination); Fig. 1 shows the characteristics of each block state [1]. Experimental results show that the presented approach works well in the presence of illumination variation.
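
The following is one hedged interpretation of the two-stage block test, not the authors' exact rules: a block whose intensity changes while its edge structure stays similar is treated as an illumination change, whereas a block whose edges also change is treated as a candidate moving object. The thresholds below are arbitrary.

```python
# Block classification by intensity difference and edge-map difference.
import numpy as np

def classify_blocks(frame, background, block=16, t_int=15.0, t_edge=10.0):
    """frame, background: 2-D grayscale arrays of equal shape."""
    def edge_map(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    e_f, e_b = edge_map(frame), edge_map(background)
    labels = {}
    for r in range(0, frame.shape[0] - block + 1, block):
        for c in range(0, frame.shape[1] - block + 1, block):
            sl = (slice(r, r + block), slice(c, c + block))
            d_int = np.mean(np.abs(frame[sl].astype(float) - background[sl]))
            d_edge = np.mean(np.abs(e_f[sl] - e_b[sl]))
            if d_int < t_int:
                labels[(r, c)] = "background"
            elif d_edge < t_edge:
                labels[(r, c)] = "illumination"   # brightness changed, edges did not
            else:
                labels[(r, c)] = "object"
    return labels
```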

Multivariate Statistical Analysis of Decathlon Performance Results in Olympic Athletes (1988-2008)

The performance results of the athletes who competed in the 1988-2008 Olympic Games were analyzed (n = 166). The data were obtained from the official IAAF protocols. In the principal component analysis, the first three principal components explained 70% of the total variance. On the 1st principal component (43.1% of the total variance explained), the largest factor loadings were for the 100 m (0.89), 400 m (0.81), 110 m hurdles (0.76), and long jump (–0.72). This factor can be interpreted as 'sprinting performance'. The loadings on the 2nd factor (15.3% of the total variance) presented a counter-intuitive throwing-jumping combination: the highest loadings were for the throwing events (javelin throw 0.76, shot put 0.74, discus throw 0.73) and also for the jumping events (high jump 0.62, pole vault 0.58). On the 3rd factor (11.6% of the total variance), the largest loading was for the 1500 m run (0.88); all other loadings were below 0.4.
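
For readers who wish to reproduce this type of analysis, the sketch below shows how explained-variance ratios and component loadings of standardized decathlon results can be computed; the data here are random placeholders, not the IAAF protocol results.

```python
# PCA of standardized decathlon results: explained variance and loadings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

events = ["100m", "LJ", "SP", "HJ", "400m", "110mH", "DT", "PV", "JT", "1500m"]
rng = np.random.default_rng(2)
X = rng.normal(size=(166, len(events)))          # placeholder for real results

pca = PCA(n_components=3).fit(StandardScaler().fit_transform(X))
print("explained variance (%):", np.round(100 * pca.explained_variance_ratio_, 1))

# loadings: correlations of each event with each principal component
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
for ev, row in zip(events, loadings):
    print(f"{ev:>6s}", np.round(row, 2))
```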

Intelligent Fuzzy Input Estimator for the Input Force on the Rigid Bar Structure System

In this study, an intelligent fuzzy input estimator is used to estimate the input force on a rigid bar structural system. The two main components of the method are a fuzzy Kalman filter without the input term and a fuzzy weighting recursive least squares estimator. The practicability and accuracy of the proposed method were verified with numerical simulations in which the input forces of a rigid bar structural system were estimated from the output responses. To examine the accuracy of the proposed method, the rigid bar structural system was subjected to periodic sinusoidal dynamic loading. The excellent performance of the estimator is demonstrated by comparison with cases that use a different weighting function and an improperly chosen initial process noise covariance. The estimated results are in good agreement with the true values in all cases tested.

Signature Recognition and Verification using Hybrid Features and Clustered Artificial Neural Networks (ANNs)

A signature represents an individual characteristic of a person and can be used for his or her validation; for such an application, proper modeling is essential. Here we propose an offline signature recognition and verification scheme based on the extraction of several features, including one hybrid feature set, from the input signature and their comparison with already trained forms. Feature points are classified using statistical parameters such as mean and variance. The scanned signature is slant-normalized using a very simple algorithm, with the intention of making the system robust, and this is found to be very helpful. The slant correction is further aided by the use of an Artificial Neural Network (ANN). The suggested scheme discriminates originals from forged signatures for both simple and random forgeries. The primary objective is to reduce the two crucial parameters, False Acceptance Rate (FAR) and False Rejection Rate (FRR), with less training time, and to make the system dynamic by using a cluster of ANNs forming a multiple classifier system.
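
A toy sketch of the overall pipeline is given below; the statistical features and the network architecture are illustrative assumptions and do not reproduce the hybrid feature set or the clustered ANN structure proposed in the paper.

```python
# Simple statistical features from a binary signature image, fed to a small ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier

def signature_features(img):
    """img: 2-D binary array (1 = ink, assumed non-empty). Returns a feature vector."""
    ys, xs = np.nonzero(img)
    col_density = img.mean(axis=0)
    row_density = img.mean(axis=1)
    return np.array([xs.mean(), xs.var(), ys.mean(), ys.var(),
                     col_density.mean(), col_density.var(),
                     row_density.mean(), row_density.var()])

def train_verifier(samples):
    """samples: list of (binary image, writer label) pairs (hypothetical training data)."""
    X = np.array([signature_features(img) for img, _ in samples])
    y = np.array([label for _, label in samples])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
    return clf.fit(X, y)
```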

The Use of Complex Contourlet Transform on Fusion Scheme

Image fusion aims to enhance the perception of a scene by combining important information captured by different sensors. The Dual-Tree Complex Wavelet Transform (DT-CWT) has been thoroughly investigated for image fusion, since it offers approximate shift invariance and directional selectivity. However, it can only handle limited directional information. To allow a more flexible directional expansion for images, we propose a novel fusion scheme, referred to as the complex contourlet transform (CCT), which incorporates directional filter banks (DFB) into the DT-CWT. As a result, it deals efficiently with images containing contours and textures while retaining the property of shift invariance. Experimental results demonstrate that the method delivers high-quality fusion performance and can facilitate many image processing applications.
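
The sketch below shows a deliberately simplified fusion rule using an ordinary discrete wavelet transform from PyWavelets in place of the proposed complex contourlet transform: detail coefficients are fused by the maximum-absolute-value rule and approximation coefficients by averaging.

```python
# DWT-domain image fusion with the max-abs rule for detail coefficients.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                       # average the approximations
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in [(ha, hb), (va, vb), (da, db)]))
    return pywt.waverec2(fused, wavelet)
```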

Performance Optimization of Data Mining Application Using Radial Basis Function Classifier

Text data mining is a process of exploratory data analysis. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before examining the data. This paper describes a proposed radial basis function (RBF) classifier that performs comparative cross-validation against an existing RBF classifier. The feasibility and benefits of the proposed approach are demonstrated on a data mining problem, direct marketing, which has become an important application field of data mining. Comparative cross-validation involves estimating accuracy either by stratified k-fold cross-validation or by equivalent repeated random subsampling. While the proposed method may have high bias, its performance (accuracy estimation in our case) may also be poor due to high variance; thus, the accuracy with the proposed RBF classifier was lower than with the existing RBF classifier. However, there is a small improvement in runtime and a larger improvement in precision and recall. In the proposed method, both classification accuracy and prediction accuracy are determined, and the prediction accuracy is comparatively high.
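
The following sketch shows generic stratified k-fold accuracy estimation; an SVM with an RBF kernel is used purely as a stand-in for the paper's RBF classifier, and the direct-marketing data are random placeholders.

```python
# Stratified k-fold accuracy estimation for an RBF-kernel classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))                                   # placeholder customer features
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # placeholder response label

clf = SVC(kernel="rbf", gamma="scale")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```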

Noise Analysis of Single-Ended Input Differential Amplifier using Stochastic Differential Equation

In this paper, we analyze the effect of noise in a single-ended input differential amplifier operating at high frequencies. Both extrinsic and intrinsic noise are analyzed using a time-domain method that employs techniques from stochastic calculus. Stochastic differential equations are used to obtain the autocorrelation function of the output noise voltage and other solution statistics such as the mean and variance. The analysis leads to important design implications and suggests changes in the device parameters for improved noise characteristics of the differential amplifier.
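
As a toy illustration of the time-domain approach, the sketch below simulates an Ornstein-Uhlenbeck-type stochastic differential equation by the Euler-Maruyama method and estimates the mean, variance, and autocorrelation of the simulated noise; the model and parameter values are assumptions, not the amplifier equations derived in the paper.

```python
# Euler-Maruyama simulation of dv = -(v / tau) dt + sigma dW as a stand-in
# for the output noise voltage, with empirical solution statistics.
import numpy as np

rng = np.random.default_rng(4)
tau, sigma = 1e-6, 1e-3            # assumed time constant [s] and noise intensity
dt, n_steps, n_paths = 1e-8, 5000, 500

v = np.zeros(n_paths)
trace = np.empty((n_steps, n_paths))
for k in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    v = v - (v / tau) * dt + sigma * dW       # Euler-Maruyama update
    trace[k] = v

print("steady-state mean:", trace[-1].mean())
print("steady-state variance:", trace[-1].var(), "(theory:", sigma**2 * tau / 2, ")")
lag = 100
autocorr = np.mean(trace[-1 - lag] * trace[-1]) / trace[-1].var()
print("normalized autocorrelation at lag", lag * dt, "s:", autocorr)
```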

Distributed Estimation Using an Improved Incremental Distributed LMS Algorithm

In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation noise conditions. In the first case, we assume that some sensors in the network have high observation noise variance (noisy sensors). In the second case, the observation noise variance is assumed to differ from sensor to sensor, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present in the network, the performance of conventional distributed adaptive estimation algorithms such as the incremental distributed least mean squares (IDLMS) algorithm decreases drastically, and that detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter of each sensor is adjusted according to its observation noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
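
A compact sketch of incremental distributed LMS over a ring of sensors is given below, with the step size of each sensor scaled down according to its noise variance; the particular scaling rule is an illustrative assumption rather than the adjustment proposed in the paper.

```python
# Incremental LMS around a ring of sensors with per-sensor step sizes.
import numpy as np

rng = np.random.default_rng(5)
n_sensors, dim, n_iter = 10, 4, 500
w_true = rng.normal(size=dim)
noise_var = rng.uniform(0.01, 1.0, size=n_sensors)    # differs across sensors
mu = 0.05 / (1.0 + noise_var / noise_var.min())       # smaller step for noisier sensors

w = np.zeros(dim)
for _ in range(n_iter):
    for k in range(n_sensors):                         # incremental pass around the ring
        u = rng.normal(size=dim)                       # regressor at sensor k
        d = u @ w_true + rng.normal(scale=np.sqrt(noise_var[k]))
        w = w + mu[k] * u * (d - u @ w)                # local LMS update
print("estimation error:", np.linalg.norm(w - w_true))
```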

An Improved Pilot-Signal-Based Channel Estimation Algorithm for OFDM Systems

This paper presents a new pilot-based channel estimation algorithm for OFDM systems in new-generation, high-data-rate communication systems. In orthogonal frequency division multiplexing (OFDM) systems over fast-varying fading channels, channel estimation and tracking are generally carried out by transmitting known pilot symbols at given positions of the frequency-time grid. We derive an improved algorithm based on calculating the mean and the variance of adjacent pilot signals for a specific distribution of the pilots in the OFDM frequency-time grid, and then computing all of the unknown channel coefficients from these mean and variance equations. Simulation results show that the performance of the OFDM system improves as the channel length increases, since the accuracy of the estimated channel increases with this low-complexity algorithm. Moreover, the number of pilot signals that must be inserted into the OFDM signal is reduced, which increases the throughput of the OFDM system compared with other pilot distributions such as comb-type and block-type channel estimation.
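
As a baseline for comparison, the sketch below implements standard least-squares estimation at comb-type pilot positions followed by linear interpolation; it is a conventional stand-in, not the mean/variance-based algorithm proposed here.

```python
# LS channel estimation at comb-type pilots, then linear interpolation.
import numpy as np

rng = np.random.default_rng(6)
n_sub, pilot_step = 64, 8
h_time = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)  # 4-tap channel
H = np.fft.fft(h_time, n_sub)                                          # true frequency response

pilots = np.arange(0, n_sub, pilot_step)
tx = np.ones(n_sub, dtype=complex)     # for simplicity every subcarrier carries a unit symbol
rx = H * tx + 0.05 * (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub))

H_ls = rx[pilots] / tx[pilots]         # least-squares estimate at the pilot positions only
H_est = (np.interp(np.arange(n_sub), pilots, H_ls.real)
         + 1j * np.interp(np.arange(n_sub), pilots, H_ls.imag))
print("mean squared estimation error:", np.mean(np.abs(H_est - H) ** 2))
```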

A New Algorithm for Cluster Initialization

Clustering is a very well known technique in data mining. One of the most widely used clustering techniques is the k-means algorithm. Solutions obtained from this technique are dependent on the initialization of cluster centers. In this article we propose a new algorithm to initialize the clusters. The proposed algorithm is based on finding a set of medians extracted from a dimension with maximum variance. The algorithm has been applied to different data sets and good results are obtained.
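
One plausible reading of the initialization idea, offered as an interpretation rather than the authors' exact procedure, is sketched below: along the dimension with maximum variance, the sorted points are split into k groups and the median point of each group becomes an initial center.

```python
# Median-based cluster-center initialization along the maximum-variance dimension.
import numpy as np

def median_based_init(X, k):
    dim = np.argmax(X.var(axis=0))                  # dimension with maximum variance
    order = np.argsort(X[:, dim])
    groups = np.array_split(order, k)
    centers = np.array([X[g[len(g) // 2]] for g in groups])  # median point of each group
    return centers

# usage with scikit-learn's k-means:
# from sklearn.cluster import KMeans
# km = KMeans(n_clusters=3, init=median_based_init(X, 3), n_init=1).fit(X)
```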

A Robust Controller for Output Variance Reduction and Minimum Variance with Application on a Permanent Field DC-Motor

In this paper, we present experimental testing of a new algorithm that determines an optimal controller's coefficients for output variance reduction in linear time-invariant (LTI) systems. The algorithm features simplicity of calculation, generalizes to minimum- and non-minimum-phase systems, and can be configured to achieve reference tracking as well as variance reduction by compromising on the output variance. An experiment on DC-motor velocity control demonstrates the application of this new algorithm in designing the controller. The results show that the controller achieves minimum variance and reference tracking for a preset velocity reference, relying on an identified model of the motor.

An Interval-Based Multi-Attribute Decision Making Approach for Electric Utility Resource Planning

This paper presents an interval-based multi-attribute decision making (MADM) approach to support the decision process with imprecise information. The proposed decision methodology is based on the model of a linear additive utility function but extends the problem formulation with a measure of composite utility variance. A sample study concerning the evaluation of electric generation expansion strategies is provided, showing how imprecise data may affect the choice of the best solution and how a set of alternatives acceptable to the decision maker (DM) may be identified with a certain confidence.

A New Approach for Prioritization of Failure Modes in Design FMEA using ANOVA

The traditional Failure Mode and Effects Analysis (FMEA) uses the Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is determined by calculating the product of the severity, occurrence and detection indexes. The most critically debated disadvantage of this approach is that different combinations of these three indexes may produce an identical RPN value. This paper seeks to address the drawbacks of traditional FMEA and proposes a new approach to overcome these shortcomings. The Risk Priority Code (RPC) is used to prioritize failure modes when two or more failure modes have the same RPN, and a new method is proposed to prioritize failure modes when there is disagreement in the ranking scale for severity, occurrence and detection. An analysis of variance (ANOVA) is used to compare the means of RPN values, and the data are analyzed with the SPSS (Statistical Package for the Social Sciences) statistical package. The results presented are based on two case studies. It is found that the proposed methodology resolves the limitations of the traditional FMEA approach.
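
The sketch below shows the two computations involved, RPN calculation and a one-way ANOVA comparing mean RPN values, using hypothetical ratings rather than the case-study data.

```python
# RPN = severity x occurrence x detection, followed by a one-way ANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def rpn(sev, occ, det):
    return sev * occ * det

# hypothetical 1-10 ratings for twelve failure modes assessed by three teams
teams = [rpn(rng.integers(1, 11, 12), rng.integers(1, 11, 12), rng.integers(1, 11, 12))
         for _ in range(3)]
f_stat, p_value = stats.f_oneway(*teams)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```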

Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

The world economic crisis and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, it is an indication that some of the information produced may be redundant, so one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques, OLS and the line of organic correlation (LOC), are employed to reconstitute information about discontinued water quality variables. An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt, and the record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records but underestimates the variance of the extended records. The LOC technique is superior in preserving the characteristics of the entire distribution and avoids underestimating the variance. It is concluded that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
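
The contrast between the two record-extension techniques can be sketched as follows: OLS uses the regression slope, whereas LOC uses the ratio of standard deviations signed by the correlation, which preserves the variance of the extended record (synthetic data are used here for illustration).

```python
# OLS regression versus the line of organic correlation (LOC) for record extension.
import numpy as np

def ols_fit(x, y):
    slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return slope, y.mean() - slope * x.mean()

def loc_fit(x, y):
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    return slope, y.mean() - slope * x.mean()

rng = np.random.default_rng(8)
x = rng.normal(10, 2, 200)                        # continuously measured variable
y = 0.8 * x + rng.normal(0, 1.5, 200)             # discontinued variable (synthetic)

for name, (b, a) in [("OLS", ols_fit(x, y)), ("LOC", loc_fit(x, y))]:
    y_ext = a + b * x                             # extended record
    print(f"{name}: extended-record variance = {y_ext.var(ddof=1):.2f} "
          f"(observed variance = {y.var(ddof=1):.2f})")
```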