Frequent Itemset Mining Using Rough-Sets

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, for example, which products are often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the time and resources required to mine them increase at an exponential rate. In this investigation a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. As a pre-processor for frequent itemset mining, FASTER can produce a speed-up of 3.1 times over the original algorithm while maintaining an accuracy of 71%.
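The abstract does not give FASTER's internals, but the entropy half of the idea can be sketched. Below is a minimal, assumed illustration of ranking discrete attributes by Shannon entropy; the function names and the ranking criterion are ours, not the paper's.

```python
import numpy as np
from collections import Counter

def shannon_entropy(column):
    """Shannon entropy (in bits) of a discrete attribute."""
    counts = np.array(list(Counter(column).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def rank_attributes(records):
    """Rank attribute indices of a transaction table by entropy (ascending)."""
    cols = list(zip(*records))
    return sorted(range(len(cols)), key=lambda j: shannon_entropy(cols[j]))
```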

Characteristic Function in Estimation of Probability Distribution Moments

In this article, the problem of estimating the moments of a probability distribution is considered. A new approach to moment estimation based on the characteristic function is proposed. Using statistical simulation, the author shows that the new approach has certain robustness properties. The derivatives of the characteristic function are computed by numerical differentiation. The results obtained confirm that the author's idea works in practice and can be recommended for statistical applications.
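The underlying identity is E[X^k] = (-i)^k φ^(k)(0) for the characteristic function φ(t) = E[e^{itX}]. A minimal sketch of the idea, using the empirical characteristic function and a central finite difference (the step size h and the difference scheme are our assumptions, not the author's):

```python
import numpy as np
from math import comb

def ecf(t, x):
    """Empirical characteristic function: phi_n(t) = mean(exp(i t x_j))."""
    return np.mean(np.exp(1j * t * x))

def moment_via_cf(x, k, h=1e-2):
    """Estimate E[X^k] as (-i)^k * phi^(k)(0); the k-th derivative of the
    empirical CF is approximated by a central finite difference of step h."""
    deriv = sum((-1) ** j * comb(k, j) * ecf((k / 2 - j) * h, x)
                for j in range(k + 1)) / h ** k
    return ((-1j) ** k * deriv).real

# e.g. moment_via_cf(np.random.standard_normal(100_000), 2) is close to 1.0
```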

Forecasting Models for Steel Demand Uncertainty Using Bayesian Methods

A forecasting model for steel demand uncertainty in Thailand is proposed. It consists of trend, autocorrelation, and outliers in a hierarchical Bayesian framework. The proposed model uses a cumulative Weibull distribution function, latent first-order autocorrelation, and binary selection to account for trend, time-varying autocorrelation, and outliers, respectively. Gibbs sampling, a Markov chain Monte Carlo (MCMC) method, is used for parameter estimation. The proposed model is applied to steel demand index data in Thailand. The root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) criteria are used for model comparison. The study reveals that the proposed model is more appropriate than the exponential smoothing method.
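For reference, the three comparison criteria named above are standard; their textbook forms (not the paper's implementation) are:

```python
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2))

def mape(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.mean(np.abs((y - yhat) / y)) * 100.0   # assumes y has no zeros

def mae(y, yhat):
    return np.mean(np.abs(np.asarray(y, float) - np.asarray(yhat, float)))
```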

Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries

The human face has a fundamental role in the appearance of individuals, so the importance of facial surgeries is undeniable, and there is a need for appropriate and accurate facial skin segmentation in order to extract different features. Since the Fuzzy C-Means (FCM) clustering algorithm does not work well for noisy images and outliers, in this paper we exploit the Possibilistic C-Means (PCM) algorithm to segment the facial skin. For this purpose, we first convert facial images from the RGB to the YCbCr color space. To evaluate the performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. For comparison, the FCM and Expectation-Maximization (EM) algorithms are also applied to facial skin segmentation. The proposed method shows better results than the other segmentation methods, with a misclassification error of 0.032 and a region area error of 0.045.
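The color-space step is standard; a minimal sketch using the full-range (JPEG/JFIF) RGB-to-YCbCr transform is shown below. The paper may use a different variant of the transform (e.g. studio-range ITU-R BT.601):

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range (JPEG/JFIF) RGB -> YCbCr conversion; img is HxWx3, uint8."""
    rgb = img.astype(np.float64)
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 128.0   # offset the chroma channels
    return ycbcr
```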

REDUCER – An Architectural Design Pattern for Reducing Large and Noisy Data Sets

To relieve the burden of reasoning on a point-to-point basis, in many domains there is a need to reduce large and noisy data sets into trends for qualitative reasoning. In this paper we propose and describe a new architectural design pattern called REDUCER for reducing large and noisy data sets, which can be tailored for particular situations. REDUCER consists of two consecutive processes: Filter, which takes the original data and removes outliers, inconsistencies, or noise; and Compression, which takes the filtered data and derives trends in the data. In this seminal article we also show how REDUCER has successfully been applied to three different case studies.
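The pattern itself does not prescribe concrete Filter and Compression implementations; one plausible instantiation (our choice of a MAD outlier rule and a moving-average trend, purely illustrative) looks like this:

```python
import numpy as np

def filter_stage(x, k=3.5):
    """Filter: drop points more than k robust sigmas from the median (MAD rule)."""
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))   # consistent with sigma for Gaussians
    return x[np.abs(x - med) <= k * mad]

def compression_stage(x, window=5):
    """Compression: derive a trend via a simple moving average."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def reducer(x):
    return compression_stage(filter_stage(np.asarray(x, dtype=float)))
```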

Grid–SVC: An Improvement in SVC Algorithm, Based On Grid Based Clustering

Support vector clustering (SVC) is an important kernel-based clustering algorithm with many applications. It has two main bottlenecks: the high computational cost and the labeling step. In this paper, we present a modified SVC method, named Grid–SVC, to improve the original algorithm computationally. We first normalize the data and then partition the region in which SVC operates, using a novel grid-based clustering algorithm. The algorithm partitions each interval based on the density function of the data set and then forms multi-dimensional grid cells by taking the Cartesian product of the partitions. Having eliminated many outliers and much of the noise in this preprocessing step, we apply an improved SVC method to each grid cell in parallel. The experimental results show improvements in both time complexity and accuracy.
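The paper partitions according to the data's density function; as a simplified stand-in, the sketch below uses equal-width bins per axis and drops sparse cells (the bin count and density threshold are our assumptions):

```python
import numpy as np
from collections import defaultdict

def grid_partition(X, bins=10, min_density=5):
    """Assign points to axis-aligned grid cells; cells with fewer than
    min_density points are treated as noise/outliers and dropped."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.floor((X - lo) / (hi - lo + 1e-12) * bins).astype(int)
    idx = np.clip(idx, 0, bins - 1)
    cells = defaultdict(list)
    for row, cell in enumerate(map(tuple, idx)):
        cells[cell].append(row)
    # each surviving cell would then be clustered by SVC in parallel
    return {c: X[r] for c, r in cells.items() if len(r) >= min_density}
```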

Robust Coherent Noise Suppression by Point Estimation of the Cauchy Location Parameter

This paper introduces a new point estimation algorithm, with particular focus on coherent noise suppression, given several measurements of the device under test, where it is assumed that 1) the noise is first-order stationary and 2) the device under test is linear and time-invariant. The algorithm exploits the robustness of the Pitman estimator of the Cauchy location parameter through the initial scaling of the test signal by a centred Gaussian variable of predetermined variance. It is illustrated through mathematical derivations and simulation results that the proposed algorithm is more accurate and more consistently robust to outliers, across density functions with different tail behavior, than the conventional methods of the sample mean (coherent averaging) and the sample median.
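For the standard Cauchy, the Pitman location estimator is the posterior mean under a flat prior, θ̂ = ∫θ ∏ᵢf(xᵢ−θ)dθ / ∫∏ᵢf(xᵢ−θ)dθ. A direct numerical sketch (grid limits and resolution are our choices, not the paper's):

```python
import numpy as np

def pitman_cauchy_location(x, half_width=50.0, n_grid=4001):
    """Pitman estimator of the standard-Cauchy location parameter, computed
    on a grid; log-likelihoods are used for numerical stability."""
    x = np.asarray(x, dtype=float)
    theta = np.linspace(np.median(x) - half_width,
                        np.median(x) + half_width, n_grid)
    # log Cauchy likelihood at each grid point (additive constants cancel)
    loglik = -np.sum(np.log1p((x[None, :] - theta[:, None]) ** 2), axis=1)
    w = np.exp(loglik - loglik.max())
    return np.sum(theta * w) / np.sum(w)
```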

Evaluation of Graph-based Analysis for Forest Fire Detections

Spatial outliers in remotely sensed imagery are observed quantities showing unusual values compared to their neighboring pixel values. There are various methods in statistics and data mining for detecting spatial outliers based on spatial autocorrelation. These methods may be applied to detecting forest fire pixels in MODIS imagery from NASA's AQUA satellite, because forest fire detection can be framed as finding spatial outliers using the spatial variation of brightness temperature. This framing is what distinguishes our approach from traditional fire detection methods. In this paper, we propose a graph-based forest fire detection algorithm based on spatial outlier detection methods, and test the proposed algorithm to evaluate its applicability. For this, the ordinary scatter plot and Moran's scatter plot were used. In order to evaluate the proposed algorithm, the results were compared with the MODIS fire product provided by the NASA MODIS Science Team, which demonstrated the potential of the proposed algorithm for detecting fire pixels.
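A basic spatial-outlier statistic of the kind underlying both scatter-plot methods compares each pixel with the average of its neighbors; the 8-neighbor version below is illustrative only, not the paper's graph-based algorithm:

```python
import numpy as np

def spatial_outlier_score(img):
    """Standardized difference between each pixel's brightness temperature
    and the mean of its 8 neighbors; large positive scores flag candidates."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode="edge")
    neigh_sum = sum(padded[1 + di:1 + di + img.shape[0],
                           1 + dj:1 + dj + img.shape[1]]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
    diff = img - neigh_sum / 8.0
    return (diff - diff.mean()) / (diff.std() + 1e-12)
```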

New Nonlinear Filtering Strategies for Eliminating Short and Long Tailed Noise in Images with Edge Preservation Properties

The midpoint filter is quite effective at recovering images confounded by short-tailed (uniform) noise. It performs poorly, however, in the presence of additive long-tailed (impulse) noise, and it does not preserve the edge structures of the image. The median smoother discards outliers (impulses) effectively, but it fails to provide adequate smoothing for images corrupted with non-impulse noise. In this paper, two nonlinear techniques for image filtering, New Filter I and New Filter II, are proposed based on a nonlinear high-pass filter algorithm. New Filter I is constructed using a midpoint filter, a high-pass filter, and a combiner; it suppresses uniform noise quite well. New Filter II is configured using an alpha-trimmed midpoint filter, a median smoother of window size 3×3, the high-pass filter, and the combiner; it is robust against impulse noise and attenuates uniform noise satisfactorily. Both filters are shown to exhibit good response at image boundaries (edges). The proposed filters are evaluated for their performance on a test image, and the results obtained are included.
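The two building blocks discussed above are standard; a minimal sketch of each (not the proposed composite filters) is:

```python
import numpy as np
from scipy.ndimage import generic_filter, median_filter

def midpoint_filter(img, size=3):
    """Midpoint filter: (local max + local min) / 2 over a size x size window.
    Effective against short-tailed (uniform) noise."""
    return generic_filter(img.astype(float),
                          lambda w: (w.max() + w.min()) / 2.0, size=size)

def median_smoother(img, size=3):
    """Median smoother: discards impulses (outliers) in a size x size window."""
    return median_filter(img, size=size)
```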

Robust Regression and its Application in Financial Data Analysis

This research aims to describe the application of robust regression and its advantages over the least-squares regression method in analyzing financial data. To this end, the relationship between earnings per share, book value of equity per share, and share price (the price model), and between earnings per share, annual change in earnings per share, and stock return (the return model), is examined using both robust and least-squares regression, and the outcomes are compared. The comparison shows that robust regression provides a better and more realistic analysis by eliminating or reducing the contribution of outliers and influential data points. Therefore, robust regression is recommended for obtaining more precise results in financial data analysis.
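The abstract does not state which robust estimator is used; one common choice is Huber M-estimation, sketched here on synthetic data with statsmodels (the data and coefficients are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: X holds two regressors (think earnings per share and
# book value per share); y plays the role of the share price.
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=200)
y[:5] += 25.0                                         # inject a few outliers

ols = sm.OLS(y, X).fit()                              # least squares
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # robust (Huber M-estimator)
print(ols.params, rlm.params)   # the robust fit is far less distorted by outliers
```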

Internet Purchases in European Union Countries: Multiple Linear Regression Approach

This paper examines the influence of economic and Information and Communication Technology (ICT) development on the recent increase in Internet purchases by individuals in European Union member states. After a growing trend in Internet purchases in the EU27 was noticed, an all-possible-regressions analysis was applied using nine independent variables for 2011. Finally, two linear regression models were studied in detail. The simple linear regression analysis confirmed the research hypothesis that Internet purchases in the analyzed EU countries are positively correlated with the statistically significant variable Gross Domestic Product per capita (GDPpc). The multiple linear regression model with four regressors describing the ICT development level likewise indicates that ICT development is crucial for explaining Internet purchases by individuals, confirming the research hypothesis.
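All-possible-regressions enumerates every subset of the candidate regressors and ranks the fitted models; a compact sketch (ranking by adjusted R², one plausible criterion among several) is:

```python
import numpy as np
from itertools import combinations
import statsmodels.api as sm

def all_possible_regressions(y, X, names):
    """Fit OLS on every non-empty subset of regressors and rank the models
    by adjusted R-squared (other criteria, e.g. Mallows' Cp, also work)."""
    results = []
    for k in range(1, len(names) + 1):
        for subset in combinations(range(len(names)), k):
            model = sm.OLS(y, sm.add_constant(X[:, list(subset)])).fit()
            results.append((model.rsquared_adj, [names[j] for j in subset]))
    return sorted(results, reverse=True)
```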

Robust Ellipse Detection by Fitting Randomly Selected Edge Patches

In this paper, a method to detect multiple ellipses is presented. The technique is efficient and robust against incomplete ellipses due to partial occlusion, noise, missing edges, and outliers. It is an iterative technique that finds and removes the best ellipse until no reasonable ellipse is found. At each run, the best ellipse is extracted from randomly selected edge patches, and its fitness is calculated and compared to a fitness threshold. The RANSAC algorithm is applied as the sampling process, together with Direct Least Squares fitting of ellipses (DLS) as the fitting algorithm. In our experiments, the method performs very well and is robust against noise and spurious edges on both synthetic and real-world image data.
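The DLS fitting step refers to the Fitzgibbon et al. direct least-squares method; a compact (numerically naive) version of that fit, used here only to illustrate the inner step of the RANSAC loop, is:

```python
import numpy as np

def fit_ellipse_dls(x, y):
    """Direct least-squares (Fitzgibbon et al.) ellipse fit: returns conic
    coefficients [a, b, c, d, e, f] of ax^2 + bxy + cy^2 + dx + ey + f = 0,
    subject to the ellipse constraint 4ac - b^2 = 1."""
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    S = D.T @ D                      # scatter matrix
    C = np.zeros((6, 6))             # constraint matrix
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
    # the ellipse solution corresponds to the unique positive eigenvalue
    return eigvec.real[:, np.argmax(eigval.real)]
```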

A Novel Multiresolution based Optimization Scheme for Robust Affine Parameter Estimation

This paper describes a new method for affine parameter estimation between image sequences. Parameter estimation is usually done by least squares with a quadratic error function; however, this technique is sensitive to the presence of outliers, so parameter estimation techniques for image processing applications must be robust enough to withstand the influence of outliers. Robust estimation functions with non-quadratic and possibly non-convex potentials, adopted from the statistics literature, have therefore been used for this problem. To steer the optimization of the error function toward a global optimum, the minimization can begin with a convex estimator at the coarsest level and gradually introduce non-convexity, moving from soft to hard redescending non-convex estimators as the iteration reaches the finer levels of the multiresolution pyramid. The performance of the proposed method is compared with the results obtained individually using two different estimators.
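The soft-to-hard schedule can be pictured as iteratively reweighted least squares whose weight function is tightened stage by stage; the particular schedule below (quadratic → Huber → Tukey) and its tuning constants are illustrative assumptions, not the paper's estimators:

```python
import numpy as np

def w_quadratic(r):        return np.ones_like(r)                       # convex
def w_huber(r, k=1.345):   return np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
def w_tukey(r, c=4.685):   return np.where(np.abs(r) <= c,
                                           (1 - (r / c) ** 2) ** 2, 0.0) # redescending

def irls(A, b, stages=(w_quadratic, w_huber, w_tukey), iters=10):
    """IRLS for a linear(ized) model b ~ A @ p: start with the convex
    estimator, then tighten to Huber and Tukey weights stage by stage."""
    p = np.linalg.lstsq(A, b, rcond=None)[0]
    for weight_fn in stages:           # in the paper, stages follow pyramid levels
        for _ in range(iters):
            r = b - A @ p
            s = 1.4826 * np.median(np.abs(r)) + 1e-12    # robust scale (MAD)
            w = weight_fn(r / s)
            H = A.T @ (A * w[:, None]) + 1e-12 * np.eye(A.shape[1])
            p = np.linalg.solve(H, A.T @ (b * w))
    return p
```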

Robust Clustering with Dimension Reduction

Clustering is the process of identifying homogeneous groups of objects, called clusters, in which the objects share similar characteristics; it is an interesting topic in data mining. This paper discusses a robust clustering process for image data with two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to the high dimensionality of image data is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information; PCA is one of the most common forms. 2DPCA, often called a variant of PCA, treats the image matrices directly as 2D matrices: they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The decomposition of the classical covariance matrix is, however, very sensitive to outlying observations. The objective of this paper is to compare the performance of robust minimizing vector variance (MVV) under the two-dimensional projection (2DPCA) and under PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulations of the robustness aspects and an illustration of clustering images are discussed at the end of the paper.
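The 2DPCA covariance construction mentioned above has a short direct form; a minimal sketch (the robust MVV step is not shown) is:

```python
import numpy as np

def image_covariance_2dpca(images):
    """2DPCA image covariance G = (1/n) * sum_i (A_i - Abar)^T (A_i - Abar),
    built directly from the image matrices, without vectorization."""
    A = np.asarray(images, dtype=float)          # shape (n, h, w)
    centered = A - A.mean(axis=0)
    return np.einsum('nij,nik->jk', centered, centered) / len(A)

def projection_axes(G, d):
    """Top-d eigenvectors of G give the 2DPCA projection axes."""
    eigval, eigvec = np.linalg.eigh(G)
    return eigvec[:, ::-1][:, :d]
```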

Unsupervised Outlier Detection in Streaming Data Using Weighted Clustering

Outlier detection in streaming data is very challenging because streaming data cannot be scanned multiple times, and new concepts may keep evolving. Irrelevant attributes can be termed noisy attributes, and such attributes further magnify the challenge of working with data streams. In this paper, we propose an unsupervised outlier detection scheme for streaming data. The scheme is based on clustering, since clustering is an unsupervised data mining task that does not require labeled data; density-based and partitioning clustering are combined for outlier detection. Partitioning clustering is also used to assign weights to attributes depending upon their respective relevance, and the weights are adaptive. Weighted attributes help reduce or remove the effect of noisy attributes. Keeping in view the challenges of streaming data, the proposed scheme is incremental and adaptive to concept evolution. Experimental results on synthetic and real-world data sets show that our proposed approach outperforms an existing approach (CORM) in terms of outlier detection rate, false alarm rate, and increasing percentages of outliers.
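The abstract does not give the weighting formula; a hypothetical relevance weighting of the same flavor (attributes with low within-cluster dispersion relative to overall dispersion get higher weight) and its use in a weighted distance might look like this:

```python
import numpy as np

def attribute_weights(X, labels):
    """Hypothetical relevance weights: attributes that vary little within
    clusters, relative to their overall variance, are deemed more relevant."""
    total = X.var(axis=0) + 1e-12
    within = np.mean([X[labels == c].var(axis=0)
                      for c in np.unique(labels)], axis=0)
    w = np.clip(1.0 - within / total, 0.0, None)
    return w / max(w.sum(), 1e-12)

def weighted_distance(x, center, w):
    """Attribute-weighted Euclidean distance used for outlier scoring;
    noisy attributes with small weights contribute little."""
    return np.sqrt(np.sum(w * (x - center) ** 2))
```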

Signed Approach for Mining Web Content Outliers

The emergence of the Internet has brewed a revolution in information storage and retrieval. As most of the data on the web is unstructured and contains a mix of text, video, audio, etc., there is a need to mine information that caters to the specific needs of users without loss of important hidden information. Developing user-friendly, automated tools that provide relevant information quickly is thus a major challenge in web mining research. Most existing web mining algorithms concentrate on finding frequent patterns while neglecting the less frequent ones, which are likely to contain outlying data such as noise and irrelevant and redundant data. This paper focuses on a signed approach with full-word matching against an organized domain dictionary for mining web content outliers. The signed approach yields the relevant web documents as well as the outlying web documents. As the dictionary is organized by the number of characters in a word, searching and retrieval of documents take less time and less space.
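The length-organized dictionary lookup described above can be sketched directly (function names are ours):

```python
from collections import defaultdict

def build_length_indexed_dictionary(words):
    """Organize the domain dictionary by word length so a full-word lookup
    only considers candidates of the same length."""
    index = defaultdict(set)
    for w in words:
        index[len(w)].add(w.lower())
    return index

def full_word_match(word, index):
    """Exact full-word match against the length-indexed dictionary."""
    return word.lower() in index[len(word)]
```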

Automated Stereophotogrammetry Data Cleansing

The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of its data, in conjunction with conventional 3D volumetric image modalities, provides virtual human data with textured soft tissue as well as internal anatomical and structural information. In this investigation, computed tomography (CT) and stereophotogrammetry data are acquired from four anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper fully addresses the issue of imaging artifacts around the stereophotogrammetry surface edge, using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface-edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be successfully removed automatically.
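The paper's specific iterative algorithms are not described in the abstract; a hypothetical cleanup of the same general shape, dropping points far from the registered CT reference until the distance distribution stabilizes, might look like this:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_edge_outliers(stereo_pts, ct_pts, k=2.5, max_iter=10):
    """Hypothetical iterative cleanup: repeatedly drop stereophotogrammetry
    points (typically near the surface edge) whose distance to the registered
    CT surface exceeds k standard deviations of the current distances."""
    pts = np.asarray(stereo_pts, dtype=float)
    tree = cKDTree(ct_pts)
    for _ in range(max_iter):
        d, _ = tree.query(pts)
        keep = d <= d.mean() + k * d.std()
        if keep.all():
            break
        pts = pts[keep]
    return pts
```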

Parameter Selections of Fuzzy C-Means Based on Robust Analysis

The weighting exponent m is called the fuzzifier; it influences the clustering performance of fuzzy c-means (FCM), and m ∈ [1.5, 2.5] is suggested by Pal and Bezdek [13]. In this paper, we discuss the robustness properties of FCM and show that the parameter m influences the robustness of FCM. According to our analysis, a large m value makes FCM more robust to noise and outliers. However, if m is larger than the theoretical upper bound proposed by Yu et al. [14], the sample mean becomes the unique optimizer. We therefore suggest implementing the FCM algorithm with m ∈ [1.5, 4], with the restriction that m remain smaller than the theoretical upper bound.
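For concreteness, a standard FCM iteration is sketched below; the fuzzifier m appears in both update formulas (this is the textbook algorithm, not the authors' code):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Standard fuzzy c-means. The fuzzifier m (> 1) controls how soft the
    memberships are; larger m downweights far-away points such as outliers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```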

Effective Image and Video Error Concealment using RST-Invariant Partial Patch Matching Model and Exemplar-based Inpainting

An effective visual error concealment method is presented, employing a robust rotation, scale, and translation (RST) invariant partial patch matching model (RSTI-PPMM) and exemplar-based inpainting. While the proposed robust, inherently feature-enhanced texture synthesis approach ensures excellent and perceptually plausible visual error concealment results, the outlier pruning property guarantees significant quality improvements, both quantitatively and qualitatively. No intermediate user interaction is required for pre-segmented media, and the presented method follows a bootstrapping approach for automatic visual loss recovery and image and video error concealment.

Class Outliers Mining: Distance-Based Approach

In large datasets, identifying exceptional or rare cases with respect to a group of similar cases is considered a very significant problem. The traditional problem (outlier mining) is to find exceptional or rare cases in a dataset irrespective of their class labels; such cases are considered rare events with respect to the whole dataset. In this research, we pose the problem of class outlier mining and present a method to find such outliers. The general definition of this problem is "given a set of observations with class labels, find those that arouse suspicions, taking into account the class labels". We introduce a novel definition of outlier, the class outlier, and propose the Class Outlier Factor (COF), which measures the degree to which a data object is a class outlier. Our work includes a proposal of a new algorithm for mining class outliers, experimental results on various domains of real-world datasets, and finally a comparison study with other related methods.
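An illustrative distance-based score in the spirit of the definition above: an instance is more suspicious when most of its k nearest neighbors carry a different class label and those neighbors are close, so the instance sits inside a foreign class region. The scoring formula below is our assumption, not the paper's exact COF:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def class_outlier_scores(X, y, k=10):
    """Higher score = more class-outlying: few same-class neighbors that
    are nevertheless nearby."""
    y = np.asarray(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)
    dist, idx = dist[:, 1:], idx[:, 1:]           # drop the self-neighbor
    same = (y[idx] == y[:, None]).mean(axis=1)    # fraction of same-class neighbors
    kdist = dist.mean(axis=1) + 1e-12
    return (1.0 - same) / kdist
```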