Abstract: An unsupervised classification algorithm is derived
by modeling observed data as a mixture of several mutually
exclusive classes that are each described by linear combinations of
independent non-Gaussian densities. The algorithm estimates the
data density in each class using parametric nonlinear functions
that fit the non-Gaussian structure of the data. This improves
classification accuracy compared with standard Gaussian mixture
models. When applied to textures, the algorithm can learn basis
functions for images that capture the statistically significant structure
intrinsic to the images. We apply this technique to the problem of
unsupervised texture classification and segmentation.
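The classification rule this abstract describes can be sketched in a minimal form: each class is modeled as a linear mixing of independent non-Gaussian (here, Laplacian) sources, and a sample is assigned to the class whose density gives it the higher likelihood. The mixing matrices, the Laplacian prior, and the synthetic data below are illustrative assumptions, not the paper's learned bases.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_log_likelihood(x, A):
    """Log-likelihood of columns of x under one class: x = A s with
    independent Laplacian sources, so p(x) = p(s) / |det A|, s = A^{-1} x."""
    s = np.linalg.solve(A, x)                       # recover sources
    log_p_s = -np.sum(np.abs(s), axis=0) - s.shape[0] * np.log(2.0)
    return log_p_s - np.log(np.abs(np.linalg.det(A)))

# Two hypothetical class bases (assumed for illustration, not learned)
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
A2 = np.array([[1.0, 0.9], [0.9, 1.0]])

# Synthetic data drawn from class 2's generative model
s = rng.laplace(size=(2, 500))
x = A2 @ s

# Assign each sample to the class with the higher log-likelihood
ll = np.stack([class_log_likelihood(x, A1), class_log_likelihood(x, A2)])
labels = np.argmax(ll, axis=0)
```

Because the Laplacian prior matches the data's heavy-tailed structure, most samples score higher under the correct class; a Gaussian mixture with the same covariances would discriminate these classes less sharply.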
Abstract: Clustering ensembles combine multiple partitions
generated by different clustering algorithms into a single clustering
solution. Clustering ensembles have emerged as a prominent method
for improving robustness, stability and accuracy of unsupervised
classification solutions. Many contributions have been made toward
finding a consensus clustering. One of the major problems in clustering
ensembles is the choice of consensus function. In this paper, we first
introduce clustering ensembles and the representation of multiple
partitions, discuss their challenges, and present a taxonomy of
combination algorithms.
Second, we describe consensus functions in clustering ensembles,
including hypergraph partitioning, the voting approach, mutual
information, co-association-based functions, and the finite mixture
model, and explain their advantages, disadvantages, and
computational complexity. Finally, we compare the characteristics of
clustering ensemble algorithms from previous work, such as
computational complexity, robustness, simplicity, and accuracy, on
different datasets.
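The co-association consensus function named above can be sketched as follows: each pair of samples is scored by the fraction of base partitions that place them in the same cluster, and a consensus partition is then read off by linking pairs whose score exceeds a threshold (a single-linkage-style grouping). The base partitions and the 0.5 threshold below are illustrative assumptions, not a prescribed setting.

```python
import numpy as np
from itertools import combinations

def coassociation_matrix(partitions):
    """Entry (i, j) = fraction of base partitions that co-cluster
    samples i and j. Label values in each partition are arbitrary ids."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)

def consensus_clusters(C, threshold=0.5):
    """Consensus partition via union-find: merge every pair whose
    co-association score exceeds the threshold."""
    n = C.shape[0]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if C[i, j] > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three hypothetical base partitions of six samples
partitions = [
    [0, 0, 0, 1, 1, 1],
    [1, 1, 0, 0, 0, 0],
    [2, 2, 2, 0, 0, 1],
]
C = coassociation_matrix(partitions)
labels = consensus_clusters(C, threshold=0.5)
```

Even though the second base partition disagrees about sample 2 and the third about sample 5, the majority evidence in the co-association matrix recovers the two underlying groups; this robustness to individual weak partitions is the appeal of the co-association approach, at the cost of the O(n²) matrix.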