Abstract: Glaucoma diagnosis involves extracting three features
of the fundus image: the optic cup, the optic disc, and the vasculature.
Present manual diagnosis is expensive, tedious, and time-consuming. A
number of studies have been conducted to automate this process.
However, the variability between the diagnostic capability of an
automated system and that of an ophthalmologist has yet to be established.
This paper discusses the efficiency of, and variability between,
ophthalmologist opinion and a digital thresholding technique. The
efficiency and variability measures are based on image quality
grading: poor, satisfactory, or good. The images are separated into
four channels: gray, red, green, and blue. A scientific investigation
was conducted with three ophthalmologists, who graded the images
based on image quality. The images are then segmented using multi-thresholding
and graded in the same manner as by the ophthalmologists. A
comparison of the grades from the ophthalmologists and from thresholding is made.
The results show a small variability between the results of the
ophthalmologists and the digital thresholding technique.
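The channel separation and multi-thresholding step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the luminance weights are the standard ITU-R BT.601 coefficients, and the two threshold values and three-class labels are assumptions chosen for illustration.

```python
def split_channels(rgb_image):
    """Split an RGB image (a list of rows of (r, g, b) tuples) into
    gray, red, green, and blue channel matrices."""
    red   = [[p[0] for p in row] for row in rgb_image]
    green = [[p[1] for p in row] for row in rgb_image]
    blue  = [[p[2] for p in row] for row in rgb_image]
    # Standard BT.601 luminance weighting for the gray channel.
    gray  = [[round(0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]) for p in row]
             for row in rgb_image]
    return {"gray": gray, "red": red, "green": green, "blue": blue}

def multi_threshold(channel, t1, t2):
    """Multi-thresholding with two cut-offs: each pixel falls into one of
    three classes (0 = darkest, 1 = middle, 2 = brightest).  The mapping of
    classes to anatomical regions is left to the grading step."""
    return [[0 if v < t1 else 1 if v < t2 else 2 for v in row]
            for row in channel]
```

A grading rule would then be applied to the segmented output of each channel, mirroring the ophthalmologists' poor/satisfactory/good scale.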
Abstract: In this paper, a non-parametric statistical pattern recognition algorithm for the problem of credit scoring is presented. The proposed algorithm is based on the k-means clustering algorithm and allows for the determination of subclasses of homogeneous elements in the data. The algorithm is tested on two benchmark datasets and its performance compared with other well-known pattern recognition algorithms for credit scoring.
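The k-means step underlying the proposed algorithm can be sketched as below. This is a plain Lloyd-style k-means, not the paper's full method: feature vectors, the number of subclasses k, and the iteration count are all illustrative assumptions.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: partition `points` (tuples of floats, e.g. applicant
    feature vectors) into k clusters.  Returns the centroids and the
    cluster label assigned to each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialise from the data itself
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid
        # (squared Euclidean distance).
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[j])))
                  for p in points]
        # Update step: move each centroid to the mean of its cluster.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, labels
```

In a credit-scoring setting, each resulting cluster would then be treated as a homogeneous subclass and scored separately.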
Abstract: In the last few years, three multivariate spectral
analysis techniques, namely Principal Component Analysis (PCA),
Independent Component Analysis (ICA), and Non-negative Matrix
Factorization (NMF), have emerged as effective tools for oscillation
detection and isolation. While the first method is used in determining
the number of oscillatory sources, the latter two methods
are used to identify source signatures by formulating the detection
problem as a source identification problem in the spectral domain.
In this paper, we present a critical drawback of the underlying linear
(mixing) model which strongly limits the ability of the associated
source separation methods to determine the number of sources
and/or identify the physical source signatures. It is shown that the
assumed mixing model is only valid if each unit of the process gives
equal weighting (all-pass filter) to all oscillatory components in its
inputs. This is in contrast to the fact that each unit, in general, acts
as a filter with non-uniform frequency response. Thus, the model
can only facilitate correct identification of a source with a single
frequency component, which is again unrealistic. To overcome
this deficiency, an iterative post-processing algorithm that correctly
identifies the physical source(s) is developed. An additional issue
with the existing methods is that they lack a procedure to pre-screen
non-oscillatory/noisy measurements which obscure the identification
of oscillatory sources. In this regard, a pre-screening procedure
is prescribed based on the notion of a sparseness index to eliminate
noisy and non-oscillatory measurements from the data set used
for analysis.
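The pre-screening idea can be sketched as follows. Note this is an assumption about the form of the index: the sketch uses Hoyer's sparseness measure on a magnitude spectrum, and the 0.8 cut-off is an illustrative value, not one taken from the paper.

```python
import math

def sparseness_index(spectrum):
    """Hoyer-style sparseness of a magnitude spectrum, in [0, 1]:
    1 means all energy in a single frequency bin (a pure oscillation),
    0 means energy spread evenly across bins (broadband noise)."""
    n = len(spectrum)
    l1 = sum(abs(v) for v in spectrum)
    l2 = math.sqrt(sum(v * v for v in spectrum))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)

def prescreen(spectra, threshold=0.8):
    """Return the indices of measurements whose spectra are sparse
    enough to plausibly contain an oscillatory source; the rest are
    dropped before the source-separation step."""
    return [i for i, s in enumerate(spectra) if sparseness_index(s) >= threshold]
```

Only the measurements that survive this screen would then be passed to the PCA/ICA/NMF stage, so that non-oscillatory channels cannot obscure the source identification.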