Abstract: In this modern era of automation, most academic and
competitive exams use Multiple Choice Questions (MCQs). The
responses to these MCQ-based exams are recorded on an Optical
Mark Reader (OMR) sheet. Evaluating OMR sheets requires dedicated
machines for scanning and marking, and the sheets these machines
accept are special-purpose and cost more than normal paper. The
existing process is therefore uneconomical and dependent on paper
thickness, scanning quality, paper orientation, special hardware,
and customized software. This study tackles the problem of
evaluating OMR sheets without any special hardware, making the
whole process economical. We propose an image-processing-based
algorithm that reads and evaluates scanned OMR sheets with no
special hardware, eliminating the need for special OMR sheets:
responses recorded on a normal sheet are sufficient for evaluation.
The proposed system handles variations in color, brightness, and
rotation, as well as small imperfections in the OMR sheet images.
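The core bubble-reading step can be sketched as thresholding the mean darkness of each answer cell. The grid layout, cell size, and fill threshold below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def read_row(row_img, n_options=4, fill_thresh=0.5):
    """Split a row image into n_options cells; a cell whose mean darkness
    exceeds fill_thresh counts as a marked bubble. (Illustrative sketch.)"""
    h, w = row_img.shape
    cell_w = w // n_options
    marks = []
    for i in range(n_options):
        cell = row_img[:, i * cell_w:(i + 1) * cell_w]
        darkness = 1.0 - cell.mean() / 255.0   # 0 = white, 1 = black
        marks.append(bool(darkness > fill_thresh))
    return marks

# Synthetic row: option B (index 1) filled dark, others left white.
row = np.full((20, 80), 255, dtype=np.uint8)
row[:, 20:40] = 10          # darken the second cell
print(read_row(row))        # -> [False, True, False, False]
```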
Abstract: This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified Customized Thresholding Function (Custom) algorithm. EMD adaptively decomposes a noisy signal into intrinsic mode functions (IMFs). Each noisy IMF is then thresholded with the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising using soft and hard thresholding. The results show the superior performance of the proposed EMD-Custom denoising over the traditional approaches. Performance was evaluated in terms of SNR (in dB) and Mean Square Error (MSE).
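For reference, the classical hard and soft thresholding rules that the method is compared against, together with the SNR measure, can be sketched as follows (the threshold value T is arbitrary, for illustration only):

```python
import numpy as np

def hard_threshold(x, T):
    """Zero coefficients at or below T; keep the rest unchanged."""
    return np.where(np.abs(x) > T, x, 0.0)

def soft_threshold(x, T):
    """Zero small coefficients and shrink the surviving ones toward 0 by T."""
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

def snr_db(clean, denoised):
    """Signal-to-noise ratio of the denoised result, in dB."""
    return 10 * np.log10(np.sum(clean**2) / np.sum((clean - denoised)**2))

x = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(hard_threshold(x, 1.0))   # small coefficients zeroed, large kept
print(soft_threshold(x, 1.0))   # large coefficients also shrunk by T
print(snr_db(x, hard_threshold(x, 1.0)))
```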
Abstract: Diabetic Retinopathy (DR) is an eye disease that can lead to blindness. The earliest signs of DR are red and yellow lesions on the retina called hemorrhages and exudates. Early diagnosis of DR can prevent blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. Since the optic disc has the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB. The results show that the method reliably detects hard exudates and the highly probable soft exudates, and that it can detect hemorrhages and distinguish them from blood vessels.
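As a rough illustration of the contrast-enhancement step, the sketch below applies plain global histogram equalization, the operation that CLAHE refines tile-by-tile with clipping; it is a simplified stand-in, not the paper's CLAHE:

```python
import numpy as np

def hist_equalize(img):
    """Map gray levels through the normalized cumulative histogram
    (global equalization; CLAHE does this per tile with clip limits)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                       # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

# Low-contrast synthetic image: values squeezed into [100, 120].
img = np.random.default_rng(0).integers(100, 121, size=(32, 32)).astype(np.uint8)
out = hist_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # range is stretched
```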
Abstract: Diabetic Retinopathy (DR) is a severe retinal disease caused by diabetes mellitus; it leads to blindness when it progresses to the proliferative stage. Early indications of DR are the appearance of microaneurysms, hemorrhages, and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm combines several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. A Support Vector Machine (SVM) classifier is then used to classify retinal images as normal or abnormal, the latter covering non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB. The method detects DR effectively; the sensitivity, specificity, and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
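The reported metrics follow directly from a confusion matrix; a minimal sketch (the labels below are made up for illustration, not STARE results):

```python
import numpy as np

def metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels
    (1 = abnormal/DR, 0 = normal)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)        # true positive rate
    specificity = tn / (tn + fp)        # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return float(sensitivity), float(specificity), float(accuracy)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical ground truth
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]   # hypothetical classifier output
print(metrics(y_true, y_pred))      # (0.75, 0.75, 0.75)
```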
Abstract: This paper analyzes domestic and international trends in image processing for UAV (unmanned aerial vehicle) data and also introduces the UAV and the quadcopter. Overseas examples of image processing using UAVs include counting the total number of vehicles, edge/target detection, detection-and-avoidance algorithms, image processing using SIFT (scale-invariant feature transform) matching, and application of median filtering and thresholding. In Korea, many studies are underway, including visualization of new urban buildings.
Abstract: Enhancing the quality of two-dimensional signals is one of the most important factors in video surveillance and computer vision. In real-life video surveillance, false detections commonly occur due to random noise, illumination variations,
and shadow artifacts. Detection methods based on background subtraction face several problems in accurately detecting objects in realistic environments. In this paper, we propose a noise removal algorithm using a neighborhood comparison method with thresholding. Illumination variations in the detected foreground objects are corrected using a combination of techniques: homomorphic decomposition, the curvelet transform, and a gamma adjustment operator. Shadows are removed using a chromaticity estimator together with a local relation estimator. Results are compared with existing methods and demonstrate high robustness in video surveillance.
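The neighborhood-comparison idea for noise removal can be sketched on a binary foreground mask: a pixel survives only if enough of its 8-connected neighbors are also foreground. The neighbor-count threshold is an illustrative assumption:

```python
import numpy as np

def denoise_mask(mask, min_neighbors=3):
    """Remove isolated foreground pixels: keep a pixel only when at least
    min_neighbors of its 3x3 neighbors are also foreground."""
    padded = np.pad(mask.astype(int), 1)
    # Count 8-connected foreground neighbors of every pixel via shifts.
    neighbors = sum(
        np.roll(np.roll(padded, dy, 0), dx, 1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )[1:-1, 1:-1]
    return mask & (neighbors >= min_neighbors)

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True      # a solid 3x3 object
mask[0, 6] = True          # an isolated noise pixel
cleaned = denoise_mask(mask)
print(cleaned.sum())       # the noise pixel is removed, the object kept
```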
Abstract: An effective approach for extracting document images from a noisy background is introduced. The scheme is divided into three sub-techniques: initial preprocessing operations for noise-cluster tightening; a new thresholding method that maximizes the ratio of the standard deviation of the combined image to the sum of the weighted class standard deviations; and a final image restoration phase that binarizes the image using the proposed optimum threshold level. The proposed method is found to be more efficient than existing schemes in terms of computational complexity and speed, with better noise rejection.
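One plausible reading of the proposed criterion: for each candidate level, score the ratio of the whole image's standard deviation to the weighted sum of the two class standard deviations, and keep the maximizing level. The sketch below follows that reading and is not the authors' implementation:

```python
import numpy as np

def ratio_threshold(img):
    """Exhaustive search for the gray level maximizing
    std(image) / (w0*std(class0) + w1*std(class1))."""
    pix = img.ravel().astype(np.float64)
    total_std = pix.std()
    best_t, best_score = None, -np.inf
    for t in range(int(pix.min()) + 1, int(pix.max())):
        lo, hi = pix[pix < t], pix[pix >= t]
        w_lo, w_hi = len(lo) / len(pix), len(hi) / len(pix)
        denom = w_lo * lo.std() + w_hi * hi.std()
        if denom > 0 and total_std / denom > best_score:
            best_t, best_score = t, total_std / denom
    return best_t

# Bimodal test image: dark background around 30, bright text around 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(30, 5, 500), rng.normal(200, 5, 500)])
img = np.clip(img, 0, 255).astype(np.uint8)
t = ratio_threshold(img)
print(t)   # lands between the two intensity modes
```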
Abstract: Iris-based biometric systems are gaining importance in several applications. However, processing iris biometrics is a challenging and time-consuming task. Detecting the iris in an eye image poses a number of challenges, such as inferior image quality and occlusion by eyelids and eyelashes. Due to these problems, no iris-based biometric authentication system achieves a 100% accuracy rate. Further, iris detection is a computationally intensive part of overall iris biometric processing. In this paper, we address these two problems and propose a technique to localize the iris efficiently and accurately. For pupil boundary detection, we propose scaling and color-level transformation followed by thresholding and identification of pupil boundary points; for iris boundary detection, we propose dilation, thresholding, vertical edge detection, and removal of unnecessary edges present in the eye images. Scaling reduces the search space significantly, and the intensity-level transform aids image thresholding. Experimental results show that our approach is comparable with existing approaches: it detects the iris with 95-99% accuracy, as substantiated by our experiments on the CASIA Ver-3.0, ICE 2005, UBIRIS, Bath, and MMU iris image databases.
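The pupil-localization step can be sketched by thresholding the darkest pixels and estimating a center and radius from the resulting blob; the darkness percentile is an illustrative choice, not the paper's parameter:

```python
import numpy as np

def locate_pupil(gray, dark_percentile=3):
    """Threshold the darkest pixels (the pupil is the darkest compact
    region) and estimate center and radius from the blob."""
    thresh = np.percentile(gray, dark_percentile)
    ys, xs = np.nonzero(gray <= thresh)
    cy, cx = ys.mean(), xs.mean()
    # Radius from the area of the thresholded blob, assuming a disc.
    radius = np.sqrt(len(ys) / np.pi)
    return (cy, cx), radius

# Synthetic eye: bright background with a dark disc at (40, 60), r = 12.
gray = np.full((100, 120), 200, dtype=np.uint8)
yy, xx = np.mgrid[:100, :120]
gray[(yy - 40) ** 2 + (xx - 60) ** 2 <= 12 ** 2] = 20
(center_y, center_x), r = locate_pupil(gray)
print(round(center_y), round(center_x), round(r))  # 40 60 12
```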
Abstract: A multilayer self organizing neural network
(MLSONN) architecture for binary object extraction, guided by a beta
activation function and characterized by backpropagation of errors
estimated from the linear indices of fuzziness of the network output
states, is discussed. Since the MLSONN architecture is designed to
operate in a single point fixed/uniform thresholding scenario, it does
not take into cognizance the heterogeneity of image information in
the extraction process. The performance of the MLSONN architecture
with representative values of the threshold parameters of the beta
activation function employed is also studied. A three layer bidirectional
self organizing neural network (BDSONN) architecture
comprising fully connected neurons, for the extraction of objects from
a noisy background and capable of incorporating the underlying image
context heterogeneity through variable and adaptive thresholding,
is proposed in this article. The input layer of the network architecture
represents the fuzzy membership information of the image scene to
be extracted. The second layer (the intermediate layer) and the final
layer (the output layer) of the network architecture deal with the self
supervised object extraction task by bi-directional propagation of the
network states. Each layer except the output layer is connected to the
next layer following a neighborhood based topology. The output layer
neurons are, in turn, connected to the intermediate layer following
a similar topology, thus forming a counter-propagating architecture
with the intermediate layer. The novelty of the proposed architecture
is that the assignment/updating of the inter-layer connection weights
is done using the relative fuzzy membership values at the constituent
neurons in the different network layers. Another interesting feature
of the network lies in the fact that the processing capabilities of
the intermediate and the output layer neurons are guided by a beta
activation function, which uses image context sensitive adaptive
thresholding arising out of the fuzzy cardinality estimates of the
different network neighborhood fuzzy subsets, rather than resorting to
fixed and single point thresholding. An application of the proposed
architecture for object extraction is demonstrated using a synthetic
image and a real-life image. The extraction efficiency of the proposed
network architecture is evaluated by a proposed system transfer index
characteristic of the network.
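The linear index of fuzziness that drives the error estimate is a standard measure: for membership values mu in [0, 1], nu = (2/n) * sum(min(mu, 1 - mu)). A minimal sketch:

```python
import numpy as np

def linear_fuzziness_index(mu):
    """Linear index of fuzziness: 0 for a crisp (0/1) membership set,
    1 when every membership equals 0.5 (maximal fuzziness)."""
    mu = np.asarray(mu, dtype=np.float64)
    return 2.0 * np.minimum(mu, 1.0 - mu).mean()

print(linear_fuzziness_index([0.0, 1.0, 1.0, 0.0]))  # 0.0  (crisp)
print(linear_fuzziness_index([0.5, 0.5, 0.5, 0.5]))  # 1.0  (maximally fuzzy)
```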
Abstract: Clusters of microcalcifications in mammograms are an
important sign of breast cancer. This paper presents a complete
Computer Aided Detection (CAD) scheme for automatic detection of
clustered microcalcifications in digital mammograms. The proposed
system, MammoScan μCaD, consists of three main steps. Firstly
all potential microcalcifications are detected using a feature
extraction method, VarMet, together with adaptive thresholding. This
also produces a number of false detections. The goal of the second step,
Classifier level 1, is to remove everything but microcalcifications.
The last step, Classifier level 2, uses learned dictionaries and sparse
representations as a texture classification technique to distinguish
single, benign microcalcifications from clustered microcalcifications,
in addition to removing some remaining false detections. The system
is trained and tested on true digital data from Stavanger University
Hospital, and the results are evaluated by radiologists. The overall
results are promising, with a sensitivity > 90% and a low false
detection rate (approximately 1 unwanted detection per image, or 0.3
false detections per image).
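Generic adaptive thresholding of the kind used to flag candidate microcalcifications compares each pixel against its local mean plus a multiple of the local standard deviation; the window size and k below are illustrative assumptions, not VarMet's parameters:

```python
import numpy as np

def adaptive_threshold(img, win=5, k=2.0):
    """Flag pixels brighter than local mean + k * local std, computed
    over a win x win window (reflect-padded at the borders)."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=bool)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + win, x:x + win]
            out[y, x] = img[y, x] > window.mean() + k * window.std()
    return out

# Flat background with one bright spot: only the spot is flagged.
img = np.full((15, 15), 50.0)
img[7, 7] = 200.0
mask = adaptive_threshold(img)
print(mask.sum(), mask[7, 7])   # 1 True
```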
Abstract: Speckle noise affects all coherent imaging systems
including medical ultrasound. In medical images, noise suppression
is a particularly delicate and difficult task. A tradeoff between noise
reduction and the preservation of actual image features has to be made
in a way that enhances the diagnostically relevant image content.
Even though wavelets have been extensively used for denoising
speckle images, we have found that denoising using contourlets gives
much better performance in terms of SNR, PSNR, MSE, variance and
correlation coefficient. The objective of this paper is to determine
the number of levels of Laplacian pyramidal decomposition, the number
of directional decompositions to perform at each pyramidal level, and
the thresholding schemes that yield optimal despeckling of medical
ultrasound images in particular. The proposed method consists of the
log transformed original ultrasound image being subjected to contourlet
transform, to obtain contourlet coefficients. The transformed
image is denoised by applying thresholding techniques on the individual
bandpass subbands using a Bayes shrinkage rule. We quantify the
achieved performance improvement.
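The Bayes shrinkage rule can be sketched per subband: estimate the noise level from the coefficients' median absolute value, form the BayesShrink threshold T = sigma_noise^2 / sigma_signal, and soft-threshold. The sketch below is simplified to a single 1D coefficient array, not the full contourlet pipeline:

```python
import numpy as np

def bayes_shrink(coeffs):
    """BayesShrink soft thresholding of one subband's coefficients."""
    sigma_n = np.median(np.abs(coeffs)) / 0.6745        # robust noise estimate
    sigma_x2 = max(np.var(coeffs) - sigma_n**2, 1e-12)  # signal variance
    T = sigma_n**2 / np.sqrt(sigma_x2)                  # BayesShrink threshold
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - T, 0.0)

rng = np.random.default_rng(0)
clean = np.zeros(256)
clean[::32] = 8.0                          # sparse "detail" coefficients
noisy = clean + rng.normal(0, 1, 256)
denoised = bayes_shrink(noisy)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
print(err_after < err_before)              # shrinkage reduces the error
```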
Abstract: In this work, a novel approach for color image
segmentation is discussed that uses higher-order entropy as a
textural feature for determining thresholds over a two-dimensional
image histogram. A similar approach is applied to achieve multi-level
thresholding in both grayscale and color images. The paper discusses
two methods of color image segmentation using RGB as the standard
processing space. The threshold for segmentation is decided by
maximizing the conditional entropy in the two-dimensional histograms
of the color image separated into three grayscale images of R, G, and
B. The features are first developed independently for the three
(R, G, B) channels and then combined to obtain the color component
segmentations. Considering local maxima instead of the global maximum
of the conditional entropy yields multiple thresholds for the same
image, which forms the basis for multilevel thresholding.
Abstract: Emerging bio-engineering fields such as Brain
Computer Interfaces, neuroprosthesis devices, and the modeling and
simulation of neural networks have led to increased research activity
in algorithms for the detection, isolation, and classification of Action
Potentials (APs) from noisy data trains. Current techniques in the field
of 'unsupervised, no-prior-knowledge' biosignal processing include
energy operators, wavelet detection, and adaptive thresholding. These
tend to be biased towards larger AP waveforms, APs may be missed due
to deviations in spike shape and frequency, and correlated noise
spectra can cause false detections. Such algorithms also tend to
suffer from large computational expense.
A new signal detection technique based upon the ideas of phase-space
diagrams and trajectories is proposed, using a delayed copy of the
AP to highlight discontinuities relative to background noise. This
idea has been used to create algorithms that are computationally
inexpensive and address the above problems.
Distinct APs have been picked out and manually classified from
real physiological data recorded from a cockroach. To facilitate
testing of the new technique, an Auto-Regressive Moving Average
(ARMA) noise model has been constructed based upon the background
noise of the recordings. Along with the AP classifications, this
model enables generation of realistic neuronal data sets at arbitrary
signal-to-noise ratios (SNR).
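The delayed-copy phase-space idea can be sketched as pairing the signal with a delayed copy of itself: background noise stays near the diagonal x(t) = x(t - d), while a fast AP excursion leaves it. The delay and the noise-scaled detection threshold below are illustrative assumptions:

```python
import numpy as np

def detect_events(x, delay=3, k=5.0):
    """Flag samples whose distance from the phase-space diagonal
    |x(t) - x(t - delay)| exceeds k times a robust noise scale."""
    diff = np.abs(x[delay:] - x[:-delay])   # distance from the diagonal
    noise_scale = np.median(diff) / 0.6745  # robust noise estimate
    return np.nonzero(diff > k * noise_scale)[0] + delay

rng = np.random.default_rng(3)
x = rng.normal(0, 0.2, 1000)
x[500:504] += np.array([2.0, 4.0, 2.0, 1.0])   # a spike-like event
events = detect_events(x)
print(events)   # indices clustered around the inserted spike
```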
Abstract: In this paper, we propose a texture feature-based
language identification using wavelet-domain BDIP (block difference
of inverse probabilities) and BVLC (block variance of local
correlation coefficients) features and FFT (fast Fourier transform)
feature. In the proposed method, wavelet subbands are first obtained
from a test image by the wavelet transform and denoised by Donoho's
soft thresholding. The BDIP and BVLC operators are next applied to the
wavelet subbands. FFT blocks are also obtained by applying the 2D
(two-dimensional) FFT to the blocks into which the test image is
partitioned. Some significant FFT coefficients in each block are
selected, and the magnitude operator is applied to them. Moments for
each subband of BDIP and BVLC and for each magnitude of significant
FFT coefficients are then computed and fused into a feature vector. In
classification, a stabilized Bayesian classifier, which adopts variance
thresholding, searches for the training feature vector most similar to the
test feature vector. Experimental results show that the proposed
method with the three operations yields excellent language
identification even with rather low feature dimension.
Abstract: This framework describes a computationally more
efficient and adaptive threshold estimation method for image
denoising in the wavelet domain based on Generalized Gaussian
Distribution (GGD) modeling of subband coefficients. In the
proposed method, the threshold estimate is chosen by analysing
statistical parameters of the wavelet subband coefficients: the
standard deviation, the arithmetic mean, and the geometric mean.
The noisy image is first decomposed into many levels to obtain
different frequency bands. Soft thresholding is then used to remove
the noisy coefficients, with the optimum threshold value fixed by
the proposed method. Experimental results on several test images
show that this method yields significantly superior image quality
and a better Peak Signal-to-Noise Ratio (PSNR). To prove the
efficiency of this method for image denoising, we compare it with
various denoising methods such as the Wiener filter, the average
filter, VisuShrink, and BayesShrink.
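The core of such a pipeline, one level of wavelet decomposition followed by soft thresholding of the detail coefficients, can be sketched with a Haar transform; the universal threshold stands in here for the paper's GGD-based estimate, and the signal is 1D for brevity:

```python
import numpy as np

def haar_denoise_1d(x):
    """One-level Haar transform, soft thresholding of details with the
    universal threshold, then exact inverse Haar reconstruction."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    sigma = np.median(np.abs(detail)) / 0.6745       # robust noise estimate
    T = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    detail = np.sign(detail) * np.maximum(np.abs(detail) - T, 0.0)
    out = np.empty_like(x, dtype=np.float64)
    out[0::2] = (approx + detail) / np.sqrt(2)       # inverse Haar step
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(4)
clean = np.repeat([0.0, 5.0, 0.0, 5.0], 64)          # piecewise-constant signal
noisy = clean + rng.normal(0, 0.5, clean.size)
denoised = haar_denoise_1d(noisy)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```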