Abstract: Diabetic Retinopathy (DR) is an eye disease that can lead to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina, called hemorrhages and exudates respectively. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. Since the optic disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB code. The results show that the method detects hard exudates and highly probable soft exudates, and that it detects hemorrhages while distinguishing them from blood vessels.
Abstract: Diabetic Retinopathy (DR) is a severe retinal disease caused by diabetes mellitus; it leads to blindness when it progresses to the proliferative stage. Early indications of DR are the appearance of microaneurysms, hemorrhages, and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. A Support Vector Machine (SVM) classifier is then used to classify retinal images as normal or abnormal, the abnormal cases covering non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code, and it reliably detects DR: the sensitivity, specificity, and accuracy of the approach are 90%, 87.5%, and 91.4%, respectively.
Abstract: Robust, secure watermarking algorithms and their optimization have become the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property, and entropy. In this scheme, both the host image and the watermark are preprocessed: the host image with a Butterworth filter and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares; one share is used for embedding the watermark, and the other for resolving any dispute with the aid of a trusted authority. Using the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, histogram shape, and entropy makes the algorithm more robust and imperceptible while ensuring copyright protection for the owner.
Abstract: The insulation performance of a gas insulated system is severely affected by particle contaminants. These metallic particles adversely affect the characteristics of the insulating system: they can produce surface charges due to partial discharge activity, and when free to move they enhance the local electric fields. This paper deals with the influence of a conducting particle placed in a co-axial duct on the discharge characteristics of gas mixtures. A co-axial duct placed in a high pressure chamber is used for the purpose. Gas pressures of 0.1, 0.2, and 0.3 MPa have been considered with a 10:90 SF6/N2 gas mixture. The 2D and 3D histograms of the clean duct and of the duct with a copper particle are discussed in this paper.
Abstract: With the aging of the world population and the
continuous growth of technology, service robots are increasingly
being explored as alternatives to caregivers or personal
assistants for the elderly or disabled people. Any service robot
should be capable of interacting with its human companion, receiving
commands, navigating through known or unknown environments,
and recognizing objects. This paper proposes an approach
for object recognition based on the use of depth information and
color images for a service robot. We present a study of two of the
most widely used methods for object detection, in which 3D data is used
to detect the positions of the objects to be classified, which lie on
horizontal surfaces. Since most of the objects of interest accessible to service
robots are on these surfaces, the proposed 3D segmentation reduces
the processing time and simplifies the scene for object recognition.
The first approach for object recognition is based on color histograms,
while the second is based on the use of the SIFT and SURF feature
descriptors. We present comparative experimental results obtained
with a real service robot.
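As a rough sketch of the first recognition approach, an object can be modeled by a quantized, normalized color histogram and matched by histogram intersection; the bin count and toy pixel data below are illustrative assumptions, not the settings used with the real robot.

```python
def color_histogram(pixels, bins=4):
    """Quantized RGB histogram, normalized to sum to 1.
    `pixels` is a list of (r, g, b) tuples with 8-bit channels."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    n = float(len(pixels))
    return [h / n for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def classify(query, models):
    """Return the model label whose histogram best matches the query."""
    return max(models, key=lambda label: intersection(query, models[label]))

# Toy example: a mostly-red object vs. a mostly-blue one.
red_obj  = color_histogram([(250, 10, 10)] * 90 + [(10, 10, 250)] * 10)
blue_obj = color_histogram([(10, 10, 250)] * 95 + [(250, 10, 10)] * 5)
query    = color_histogram([(245, 20, 15)] * 80 + [(10, 10, 250)] * 20)
print(classify(query, {"red": red_obj, "blue": blue_obj}))  # -> red
```

Histogram matching of this kind needs no training phase, which is one reason color histograms remain attractive on robots with limited on-board compute.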
Abstract: In this paper, we propose an improved face recognition algorithm using histogram-based features in the spatial and frequency domains. To add spatial information of the face and improve recognition performance, a region-division (RD) method is utilized. The facial area is first divided into several regions; feature vectors for each facial part are then generated by a Binary Vector Quantization (BVQ) histogram using DCT coefficients in the low-frequency domain, as well as by a Local Binary Pattern (LBP) histogram in the spatial domain. Recognition results for the different regions are first obtained separately and then fused by weighted averaging. The publicly available ORL database, which consists of 40 subjects with 10 images per subject containing variations in lighting, pose, and expression, is used for the evaluation of the proposed algorithm. It is demonstrated that face recognition using the RD method achieves a much higher recognition rate.
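The LBP histogram used as the spatial-domain feature above can be sketched in a few lines; this is a minimal 8-neighbour, 256-bin version, not the paper's exact configuration.

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code for pixel (y, x)."""
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Normalized 256-bin LBP histogram over interior pixels."""
    hist = [0.0] * 256
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img, y, x)] += 1
    n = float((h - 2) * (w - 2))
    return [v / n for v in hist]

flat = [[100] * 5 for _ in range(5)]   # uniform patch
print(lbp_histogram(flat)[255])        # all neighbours >= centre -> 1.0
```

Because each neighbour is compared only to its centre pixel, the histogram is unchanged under monotonic gray-scale shifts, which is what makes LBP attractive under lighting variations.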
Abstract: This paper presents an unsupervised color image segmentation method based on a hierarchical analysis of the 2-D histogram in RGB color space. This histogram minimizes the storage space of images and thus facilitates operations between them. The improved segmentation approach yields a better identification of the objects in a color image while keeping the system fast.
Abstract: Our ability to solve complex engineering problems is directly related to the processing capacity of computers, which allow numerical algorithms to be run quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to produce incorrect results when skewed data are present, since normal distributions are symmetric about their means. In order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modelled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modelled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability when the friction angle is treated as a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison reveals relevant differences when analyzed in light of risk management.
Abstract: In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than its magnitude, the phase congruency concept is used to capture the structural features, while the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless nature of phase congruency and the robustness of the CSLBP operator on flat image regions, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and CSLBP values of each pixel of the image with respect to its neighborhood; the histogram of oriented phase and the histogram of CSLBP values for local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor, with a linear Support Vector Machine (SVM) used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor achieves better detection performance than a set of state-of-the-art feature extraction methodologies.
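The CSLBP operator mentioned above compares the four center-symmetric neighbour pairs rather than each neighbour against the centre, yielding a compact 16-bin histogram; the sketch below assumes a small threshold t for robustness on flat regions (the value 3 is illustrative, not the paper's setting).

```python
def cslbp_code(img, y, x, t=3):
    """Center-Symmetric LBP: compare the 4 opposing neighbour pairs."""
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    code = 0
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        if img[y + dy1][x + dx1] - img[y + dy2][x + dx2] > t:
            code |= 1 << bit
    return code

def cslbp_histogram(img, t=3):
    """Normalized 16-bin CS-LBP histogram over interior pixels."""
    hist = [0.0] * 16
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[cslbp_code(img, y, x, t)] += 1
    n = float((h - 2) * (w - 2))
    return [v / n for v in hist]

# On a flat patch the threshold t suppresses noise: only code 0 appears.
flat = [[80] * 4 for _ in range(4)]
print(cslbp_histogram(flat)[0])  # -> 1.0
```

The 16-bin histogram is 16 times shorter than a full LBP histogram, which is one reason CSLBP is favored for dense descriptors.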
Abstract: Recently, traffic monitoring has attracted the attention
of computer vision researchers. Many algorithms have been
developed to detect and track moving vehicles. In fact, vehicle
tracking in daytime and in nighttime cannot be approached with the
same techniques, due to the extremely different illumination conditions.
Consequently, traffic-monitoring systems are in need of having a
component to differentiate between daytime and nighttime scenes. In
this paper, an HSV-based day/night detector is proposed for traffic
monitoring scenes. The detector employs the hue histogram and the
value histogram of the top half of the image frame. Experimental
results show that the extraction of the brightness features along with
the color features within the top region of the image is effective for
classifying traffic scenes. In addition, the detector achieves high
precision and recall rates and is feasible for real-time
applications.
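A minimal version of such a value-histogram test on the top half of the frame might look as follows; the brightness threshold is an assumed tuning parameter, and the real detector also uses the hue histogram.

```python
def is_daytime(value_channel, threshold=100):
    """Classify a scene as day/night from the brightness (V) histogram
    of the top half of the frame. `value_channel` is a 2-D list of
    8-bit V values; `threshold` is an assumed tuning parameter."""
    top = value_channel[: len(value_channel) // 2]
    hist = [0] * 256
    for row in top:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    mean_v = sum(i * c for i, c in enumerate(hist)) / total
    return mean_v > threshold

day   = [[200] * 8 for _ in range(8)]   # bright sky in the top half
night = [[20] * 8 for _ in range(8)]    # dark sky
print(is_daytime(day), is_daytime(night))  # -> True False
```

Restricting the histogram to the top region follows the abstract's observation that the sky area carries the most reliable day/night cue in traffic scenes.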
Abstract: In order to retrieve images efficiently from a large
database, a unique method integrating color and texture features
using genetic programming has been proposed. The opponent color
histogram, which is invariant to shadow, shade, and light intensity,
is employed in the proposed framework for extracting color
features. For texture feature extraction, the fast discrete curvelet
transform, which captures more orientation information at different
scales, is incorporated to represent curve-like edges. A central
issue in image retrieval is reducing the semantic gap between the
user's preferences and low-level features. To address this concern,
a genetic algorithm combined with relevance feedback is embedded to
reduce the semantic gap and retrieve the user's preferred
images. Extensive and comparative experiments have been conducted
to evaluate the proposed framework for content-based image retrieval on
two databases, COIL-100 and Corel-1000. Experimental results
clearly show that the proposed system surpasses other existing
systems in terms of precision and recall. The proposed work achieves
its highest performance with an average precision of 88.2% on COIL-100
and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3%
on Corel. Thus, the experimental results confirm that the proposed
content-based image retrieval architecture attains a better
solution for image retrieval.
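The intensity invariance of opponent color features can be illustrated by normalizing the two chroma axes by the intensity; the sketch below is a simplified stand-in for the framework's opponent color histogram (the bin count and normalization details are illustrative assumptions).

```python
def opponent_histogram(pixels, bins=8, eps=1e-9):
    """2-D histogram over intensity-normalized opponent chroma axes.
    Dividing by the intensity (R+G+B) gives invariance to uniform
    light-intensity scaling such as shade and shadow."""
    hist = [0.0] * (bins * bins)
    for r, g, b in pixels:
        s = r + g + b + eps
        o1 = (r - g) / s          # red-green chroma axis, in (-1, 1)
        o2 = (r + g - 2 * b) / s  # yellow-blue chroma axis, in (-2, 2)
        i = min(int((o1 + 1) / 2 * bins), bins - 1)
        j = min(int((o2 + 2) / 4 * bins), bins - 1)
        hist[i * bins + j] += 1
    n = float(len(pixels))
    return [v / n for v in hist]

# Halving the illumination leaves the chroma histogram unchanged.
bright = opponent_histogram([(200, 80, 40)] * 20)
dim    = opponent_histogram([(100, 40, 20)] * 20)
print(bright == dim)  # -> True
```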
Abstract: Background modeling and subtraction in video
analysis has been widely used as an effective method for moving
objects detection in many computer vision applications. Recently, a
large number of approaches have been developed to tackle different
types of challenges in this field. However, dynamic backgrounds
and illumination variations are the most frequently occurring problems
in practical situations. This paper presents an effective two-layer
model based on the codebook algorithm incorporated with a local binary
pattern (LBP) texture measure, targeted at handling dynamic
background and illumination variation problems. More specifically,
the first layer is a block-based codebook combined with an LBP
histogram and the mean value of each RGB color channel. Because
of the invariance of the LBP features with respect to monotonic
gray-scale changes, this layer produces block-wise detection results
with considerable tolerance of illumination variations. A pixel-based
codebook is then employed to refine the output of the first
layer, further eliminating false positives. As a result, the
proposed approach greatly improves accuracy under
dynamic backgrounds and illumination changes.
Experimental results on several popular background subtraction
datasets demonstrate very competitive performance compared to
previous models.
Abstract: In this paper, we present a comparative study of three
methods for 2D face recognition: Iso-Geodesic Curves
(IGC), Geodesic Distance (GD), and Geodesic-Intensity Histogram
(GIH). These approaches are based on computing the geodesic
distance between points of the facial surface and between facial curves.
In this study we represented the image at gray level as a 2D surface in
a 3D space, with the third coordinate proportional to the intensity
values of pixels. In the classification step, we use Neural Networks
(NN), K-Nearest Neighbor (KNN), and Support Vector Machines
(SVM). The images used in our experiments are from two well-known
databases of face images, ORL and YaleB. The ORL database was
used to evaluate the performance of the methods under conditions where
the pose and sample size are varied, and the YaleB database was used
to examine the performance of the systems when the facial
expressions and lighting are varied.
Abstract: Background subtraction and temporal difference are
often used for moving object detection in video. Both approaches are
computationally simple and easy to deploy in real-time image
processing. However, while background subtraction is highly
sensitive to dynamic background and illumination changes, the
temporal difference approach is poor at extracting relevant pixels of
the moving object and at detecting the stopped or slowly moving
objects in the scene. In this paper, we propose a simple moving object
detection scheme based on adaptive background subtraction and
temporal difference exploiting dynamic background updates. The
proposed technique consists of histogram equalization and a linear
combination of background subtraction and temporal difference, followed by
novel frame-based and pixel-based background updating techniques.
Finally, morphological operations are applied to the output images.
Experimental results show that the proposed algorithm overcomes the
drawbacks of both the background subtraction and temporal difference
methods and provides better performance than either method alone.
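The linear combination of background subtraction and temporal difference, with a pixel-based background update, can be sketched as follows; alpha, rho, and the threshold are assumed tuning parameters, and the frame-based update and morphological post-processing steps are omitted here.

```python
def detect_moving(frame, prev_frame, background,
                  alpha=0.6, rho=0.05, thresh=25):
    """One step of a combined detector: per pixel, linearly mix the
    background difference and the temporal difference, threshold the
    result, and adapt the background where no motion was found.
    Returns (binary mask, updated background)."""
    h, w = len(frame), len(frame[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            bg_diff = abs(frame[y][x] - background[y][x])
            tmp_diff = abs(frame[y][x] - prev_frame[y][x])
            d = alpha * bg_diff + (1 - alpha) * tmp_diff
            if d > thresh:
                mask[y][x] = 1
            else:  # adapt the background only at non-motion pixels
                background[y][x] = ((1 - rho) * background[y][x]
                                    + rho * frame[y][x])
    return mask, background

bg    = [[50.0] * 4 for _ in range(4)]
prev  = [[50] * 4 for _ in range(4)]
frame = [[50] * 4 for _ in range(4)]
frame[1][2] = 200                      # a moving object appears
mask, bg = detect_moving(frame, prev, bg)
print(mask[1][2], mask[0][0])  # -> 1 0
```

Mixing the two cues lets the background term catch slow or stopped objects while the temporal term reacts quickly to sudden changes.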
Abstract: In this paper, we are interested in the problem of
finding similar images in a large database. For this purpose we
propose a new algorithm based on a combination of the 2-D
histogram intersection in the HSV space and statistical moments. The
proposed histogram is based on a 3x3 window rather than only on the
intensity of the pixel. This approach overcomes the drawback of the
conventional 1-D histogram, which ignores the spatial distribution
of pixels in the image, while the statistical moments are used to
mitigate the effects of the discretisation of the color space that is
intrinsic to the use of histograms. We compare the performance of
our new algorithm with various state-of-the-art methods and
show that it has several advantages. It is fast, consumes little memory
and requires no learning. To validate our results, we apply this
algorithm to search for similar images in different image databases.
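A minimal stand-in for such a 3x3-window histogram pairs each pixel's intensity with its local mean and compares images by histogram intersection; the bin count is an illustrative assumption, and the statistical-moment term of the full method is omitted.

```python
def spatial_2d_histogram(img, bins=8):
    """2-D histogram indexed by (pixel value, 3x3-neighbourhood mean),
    so the spatial arrangement of intensities is partly captured,
    unlike a conventional 1-D histogram."""
    hist = [0.0] * (bins * bins)
    h, w = len(img), len(img[0])
    step = 256.0 / bins
    count = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            mean = sum(img[y + dy][x + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            hist[int(img[y][x] / step) * bins + int(mean / step)] += 1
            count += 1
    return [v / count for v in hist]

def intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

a = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
b = [[10, 10, 10], [10, 240, 10], [10, 10, 10]]
print(intersection(spatial_2d_histogram(a), spatial_2d_histogram(a)))  # -> 1.0
print(intersection(spatial_2d_histogram(a), spatial_2d_histogram(b)) < 1.0)
```

Because the descriptor is just a fixed-length vector compared by intersection, matching is fast, memory-light, and needs no learning, in line with the advantages claimed above.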
Abstract: The detection of moving objects from video image
sequences is very important for object tracking, activity recognition,
and behavior understanding in video surveillance.
The most widely used approach for moving object detection and tracking
is background subtraction. Many background subtraction algorithms have
been suggested, but these are sensitive to illumination changes, and
the solutions proposed to bypass this problem are time-consuming.
In this paper, we propose a robust yet computationally efficient
background subtraction approach, focusing mainly on the ability to
detect moving objects in dynamic scenes, for possible applications in
the monitoring of complex and restricted-access areas, where moving and
motionless persons must be reliably detected. It consists of three
main phases: establishing invariance to illumination changes,
background/foreground modeling, and morphological analysis for
noise removal.
We handle illumination changes using Contrast Limited Adaptive
Histogram Equalization (CLAHE), which limits the contrast amplification
at each pixel to a user-determined maximum. Thus, it mitigates the degradation due to
scene illumination changes and improves the visibility of the video
signal. Initially, the background and foreground images are extracted
from the video sequence. Then, the background and foreground
images are separately enhanced by applying CLAHE.
In order to form multi-modal backgrounds, we model each channel
of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture
Model (GMM). Finally, we post-process the resulting binary
foreground mask using morphological erosion and dilation
transformations to remove possible noise.
For the experimental tests, we used a standard dataset to evaluate the
efficiency and accuracy of the proposed method on a diverse set of
dynamic scenes.
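The clip-limit idea behind CLAHE can be sketched on a single tile: histogram counts above the limit are clipped and redistributed before the equalization mapping is built. This is a simplified global version; real CLAHE operates on tiles and bilinearly interpolates between their mappings.

```python
def clip_limited_equalize(img, clip_limit=40, bins=256):
    """Histogram equalization with a CLAHE-style clip limit, applied
    globally for simplicity. Counts above `clip_limit` are clipped and
    the excess is redistributed uniformly, capping the contrast gain."""
    hist = [0] * bins
    for row in img:
        for v in row:
            hist[v] += 1
    excess = sum(max(0, c - clip_limit) for c in hist)
    hist = [min(c, clip_limit) + excess // bins for c in hist]
    # Build the intensity mapping from the cumulative distribution.
    total = sum(hist)
    cdf, run = [], 0
    for c in hist:
        run += c
        cdf.append(run)
    lut = [round((c / total) * (bins - 1)) for c in cdf]
    return [[lut[v] for v in row] for row in img]

# A low-contrast patch (values 100..102) gets stretched over the range.
dim = [[100 + (x % 3) for x in range(16)] for _ in range(16)]
out = clip_limited_equalize(dim, clip_limit=40)
print(min(min(r) for r in out), max(max(r) for r in out))  # -> 85 255
```

The clip limit is the "user-determined maximum" the abstract refers to: without it, plain histogram equalization would also amplify noise in near-uniform regions.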
Abstract: The aim of this work is to build a model based on
tissue characterization that is able to discriminate between pathological and
non-pathological regions from three-phasic CT images. Based on
feature selection in the different phases, we design a neural network
system with an optimal number of neurons in the hidden
layer. Our approach consists of three steps:
feature selection, feature reduction, and classification. For each
region of interest (ROI), 6 distinct sets of texture features are
extracted: first-order histogram parameters, absolute gradient,
run-length matrix, co-occurrence matrix, autoregressive model, and
wavelet, for a total of 270 texture features. When analyzing multiple
phases, we show that the injection of liquid changes the most
relevant features in each region. Our results demonstrate that phase 3
is the best phase for detecting HCC tumors for most of the features
that we apply to the classification algorithm. The best discrimination
between the pathological and healthy classes, according to our
method, is obtained with the first-order histogram parameters, with
accuracies of 85% in phase 1, 95% in phase 2, and 95% in phase 3.
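First-order histogram parameters of the kind listed above are simple moments of the gray-level distribution within the ROI; the sketch below computes a common subset (the exact parameter set used in the study may differ).

```python
def first_order_features(roi):
    """First-order histogram parameters for a region of interest:
    mean, variance, skewness, and kurtosis of the gray levels."""
    vals = [v for row in roi for v in row]
    n = float(len(vals))
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    sd = var ** 0.5 or 1.0   # avoid division by zero on flat regions
    skew = sum((v - mean) ** 3 for v in vals) / n / sd ** 3
    kurt = sum((v - mean) ** 4 for v in vals) / n / sd ** 4
    return {"mean": mean, "variance": var,
            "skewness": skew, "kurtosis": kurt}

uniform = [[120, 120], [120, 120]]   # e.g. homogeneous healthy tissue
varied  = [[0, 50], [200, 250]]      # e.g. heterogeneous lesion texture
print(first_order_features(uniform)["variance"])      # -> 0.0
print(first_order_features(varied)["variance"] > 0)   # -> True
```

These moments are "first-order" because they depend only on the gray-level histogram, not on the spatial relationships captured by the co-occurrence or run-length features.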
Abstract: This paper presents an efficient fusion algorithm for
iris images to generate stable features for recognition in unconstrained
environments. Recently, iris recognition systems have focused on real
daily-life scenarios without the subject's cooperation. Under
large variations in the environment, the objective of this paper is to
combine information from multiple images of the same iris. The
result of image fusion is a new image that is more stable for further
iris recognition than each original noisy iris image. A wavelet-based
approach for multi-resolution image fusion is applied in the fusion
process. The detection of the iris image is based on the AdaBoost
algorithm, and a local binary pattern (LBP) histogram is then
applied to texture classification with a weighting scheme.
Experiments showed that the features generated by the proposed
fusion algorithm can improve the performance of a verification system
based on iris recognition.
Abstract: Content-based image retrieval (CBIR) uses the
contents of images to characterize and retrieve them. This paper
focuses on retrieving images by separating them into their three color
channels, R, G, and B, to which the Discrete Wavelet Transform
is applied. A wavelet-based Generalized Gaussian Density (GGD)
model is then applied to the coefficients of the wavelet
transforms. The result is passed to the Histogram of Oriented
Gradients (HOG) to extract its feature vectors, and a relevance
feedback technique is used. The performance of this approach is
measured by precision, and the results confirm that the method
is well suited for image retrieval.
Abstract: Image spam is a kind of email spam in which the spam
text is embedded in an image. It is a new spamming technique
used by spammers to send their messages to internet
users in bulk. Spam email has become a big problem for internet
users, causing wasted time and economic losses. The main
objective of this paper is to detect image spam by using the histogram
properties of an image. Though there are many techniques to
automatically detect and avoid this problem, spammers employ
new tricks to bypass them, rendering those techniques
inefficient at detecting spam mails. In this paper, we propose a
new method to detect image spam. The image features are
extracted using the RGB histogram, the HSV histogram, and the
combination of both. Based on the optimized image
feature set, classification is done using the k-Nearest Neighbor (k-NN)
algorithm. Experimental results show that our method achieves
good accuracy, and that the combination of the RGB
and HSV histograms with the k-NN algorithm gives the best accuracy for
spam detection.
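The classification step above can be sketched as a k-NN vote on concatenated per-channel histograms; for brevity only the RGB histograms are used here, and the toy data assumes (purely for illustration) that spam images tend to be brighter than ham images.

```python
def hist_features(pixels, bins=4):
    """Concatenated, normalized per-channel histograms. Only the RGB
    channels are used here; the full method also appends HSV ones."""
    step = 256 // bins
    feats = []
    for ch in range(3):
        h = [0.0] * bins
        for p in pixels:
            h[p[ch] // step] += 1
        feats.extend(v / len(pixels) for v in h)
    return feats

def knn_classify(query, train, k=3):
    """k-Nearest Neighbour majority vote with Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda s: dist(query, s[0]))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# Toy training set: bright "spam" images vs. dull "ham" images.
train = ([(hist_features([(230, 230, 230)] * 50), "spam")] * 3 +
         [(hist_features([(60, 60, 60)] * 50), "ham")] * 3)
query = hist_features([(220, 225, 218)] * 50)
print(knn_classify(query, train))  # -> spam
```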