Abstract: This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are intended for future applications by scientists. The image database was collected from different websites. We worked with RGB images of 220x220 pixels stored in the BitMap (BMP) format. Window scanning and processing were then carried out for each image. The scan window from which the features were extracted measured 20x20 pixels, with a scanning step of 10 pixels. Brightness, contrast and contour orientation histograms were used as inputs for the RSC. The RSC distinguished two classes: 1) meteor present and 2) no meteor. Different tests were carried out by varying the number of training cycles and the number of images used for training and recognition. The percentage error of the neural classifier was calculated. The results show a good RSC response, with 89% correct recognition. The results of these experiments are presented and discussed.
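As a rough illustration of the scanning scheme just described, the sketch below slides a 20x20 window with a step of 10 over a 220x220 grayscale array and extracts two of the mentioned features (brightness as the mean, contrast as the standard deviation). `window_features` and the synthetic image are assumptions for illustration, not the authors' code:

```python
import statistics

def window_features(image, win=20, step=10):
    """Yield (row, col, brightness, contrast) for each scan window."""
    h, w = len(image), len(image[0])
    feats = []
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            pixels = [image[r + i][c + j] for i in range(win) for j in range(win)]
            brightness = statistics.fmean(pixels)   # mean intensity of the window
            contrast = statistics.pstdev(pixels)    # spread of intensities
            feats.append((r, c, brightness, contrast))
    return feats

# A flat synthetic 220x220 image; the scan yields 21x21 = 441 windows.
flat = [[128] * 220 for _ in range(220)]
feats = window_features(flat)
print(len(feats))  # 441
```

With these dimensions the scan produces 441 feature windows per image, each of which would be fed to the classifier.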
Abstract: This paper presents a deep-learning method for classifying computer-generated images and photographic images. The proposed method includes a convolutional layer capable of automatically learning correlations between neighbouring pixels. In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than its structural features. The proposed layer is specifically designed to suppress an image's content and robustly learn the sensor pattern noise features (usually inherited from in-camera image processing) as well as the statistical properties of images. The method was evaluated on recent natural and computer-generated images and was found to outperform current state-of-the-art methods.
Abstract: With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis, due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. First, in order to preserve the spatial information, Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. From the TLPP transformation matrix, a weighted matrix is constructed to rank the different spectral bands by their contribution score, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated in several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. The experimental results also show that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
Abstract: In this paper, we present a comparative study of three methods for 2D face recognition: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing geodesic distances between points of the facial surface and between facial curves. In this study we represent the gray-level image as a 2D surface in a 3D space, with the third coordinate proportional to the intensity values of the pixels. In the classification step, we use Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The images used in our experiments are from two well-known face image databases, ORL and YaleB. The ORL database was used to evaluate the performance of the methods under varying pose and sample size, and the YaleB database was used to examine the performance of the systems under varying facial expressions and lighting.
Abstract: In this paper, we present a new segmentation approach for focal liver lesions in contrast-enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, uses type-II fuzzy sets to handle the uncertainty due to the image modality (presence of speckle noise, low contrast, etc.) and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by locally and recursively merging ambiguous pixels. The method has been tested on a representative database. Compared to both the Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces segmentation errors.
Abstract: This paper presents local mesh co-occurrence patterns (LMCoP) using the HSV color space for image retrieval. The HSV color space is used to exploit the color, intensity and brightness of images. Local mesh patterns (LMeP) are applied to capture the local information of the image, and gray-level co-occurrence is used to obtain the co-occurrence of LMeP pixels. The local mesh co-occurrence pattern extracts the local directional information from the local mesh pattern and converts it into a well-structured feature vector using the gray-level co-occurrence matrix. The proposed method is tested on three different databases: MIT VisTex, Corel, and STex. The algorithm is also compared with existing methods, and results in terms of precision and recall are reported.
Abstract: The aim of this paper is image encryption using a Genetic Algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce correlation among adjacent pixels. Then, pixel values are changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, these binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population of the GA consists of the rows and columns of the input image. Instead of a subjective selection of parents from this initial population, a random generator with a predefined key is utilized; this key is required to decrypt the coded image and reconstruct the original input image. The fitness function is defined as the average number of 0-to-1 transitions in the LBP image for the modification phase, and as histogram uniformity for the diffusion phase. The randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method is fast and can be used effectively for image encryption.
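The LBP operator used above to generate binary chromosomes can be sketched as follows; this is the standard 3x3 formulation, with the neighbour ordering and the `lbp_code` name chosen for illustration:

```python
def lbp_code(image, r, c):
    """8-bit LBP code for the pixel at (r, c); a neighbour >= centre sets a 1 bit."""
    centre = image[r][c]
    # Neighbour offsets, clockwise starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dr, dc in offsets:
        code = (code << 1) | (1 if image[r + dr][c + dc] >= centre else 0)
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(format(lbp_code(img, 1, 1), '08b'))  # 00011110
```

Scanning every interior pixel this way yields one 8-bit pattern per pixel, from which binary chromosomes can be drawn.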
Abstract: This paper describes a fast and efficient method for page segmentation of documents containing non-rectangular blocks. The segmentation is based on an edge-following algorithm using a small window of 16 by 32 pixels. This segmentation is very fast since only the border pixels of each paragraph are used, without scanning the whole page. However, the segmentation may contain errors if the space between columns is smaller than the window used in edge following. Consequently, this paper reduces this error by first identifying the missed segmentation point using direction information from edge following, and then applying an X-Y cut at the missed segmentation point to separate the connected columns. The advantage of the proposed method is the fast identification of missed segmentation points. The methodology is faster, with less overhead, than other algorithms that need to access many more pixels of a document.
Abstract: Spatial outliers in remotely sensed imagery are observed quantities showing unusual values compared to their neighboring pixel values. Various methods in statistics and data mining detect such spatial outliers based on spatial autocorrelation. These methods can be applied to detecting forest fire pixels in MODIS imagery from NASA's AQUA satellite, because forest fire detection can be cast as finding spatial outliers in the spatial variation of brightness temperature. This is what distinguishes our approach from traditional fire detection methods. In this paper, we propose a graph-based forest fire detection algorithm based on spatial outlier detection methods and test it to evaluate its applicability. For this, the ordinary scatter plot and Moran's scatter plot were used. To evaluate the proposed algorithm, the results were compared with the MODIS fire product provided by the NASA MODIS Science Team, which demonstrated the potential of the proposed algorithm for detecting fire pixels.
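The spatial-outlier intuition behind such a detector can be sketched with a generic neighbourhood statistic; this is not the authors' Moran-scatter-plot test, and `spatial_outliers`, the 4-neighbourhood and the threshold are illustrative assumptions:

```python
import statistics

def spatial_outliers(grid, threshold):
    """Return (row, col) of pixels whose value exceeds the 4-neighbour mean by more than threshold."""
    h, w = len(grid), len(grid[0])
    hits = []
    for r in range(h):
        for c in range(w):
            nbrs = [grid[r + dr][c + dc]
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < h and 0 <= c + dc < w]
            if grid[r][c] - statistics.fmean(nbrs) > threshold:
                hits.append((r, c))
    return hits

# One hot pixel (a fire candidate) in an otherwise uniform brightness-temperature scene.
scene = [[300] * 5 for _ in range(5)]
scene[2][2] = 400
print(spatial_outliers(scene, 50))  # [(2, 2)]
```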
Abstract: In this paper, we propose a method for detecting circular shapes with subpixel accuracy. First, the geometric properties of circles are used to find the diameters as well as the circumference pixels. The center and radius are then estimated from the circumference pixels. The proposed method has been tested on both synthetic and real images. The experimental results show that the new method is efficient.
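One plausible way to estimate the centre and radius from circumference pixels, offered as a hedged stand-in for the paper's estimator, is the centroid/mean-distance fit below (reasonable for fully sampled circles); `estimate_circle` is an assumed name:

```python
import math

def estimate_circle(points):
    """Centroid of circumference points ~ centre; mean distance to it ~ radius."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    r = sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / n
    return cx, cy, r

# Synthetic circumference samples of a circle centred at (30, 40) with radius 10.
pts = [(30 + 10 * math.cos(t * math.pi / 180), 40 + 10 * math.sin(t * math.pi / 180))
       for t in range(0, 360, 5)]
cx, cy, r = estimate_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # 30.0 40.0 10.0
```

Because the estimates are continuous averages, they are not tied to the integer pixel grid, which is the sense in which subpixel accuracy becomes possible.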
Abstract: One of the significant factors for improving the accuracy of Land Surface Temperature (LST) retrieval is a correct understanding of the directional anisotropy of thermal radiance. In this paper, the multiple scattering effect between heterogeneous non-isothermal surfaces is described rigorously using the concept of the configuration factor; based on this, a directional thermal radiance model is built, and the directional radiant character of an urban canopy is analyzed. The model is applied to a simple urban canopy with row structure to simulate the change of Directional Brightness Temperature (DBT). The results show that the DBT is increased by the multiple scattering effects, whereas the range of variation of the DBT is smoothed. The temperature difference, spatial distribution and emissivity of the components can all lead to changes in the DBT. A "hot spot" phenomenon occurs when the proportion of the high-temperature component in the field of view reaches its maximum, while a "cool spot" phenomenon occurs when the low-temperature proportion reaches its maximum. The "spot" effect disappears only when the proportion of every component remains constant. The model built in this paper can be used to study the directional effect on emissivity, LST retrieval over urban areas, and the adjacency effect of thermal remote sensing pixels.
Abstract: Removing noise from processed images is very important, and noise should be removed in such a way that important image information is preserved. A decision-based nonlinear algorithm for the elimination of band lines, drop lines, marks, band loss and impulses in images is presented in this paper. The algorithm performs two simultaneous operations, namely detection of corrupted pixels and evaluation of new pixels to replace them. Removal of these artifacts is achieved without damaging edges and details. However, the restricted window size renders the median operation less effective whenever noise is excessive; in that case, the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio Improved (SNRI), Percentage of Noise Attenuated (PONA), and Percentage of Spoiled Pixels (POSP). It is compared with standard algorithms already in use, and the improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms otherwise required for the removal of different artifacts.
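The decision-based switch between median and mean filtering can be sketched as follows; the extreme-value (salt-and-pepper) corruption detector and the three-clean-pixel cutoff are assumptions for illustration, not the paper's exact rules:

```python
import statistics

def restore_pixel(window, low=0, high=255):
    """window: flat list of 3x3 neighbourhood values, centre included."""
    clean = [v for v in window if low < v < high]   # pixels not at the extremes
    if len(clean) >= 3:
        return statistics.median(clean)             # enough clean pixels: median works
    elif clean:
        return statistics.fmean(clean)              # excessive noise: switch to mean
    return statistics.fmean(window)                 # everything corrupted: plain mean

# Centre pixel surrounded by impulse noise; median of the clean values [10, 12, 14, 16].
print(restore_pixel([255, 10, 0, 12, 255, 14, 0, 16, 255]))  # 13.0
```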
Abstract: Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, people retrieval in databases and on the Internet, etc. The major problem of such skin color detection algorithms is that they are time consuming and hence cannot be applied in a real-time system. To overcome this problem, we introduce a new fast technique for skin detection which can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The rationale for the skipping process is the high probability that neighbors of skin color pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the protection process. Since many fast detection techniques are based on image resizing, we apply our proposed pixel skipping technique together with image resizing to obtain better results. The performance evaluation of the proposed skipping and hybrid techniques in terms of measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
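The pixel-skipping idea can be sketched as below; the toy skin-colour rule and the per-row label propagation are stand-ins chosen for illustration, not the paper's classifier:

```python
def is_skin(r, g, b):
    # Toy RGB rule used purely for illustration.
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def skin_mask_skipping(image, k=2):
    """Classify every k-th pixel per row and copy the label to the skipped neighbours."""
    mask = []
    for row in image:
        labels = [False] * len(row)
        for c in range(0, len(row), k):      # only 1/k of the pixels are tested
            labels[c] = is_skin(*row[c])
        for c in range(len(row)):            # propagate labels to skipped pixels
            if c % k:
                labels[c] = labels[c - (c % k)]
        mask.append(labels)
    return mask

row = [(200, 120, 90)] * 4 + [(30, 30, 30)] * 4
print(skin_mask_skipping([row], k=2)[0])
# [True, True, True, True, False, False, False, False]
```

With a skip factor k, the classifier is invoked on roughly 1/k of the pixels, which is where the CPU-time saving comes from.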
Abstract: The iris pattern is an important biological feature of the human body and has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. First, the iris region is located; it is then remapped to a rectangular area of 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on a statistical analysis of the image to mark the eyelash and eyelid as noise points. To account for feature localization (variation), the rectangular iris image is partitioned into N overlapping sub-images (blocks); from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and low-order norms of the gradient components are used to establish the feature vector. A Euclidean-distance-based classifier is used as the matching metric to determine the degree of similarity between the feature vector extracted from a tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database, and the attained recognition accuracy reached 99.92%.
Abstract: One of the main limitations on the resolution of optical instruments is the size of the sensor's pixels. In this paper we introduce a new subpixel resolution algorithm to enhance the resolution of images. The method is based on the analysis of multiple images recorded rapidly during fine relative motion between the image and the pixel arrays of CCDs. It is shown that, applied to a sample noise-free image, this method enhances the resolution with an error on the order of 10^-14.
Abstract: In this study, a new criterion is proposed for determining the number of classes into which an image should be segmented. This criterion is based on discriminant analysis, measuring the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms are proposed for recursively segmenting the image into the determined number of classes. The proposed methods can automatically and correctly segment objects under various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
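A classic two-class instance of such a discriminant separability measure is Otsu's between-class variance, sketched here as an assumed analogue of the proposed criterion rather than the authors' exact formulation:

```python
def between_class_variance(pixels, threshold):
    """Otsu-style separability of the two pixel classes split at `threshold`."""
    lo = [p for p in pixels if p <= threshold]
    hi = [p for p in pixels if p > threshold]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)   # class weights
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)           # class means
    return w0 * w1 * (m0 - m1) ** 2

# Two well-separated populations; the criterion peaks at a threshold between them.
pixels = [10, 12, 11, 200, 205, 198]
best = max(range(256), key=lambda t: between_class_variance(pixels, t))
print(best)  # 12
```

Recursively applying such a split, and stopping when separability no longer improves, is one way the number of classes can be determined automatically.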
Abstract: Steganography means "covered writing"; it includes the concealment of information within computer files [1]. In other words, it is secret communication that hides the very existence of the message. In this paper, we use the term cover image for an image that does not yet contain a secret message, and stego image for an image with an embedded secret message; the secret message itself is referred to as the stego-message or hidden message. We propose a technique called the RGB-intensity-based steganography model, since the RGB model is commonly used in this field to hide data. The methods used here are based on manipulating the least significant bits of pixel values [3][4], or on rearranging colors to create least-significant-bit or parity-bit patterns corresponding to the message being hidden. The proposed technique attempts to overcome the weakness of sequential embedding by using a stego-key to select the pixels.
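A minimal least-significant-bit (LSB) embedding, the basic operation the abstract builds on, can be sketched as follows; the sequential pixel order here is for illustration only, whereas the proposed technique would select pixels via a stego-key:

```python
def embed(pixels, bits):
    """Replace the LSB of the first len(bits) pixel values with message bits."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the message bit
    return out

def extract(pixels, n):
    """Read the hidden message back from the LSBs of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

cover = [100, 101, 102, 103, 104, 105, 106, 107]
msg = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed(cover, msg)
print(extract(stego, 8))  # [1, 0, 1, 1, 0, 1, 0, 0]
```

Each pixel value changes by at most 1, which is why LSB embedding is visually imperceptible in typical cover images.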
Abstract: The segmentation of mouth and lips is a fundamental problem in facial image analysis. In this paper we propose a method for lip segmentation based on the rg-color histogram. Statistical analysis shows that the rg color space is optimal for this kind of purely color-based segmentation. Initially, a rough adaptive threshold selects a histogram region that ensures that all pixels in that region are skin pixels. Based on these pixels, we build a Gaussian model that represents the skin pixel distribution and is used to obtain a refined, optimal threshold. We do not incorporate shape or edge information. In experiments we show the performance of our lip pixel segmentation method compared to the ground truth of our dataset and to a conventional watershed algorithm.
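The rg chromaticity transform underlying such a histogram can be sketched as follows; the mapping is standard, and the black-pixel convention is an assumption:

```python
def rg_chromaticity(R, G, B):
    """Map an RGB pixel to (r, g) = (R, G) / (R + G + B), discarding intensity."""
    s = R + G + B
    if s == 0:
        return 0.0, 0.0   # assumed convention for black pixels
    return R / s, G / s

# A bright and a dark pixel of the same hue map to the same (r, g) point,
# which is what makes the rg histogram robust to illumination changes.
print(rg_chromaticity(200, 100, 100))  # (0.5, 0.25)
print(rg_chromaticity(100, 50, 50))    # (0.5, 0.25)
```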
Abstract: Image coding based on clustering provides immediate access to targeted features of interest in a high-quality decoded image. This approach is useful for intelligent devices as well as for multimedia content-based description standards. The result of image clustering cannot be precise everywhere, especially at pixels carrying edge information, which produce ambiguity among the clusters. Even with a good PDE-based enhancement operator, the quality of the decoded image will depend strongly on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vague properties. The presence of such a cluster preserves details inherent to edges as well as to uncertain pixels. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study demonstrating the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.
Abstract: This paper presents the application of a signal-intensity-independent registration criterion for 2D rigid-body registration of medical images using 1D binary projections. The criterion is defined as the weighted ratio of two projections. The ratio is computed on a pixel-per-pixel basis, and weighting is performed by setting the ratios between one and zero pixels to a standard high value. The mean squared value of the weighted ratio is computed over the union of the one-areas of the two projections and is minimized using Chebyshev polynomial approximation with n=5 points. The sum of the x and y projections is used for translational adjustment, and a 45° projection for rotational adjustment. Twenty T1-T2 registration experiments were performed, giving mean errors of 1.19° and 1.78 pixels. The method is suitable for contour/surface matching. Further research is necessary to determine the robustness of the method with regard to threshold, shape and missing data.
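The binary projections and the weighted-ratio criterion can be sketched as below; the binarisation rule and the value of the standard high weight are illustrative assumptions, not the paper's exact parameters:

```python
def binary_projection(image, axis=0, thresh=0):
    """Binarised column (axis=0) or row (axis=1) sums of a thresholded image."""
    if axis == 0:
        sums = [sum(row[c] > thresh for row in image) for c in range(len(image[0]))]
    else:
        sums = [sum(v > thresh for v in row) for row in image]
    return [1 if s else 0 for s in sums]

def weighted_ratio_mse(p, q, high=10.0):
    """Mean squared weighted ratio over the union of the 'one' areas of p and q."""
    vals = []
    for a, b in zip(p, q):
        if a == 0 and b == 0:
            continue                       # outside the union of one-areas
        vals.append(a / b if a and b else high)   # one/zero mismatch -> high value
    return sum(v * v for v in vals) / len(vals)

img = [[0, 5, 5, 0],
       [0, 5, 5, 0]]
p = binary_projection(img, axis=0)   # [0, 1, 1, 0]
print(weighted_ratio_mse(p, p))      # 1.0 for identical projections
```

Identical projections give a criterion value of 1.0, and any misalignment pushes it upward via the high-valued mismatch ratios, which is what the minimization exploits.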