Abstract: This manuscript presents high-accuracy palmprint recognition
by combining different texture extraction approaches.
The Region of Interest (ROI) is decomposed into different frequency-time
sub-bands by a wavelet transform up to two levels, and only the
level-two approximation image is selected, which is known as the
Approximate Image ROI (AIROI). This AIROI contains the information of the
principal lines of the palm. The Competitive Index is used as the
features of the palmprint, in which six Gabor filters of different
orientations convolve with the palmprint image to extract the orientation
information from the image. The winner-take-all strategy
is used to select dominant orientation for each pixel, which is
known as Competitive Index. Further, PCA is applied to select highly
uncorrelated Competitive Index features, to reduce the dimensions of
the feature vector, and to project the features onto the eigenspace. The
similarity of two palmprints is measured by the Euclidean distance
metric. The algorithm is tested on the Hong Kong PolyU palmprint
database. AIROIs obtained with different wavelet filter families are also
tested with the Competitive Index and PCA. The AIROI of the db7 wavelet
filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine
Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm
database.
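As a minimal sketch of the Competitive Index step: a bank of six oriented real Gabor filters is convolved with the palm image and, per pixel, a winner-take-all rule keeps only the index of the dominant orientation. The kernel size, σ, wavelength, and the use of the minimum (most line-like) response as the winner are assumptions not stated in the abstract, not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, ksize=9, sigma=2.0, lam=6.0):
    """Real part of a Gabor filter at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()          # zero-DC so flat regions give no response

def competitive_index(img, n_orient=6):
    """Winner-take-all orientation index (0..n_orient-1) per pixel."""
    responses = np.stack([
        convolve2d(img, gabor_kernel(k * np.pi / n_orient), mode="same")
        for k in range(n_orient)
    ])
    # Palm lines are dark, so the most negative response is the winner.
    return np.argmin(responses, axis=0)

# Toy usage: a vertical dark line yields a consistent index along it.
img = np.ones((32, 32))
img[:, 16] = 0.0
code = competitive_index(img)
```

The per-pixel index map is what is then vectorized and passed to PCA in the pipeline described above.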
Abstract: In this paper, we propose a novel approach for image
segmentation via fuzzification of the Rényi Entropy of Generalized
Distributions (REGD). The fuzzy REGD is used to precisely measure
the structural information of an image and to locate the optimal
threshold desired for segmentation. The proposed approach draws
upon the postulation that the optimal threshold concurs with
maximum information content of the distribution. The contributions
of the paper are as follows: first, the fuzzy REGD is introduced as a
measure of the spatial structure of an image. Then, we propose an
efficient entropic segmentation approach using the fuzzy REGD.
Although the proposed approach belongs to the family of entropic
segmentation approaches, which are commonly applied to grayscale
images, it is adapted to be viable for segmenting color images.
Lastly, diverse experiments on real images that show the superior
performance of the proposed method are carried out.
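To make the entropic-thresholding idea concrete, here is the classical (non-fuzzy) Rényi-entropy threshold selection on a grayscale histogram: the threshold maximizing the sum of the Rényi entropies of the two classes. This is a simplified stand-in for the paper's fuzzy REGD criterion, and α = 0.5 is an arbitrary choice.

```python
import numpy as np

def renyi_threshold(hist, alpha=0.5):
    """Pick t maximizing H_alpha(object) + H_alpha(background) for a
    grayscale histogram. Non-fuzzy sketch, not the paper's fuzzy REGD."""
    p = hist / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p) - 1):
        p1, p2 = cum[t], 1.0 - cum[t]
        if p1 <= 0 or p2 <= 0:
            continue
        a = np.sum((p[:t + 1] / p1) ** alpha)   # object-class term
        b = np.sum((p[t + 1:] / p2) ** alpha)   # background-class term
        h = (np.log(a) + np.log(b)) / (1.0 - alpha)
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Bimodal toy histogram with modes near gray levels 50 and 200.
bins = np.arange(256)
hist = np.exp(-(bins - 50) ** 2 / 200.0) + np.exp(-(bins - 200) ** 2 / 200.0)
t = renyi_threshold(hist)
```

On a bimodal histogram the maximizer lands between the two modes, which is the behavior the fuzzified criterion refines.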
Abstract: The technology of computer graphics and digital cameras is
now prevalent, and high-resolution displays and printers are widely
available. High-resolution images are therefore needed in order to
produce high-quality display images and high-quality prints. However,
since high-resolution images are not usually
provided, there is a need to magnify the original images. One
common difficulty in previous magnification techniques is preserving
details, i.e. edges, while at the same time smoothing the data so as
not to introduce spurious artefacts. A definitive solution to this
is still an open issue. In this paper an image magnification using
adaptive interpolation by pixel level data-dependent geometrical
shapes is proposed that tries to take into account information about
the edges (sharp luminance variations) and smoothness of the image.
It calculates a threshold, classifies the interpolation region in the
form of geometrical shapes, and then assigns suitable values to the
undefined pixels inside the interpolation region, preserving the
sharp luminance variations and smoothness at the same time.
The results of the proposed technique have been compared qualitatively
and quantitatively with five other techniques. The qualitative
results show that the proposed method clearly outperforms Nearest
Neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while the
quantitative results are competitive and consistent with NN, BL, BC
and the others.
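For reference, the bilinear (BL) baseline that the proposed method is compared against can be written compactly; this is just the standard textbook interpolant, not anything specific to the paper.

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Classical bilinear magnification of a 2-D grayscale image."""
    h, w = img.shape
    H, W = h * factor, w * factor
    # Output pixel coordinates mapped back into input space.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    tl = img[np.ix_(y0, x0)]; tr = img[np.ix_(y0, x1)]
    bl = img[np.ix_(y1, x0)]; br = img[np.ix_(y1, x1)]
    top = tl * (1 - wx) + tr * wx          # interpolate along x
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy       # then along y

img = np.array([[0.0, 1.0], [1.0, 2.0]])
big = bilinear_upscale(img, 2)
```

Bilinear interpolation smooths everywhere, including across edges, which is exactly the detail-blurring the adaptive, shape-aware method above is designed to avoid.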
Abstract: Two algorithms are proposed to reduce the storage requirements of mammogram images. The input image goes through a shrinking process that converts the 16-bit images to 8 bits by using a pixel-depth conversion algorithm, followed by an enhancement process. The performance of the algorithms is evaluated objectively and subjectively. A 50% reduction in size is obtained with no loss of significant data in the breast region.
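A plausible sketch of the pixel-depth conversion step is simple min-max windowing of the 16-bit dynamic range into 8 bits; the paper's exact mapping may well differ, so treat this as an illustration only.

```python
import numpy as np

def to_8bit(img16, lo=None, hi=None):
    """Window a 16-bit image into 8 bits (min-max window by default)."""
    img = img16.astype(np.float64)
    lo = img.min() if lo is None else lo
    hi = img.max() if hi is None else hi
    scaled = (img - lo) / max(hi - lo, 1e-12)      # normalize to [0, 1]
    return np.clip(np.round(scaled * 255.0), 0, 255).astype(np.uint8)

img16 = np.array([[0, 1000], [30000, 65535]], dtype=np.uint16)
img8 = to_8bit(img16)
```

Halving the pixel depth alone accounts for the 50% size reduction reported above; the enhancement step would then compensate for the reduced dynamic range in the breast region.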
Abstract: Two-dimensional (2D) bar codes were designed to
carry significantly more data with higher information density and
robustness than their 1D counterparts. Thanks to the popular
combination of cameras and mobile phones, using the camera phone for
2D bar code reading naturally brings great commercial value.
This paper addresses the problem of specific 2D bar code
design for mobile phones and introduces a low-level encoding
method of matrix codes. At the same time, we propose an efficient
scheme for 2D bar codes decoding, of which the effort is put on
solutions of the difficulties introduced by low image quality that is
very common in bar code images taken by a phone camera.
Abstract: There are few studies on the eggshell of the leatherback
turtle, an endangered species in Thailand. This study focused on
the ultrastructure and elemental composition of leatherback turtle
eggshells collected from the Andaman Sea shore, Thailand, during the
nesting season, using scanning electron microscopy (SEM). Three
eggshell layers of the leatherback turtle were recognized: the outer
cuticle or calcareous layer, the middle multistrata layer, and the
inner fibrous layer. The outer calcareous layer was thick and porous,
consisting of loose nodular units of various crystal shapes and sizes;
the loose attachment between these units resulted in numerous spaces
and openings. The middle layer was compact and thick, with several
strata, and contained numerous openings connecting to both the outer
cuticle layer and the inner fibrous layer. The inner fibrous layer was
compact and thin, and composed of numerous reticular fibers.
Energy-dispersive X-ray microanalysis revealed the characteristic
X-ray spectra emitted by the elements in each layer.
The percentages of all elements were
found in the following order: carbon (C) > oxygen (O) > calcium
(Ca) > sulfur (S) > potassium (K) > aluminum (Al) > iodine (I) >
silicon (Si) > chlorine (Cl) > sodium (Na) > fluorine (F) >
phosphorus (P) > magnesium (Mg). Each layer consisted of a high
percentage of CaCO3 (approximately 98%), implying that it is
essential for turtle embryonic development. A significant difference
was found in the percentages of Ca and Mo in the three layers. Moreover,
transition-metal, metal and toxic non-metal contaminants were
found in the leatherback turtle eggshell samples. These were palladium
(Pd), molybdenum (Mo), copper (Cu), aluminum (Al), lead (Pb), and
bromine (Br). The contaminant elements were seen in the outer
layers, except for Mo. All elements were readily observed and
mapped using the Smiling program, and X-ray images mapping the
locations of all elements were presented. Calcium was present in the
eggshell in high content and was widely distributed in clusters in the
outer cuticle layer, forming the CaCO3 structure. Moreover,
the accumulation of Na and Cl was observed to form NaCl, which was
widely distributed in the three eggshell layers. The results of this
study will be valuable for assessing emergence success in this
endangered species.
Abstract: The effect of different tempering temperatures and heat treatment times on the corrosion resistance of austenitic stainless steels (ASS) in oxalic acid was studied in this work using conventional weight-loss and electrochemical measurements. Typical 304 and 316 stainless steel samples were tempered at 150 °C, 250 °C and 350 °C after being austenitized at 1050 °C for 10 minutes. These samples were then immersed in 1.0 M oxalic acid and their weight losses were measured every five days for 30 days. The results show that corrosion of both types of ASS samples increased with an increase in tempering temperature and time, and this was due to the precipitation of chromium carbides at the grain boundaries of these metals. Electrochemical results also confirm that the 304 ASS is more susceptible to corrosion than the 316 ASS in this medium. This is attributed to the molybdenum in the composition of the latter. The metallographic images of these samples showed a non-uniform distribution of precipitated chromium carbides at the grain boundaries of these metals, as well as unevenly distributed carbides and retained austenite phases, which cause galvanic effects in the medium.
Abstract: A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coder implements a continuous-tone still-image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by means of a highly configurable, application-dependent alignment system that makes it possible to reconfigure the elements of the image and to obtain different levels of importance, from which the bit stream is generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream on its own codes a compressed still image. However, the use of a packing system on the bit stream after the VBLm allows the construction of a final highly scalable bit stream from a basic image level and one or several enhancement levels.
Abstract: This paper proposes a new approach to the
problem of real-time face detection. The proposed method combines
the primitive Haar-Like feature and a variance value to construct a new
feature, the so-called Variance-based Haar-Like feature. A face in an
image can be represented with a small number of features using this
new feature. We used SVM instead of AdaBoost for training and
classification. We made a database containing 5,000 face samples
and 10,000 non-face samples extracted from real images for learning
purposes. The 5,000 face samples include many images taken under
widely differing lighting conditions. Experiments showed that a
face detection system using the Variance-based Haar-Like feature and
SVM can be much more efficient than a face detection system using the
primitive Haar-Like feature and AdaBoost. We tested our method on
two face databases and one non-face database. We obtained a correct
detection rate of 96.17% on the YaleB face database, which is
4.21% higher than that obtained using the primitive Haar-Like feature
and AdaBoost.
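The two building blocks, a Haar-like rectangle feature and a window variance, can both be computed in constant time from integral images, as sketched below. How exactly the paper combines them into the Variance-based Haar-Like feature is not specified in the abstract, so returning them as a pair is an assumption.

```python
import numpy as np

def integral(img):
    """Summed-area table with a zero top/left border row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) from the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def variance_haar(img, r, c, h, w):
    """Two-rectangle (left minus right) Haar feature plus the window
    variance, computed from integral images of img and img**2."""
    ii, ii2 = integral(img), integral(img.astype(np.float64) ** 2)
    n = h * w
    mean = box_sum(ii, r, c, h, w) / n
    var = max(box_sum(ii2, r, c, h, w) / n - mean ** 2, 0.0)
    haar = box_sum(ii, r, c, h, w // 2) - box_sum(ii, r, c + w // 2, h, w // 2)
    return haar, var

img = np.ones((8, 8))                 # a uniform window: no edge, no texture
haar, var = variance_haar(img, 0, 0, 4, 4)
```

On a uniform window both quantities are zero, which is why adding variance information helps discriminate flat non-face regions that a plain Haar feature cannot.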
Abstract: The rapid development of new technologies and the
emergence of ever more sophisticated open communication systems
create a new challenge: protecting digital content from
piracy. Digital watermarking is a recent research axis and a
technique suggested as a solution to these problems. This technique
consists in inserting identification information (watermark) into
digital data (audio, video, image, databases, ...) in an invisible and
indelible manner, in such a way as not to degrade the original medium's
quality. Moreover, we must be able to correctly extract the
watermark despite deterioration of the watermarked medium (i.e.,
attacks). In this paper we propose a system for watermarking satellite
images. We chose to embed the watermark in the frequency domain,
specifically the discrete wavelet transform (DWT) domain. We applied our
algorithm to satellite images of central Tunisia. The experiments
show satisfying results. In addition, our algorithm showed
strong resistance to different attacks, notably compression
(JPEG, JPEG2000), filtering, histogram manipulation and
geometric distortions such as rotation, cropping and scaling.
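A minimal sketch of DWT-domain embedding: a one-level Haar transform written out by hand, with non-blind additive embedding of a binary watermark in the HL band. The strength α = 4, the choice of band, and the non-blind extraction rule are all assumptions; the paper's actual embedding scheme is not specified in the abstract.

```python
import numpy as np

def haar2(x):
    """One-level orthonormal 2-D Haar DWT of an even-sized image."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2; lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2; hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 (perfect reconstruction)."""
    h, w = ll.shape
    x = np.zeros((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def embed(img, wmark, alpha=4.0):
    """Additive embedding of binary wmark in the HL sub-band."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, lh, hl + alpha * wmark, hh)

def extract(marked, original, alpha=4.0):
    """Non-blind extraction: compare HL bands of marked vs. original."""
    hl_m = haar2(marked)[2]; hl_o = haar2(original)[2]
    return ((hl_m - hl_o) / alpha > 0.5).astype(int)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (8, 8))
bits = rng.integers(0, 2, (4, 4))
marked = embed(img, bits)
recovered = extract(marked, img)
```

Embedding in a mid-frequency band like HL is a common compromise between invisibility and robustness to compression, which is consistent with the JPEG/JPEG2000 resistance reported above.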
Abstract: In recent years, interest in efficient tracking systems for surveillance applications has increased. Many of the proposed techniques are designed for static-camera environments. When the camera is moving, tracking moving objects becomes more difficult and many techniques fail to detect and track the desired targets. The problem becomes more complex when we want to track a specific object in real time using a moving pan-and-tilt camera system to keep the target within the image. This type of tracking is of high importance in surveillance applications. When a target is detected in a certain zone, the ability to track it automatically and continuously, keeping it within the image until action is taken, is very important for security personnel working in very sensitive sites. This work presents a real-time tracking system permitting the detection and continuous tracking of targets using a pan-and-tilt camera platform. A novel and efficient approach for dealing with occlusions is presented. Also, a new intelligent forget factor is introduced in order to take into account target shape variations and avoid learning non-desired objects. Tests conducted in outdoor operational scenarios show the efficiency and robustness of the proposed approach.
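The intelligent forget factor idea, suppressing template adaptation when the current match is unreliable so the model does not absorb occluders or wrong objects, can be sketched in a few lines. The gating rule, threshold and rates below are illustrative guesses, not the paper's actual scheme.

```python
import numpy as np

def update_template(template, patch, match_score, base_rate=0.1, gate=0.6):
    """Exponential template update gated by an adaptive forget factor:
    when the match score is poor (likely occlusion or a non-desired
    object), learning is frozen; otherwise the rate scales with the
    score so confident matches adapt the model faster."""
    rate = base_rate * match_score if match_score >= gate else 0.0
    return (1.0 - rate) * template + rate * patch

tmpl = np.zeros((4, 4))
patch = np.ones((4, 4))
t1 = update_template(tmpl, patch, match_score=0.9)   # good match: adapts
t2 = update_template(tmpl, patch, match_score=0.3)   # poor match: frozen
```

Gating the update is what lets the tracker survive occlusions: during an occlusion the match score drops, the template stops learning, and the original appearance model is preserved until the target reappears.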
Abstract: Image data holds a large amount of diverse context
information. However, as of today, these resources remain largely
untapped. It is thus the aim of this paper to present a basic technical
framework which allows for a quick and easy exploitation of context
information from image data especially by non-expert users.
Furthermore, the proposed framework is discussed in detail
concerning important social and ethical issues which demand special
requirements in system design. Finally, a first sensor prototype is
presented which meets the identified requirements. Additionally,
necessary implications for the software and hardware design of the
system are discussed, yielding a sensor system that can be
regarded as a good, acceptable and justifiable technical solution,
thereby enabling the extraction of context information from image data.
Abstract: Soil salinity is one of the main environmental problems affecting extensive areas of the world. Traditional data collection methods are neither sufficient for addressing this important environmental problem nor accurate enough for soil studies. Remote sensing data could overcome most of these problems. Although satellite images are commonly used for such studies, there is still a need to find the best calibration between the data and the real situation in each specific area. The Neyshaboor area, in the north-east of Iran, was selected as the field study area of this research. Landsat satellite images of this area were used in order to prepare suitable learning samples for processing and classifying the images. 300 locations were selected randomly in the area for collecting soil samples, and 273 locations were finally reselected for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images of the study area taken in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that reflective bands 7, 3, 4 and 1 are the best band composition for preparing the color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.
Abstract: The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
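The SVM-as-detector idea can be illustrated with a self-contained toy: a minimal linear SVM trained by the Pegasos subgradient method (a stand-in for a full SVM solver) learns the OOK decision rule from noisy received samples over a plain AWGN channel. The amplitude, noise level and hyperparameters are assumptions, and the fading/scattering channels studied in the paper are not modeled here.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Tiny linear SVM via the Pegasos subgradient method. Labels y must
    be in {-1, +1}; the bias is handled by an appended constant feature."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    rng = np.random.default_rng(seed)
    w = np.zeros(Xa.shape[1]); t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xa)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (Xa[i] @ w)
            w *= (1.0 - eta * lam)            # shrink (regularization step)
            if margin < 1.0:                  # hinge-loss subgradient step
                w += eta * y[i] * Xa[i]
    return w

# On-Off keying over AWGN: received sample r = A*bit + noise.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 2000)
r = 2.0 * bits + rng.normal(0.0, 0.5, bits.size)      # A=2, sigma=0.5 (assumed SNR)
X, y = r.reshape(-1, 1), 2 * bits - 1
w = pegasos_svm(X[:1000], y[:1000])                   # train on first half
pred = (np.hstack([X[1000:], np.ones((1000, 1))]) @ w > 0).astype(int)
ber = np.mean(pred != bits[1000:])                    # BER on held-out bits
```

On symmetric AWGN the learned boundary approaches the matched-filter threshold, which matches the observation above that SVM and classical detectors converge at large SNR; the SVM's advantage appears on channels whose statistics the matched filter does not model.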
Abstract: Speed estimation is one of the important and practical tasks in machine vision, robotics and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms. Numerous approaches for speed estimation have been proposed, so a classification and survey of the proposed methods can be very useful. The goal of this paper is first to review and verify these methods. We then propose a novel algorithm to estimate the speed of a moving object using fuzzy concepts. There is a direct relation between the motion blur parameters and the object speed. In our new approach, we use the Radon transform to find the direction of the blur in the image, and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision in noisy images. Our method was tested on many images over a wide range of SNR, with satisfactory results.
Abstract: The need to evaluate and understand the natural
drainage pattern in a flood-prone and fast-developing environment is
of paramount importance. This information will go a long way to
help the town planners to determine the drainage pattern, road
networks and areas where prominent structures are to be located. This
research work was carried out with the aim of studying the Bayelsa
landscape topography using digitized topographic information and of
modeling the natural drainage flow pattern, which will aid the
understanding and construction of workable drainages. To achieve
this, digitized elevation and coordinate-point information was
extracted from a global imagery map. The extracted information was
modeled into 3D surfaces. The result revealed that the average
elevation for Bayelsa State is 12 m above sea level. The highest
elevation is 28 m, and the lowest elevation 0 m, along the coastline.
In Yenagoa, the capital city of Bayelsa, where a detailed survey was
carried out, the average elevation is 15 m, the highest
elevation is 25 m and the lowest is 3 m above mean sea level. The
regional elevation in Bayelsa showed a gradual decrease from the
North Eastern zone to the South Western zone. Yenagoa showed an
elevation lineament in which a low depression is flanked by high
elevations running from the North East to the South West. Hence,
future drainages in Yenagoa should be directed from the high
elevations, from the South East toward the North West and from the
North West toward the South East, to the point of convergence at the
center, which flows from the South East toward the North West.
When Bayelsa is considered on a regional scale, the flow pattern is from
the North East to the South West, and also North to South. It is
recommended that, in the event of any large drainage construction at
municipal scale, it be directed from North East to South
West or from North to South. Secondly, a detailed survey should be
carried out to ascertain the local topography and drainage pattern
before the design and construction of any drainage system in any part
of Bayelsa.
Abstract: This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries in image data. This research describes a new indexing approach that depends on linear modeling of signals, using bases for modeling. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme and presents the findings of our extensive evaluation, which was conducted to optimize (1) the choice of the basis matrix (B), and (2) the size of the index A (N). Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme has significantly higher performance.
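The indexing step itself is a plain least-squares fit and can be sketched directly. The basis here is random for illustration; the paper's evaluation is precisely about how the basis matrix B and the index size N should actually be chosen.

```python
import numpy as np

def make_index(image, basis):
    """Index of an image = least-squares coefficients of its flattened
    pixels as a linear combination of the basis images.
    basis has shape (n_pixels, n_basis), one flattened image per column."""
    coeffs, *_ = np.linalg.lstsq(basis, image.ravel(), rcond=None)
    return coeffs

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 4))            # 4 basis images of 8x8 pixels
truth = np.array([1.0, -2.0, 0.5, 3.0])
img = (B @ truth).reshape(8, 8)         # an image lying exactly in the span
idx = make_index(img, B)
```

Queries then compare these short coefficient vectors instead of full images, which is what makes the search fast: the index length is the number of basis images, not the number of pixels.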
Abstract: This paper proposes a method of adaptively generating a gait pattern for a biped robot. The gait synthesis is based on analysis of the human gait pattern. The proposed method can easily be applied to generate a natural and stable gait pattern for any biped robot. To analyze the human gait pattern, sequential images of the human gait in the sagittal plane are acquired, from which the gait control values are extracted. The gait pattern of the biped robot in the sagittal plane is adaptively generated by a genetic algorithm using the human gait control values. However, gait trajectories of the biped robot in the sagittal plane are not enough to construct the complete gait pattern, because the biped robot moves in 3-dimensional space. Therefore, the gait pattern in the frontal plane, generated from the Zero Moment Point (ZMP), is added to the one acquired in the sagittal plane. Consequently, a natural and stable walking pattern for the biped robot is obtained.
Abstract: In current common research reports, salient regions
are usually defined as those regions that could present the main
meaningful or semantic contents. However, there are no uniform
saliency metrics that could describe the saliency of implicit image
regions. Most common metrics treat as salient those regions that
have many abrupt changes or some unpredictable
characteristics, but such metrics fail to detect salient, useful
regions with flat textures. In fact, according to human semantic
perception, color and texture distinctions are the main characteristics
that distinguish different regions. Thus, we present a novel saliency
metric coupled with color and texture features, and its corresponding
salient region extraction methods. In order to evaluate the
corresponding saliency values of implicit regions in one image, three
main colors and multi-resolution Gabor features are respectively used
for the color and texture features. For each region, the saliency value
is computed as the total sum of its Euclidean distances to the other
regions in the color and texture spaces. A special synthesized image
and several practical images with main salient regions are used to
evaluate the performance of the proposed saliency metric and other
several common metrics, i.e., scale saliency, wavelet transform
modulus maxima point density, and important index based metrics.
Experimental results verified that the proposed saliency metric
achieves more robust performance than the common saliency
metrics.
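The proposed saliency value, the sum of a region's Euclidean distances to all other regions in the joint feature space, can be computed in a few lines of numpy. The 2-D feature vectors here are toy stand-ins for the actual descriptors (three main colors plus multi-resolution Gabor responses).

```python
import numpy as np

def region_saliency(features):
    """features: (n_regions, d) array of per-region color/texture vectors.
    Saliency of a region = sum of its Euclidean distances to all others."""
    diff = features[:, None, :] - features[None, :, :]   # pairwise differences
    dist = np.sqrt((diff ** 2).sum(-1))                  # pairwise distances
    return dist.sum(axis=1)

# Three similar regions and one outlier: the outlier is the most salient,
# even though all four regions could have perfectly flat textures.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
sal = region_saliency(feats)
```

Because saliency depends only on feature distances between regions, not on local gradient activity, a flat-textured region of distinctive color still scores high, which is exactly the failure case of the abrupt-change metrics discussed above.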
Abstract: The segmentation of endovascular tools in fluoroscopy images can be performed accurately, automatically or with minimal user intervention, using known modern techniques; this has been proven in the literature, but no clinical implementation exists so far because the computational time requirements of such technology have not yet been met. A classical segmentation scheme is composed of edge-enhancement filtering, line detection, and segmentation. A new method is presented that consists of a vector that propagates in the image to track an edge as it advances. The filtering is performed progressively along the projected path of the vector, whose orientation allows for oriented edge detection, and only a minimal image area is filtered in total. Such an algorithm is rapidly computed and can be implemented in real-time applications. It was tested on medical fluoroscopy images from an endovascular cerebral intervention. Experiments showed that the 2D tracking was limited to guidewires without intersection crosspoints, while the 3D implementation was able to cope with such planar difficulties.
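The propagating-vector idea, examining only a tiny neighborhood along the projected path instead of filtering the whole image, can be caricatured in a few lines. This greedy ridge follower (brightest pixel among three forward candidates, fixed propagation direction) is an illustration under assumed conventions, not the authors' algorithm.

```python
import numpy as np

def track_ridge(img, start, direction, n_steps):
    """Greedy propagating tracker: at each step move one pixel roughly
    along `direction`, picking the brightest of three forward neighbors
    (the local ridge). Only this tiny neighborhood is ever examined,
    mimicking filtering restricted to the vector's projected path."""
    r, c = start
    dr, dc = direction
    path = [(r, c)]
    for _ in range(n_steps):
        # Candidate steps: straight ahead plus the two adjacent diagonals.
        if dr == 0:
            cands = [(r - 1, c + dc), (r, c + dc), (r + 1, c + dc)]
        else:
            cands = [(r + dr, c - 1), (r + dr, c), (r + dr, c + 1)]
        cands = [(y, x) for y, x in cands
                 if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]]
        if not cands:
            break
        r, c = max(cands, key=lambda p: img[p])   # follow the ridge
        path.append((r, c))
    return path

# Synthetic bright diagonal guidewire on a dark background.
img = np.zeros((16, 16))
for i in range(16):
    img[i, i] = 1.0
path = track_ridge(img, start=(0, 0), direction=(1, 0), n_steps=10)
```

A greedy 2-D follower like this has no way to disambiguate two curves crossing at a point, which is consistent with the limitation at intersection crosspoints reported above and with the need for the 3-D implementation.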