Abstract: Automatic Vehicle Identification (AVI) has many
applications in traffic systems (highway electronic toll collection, red
light violation enforcement, border and customs checkpoints, etc.).
License Plate Recognition (LPR) is an effective form of AVI. In
this study, a smart and simple algorithm is presented for a vehicle
license plate recognition system. The proposed algorithm consists of
three major parts: extraction of the plate region, segmentation of
the characters, and recognition of the plate characters. For
extracting the plate region, edge detection and smearing algorithms
are used. In the segmentation part, smearing, filtering, and
morphological algorithms are used. Finally, statistics-based
template matching is used for recognition of the plate characters. The
performance of the proposed algorithm has been tested on real
images. The experimental results show that the algorithm achieves
superior performance in car license plate recognition.
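The recognition step above can be sketched as statistical template matching: each segmented character bitmap is compared against stored templates and the best-scoring label wins. The templates and the character grid below are hypothetical toy data, not the authors' implementation.

```python
# Illustrative sketch of statistics-based template matching over small
# binary character grids (toy templates, not the paper's actual data).

def match_score(char, template):
    """Fraction of pixels on which a character bitmap agrees with a template."""
    flat_c = [p for row in char for p in row]
    flat_t = [p for row in template for p in row]
    agree = sum(1 for c, t in zip(flat_c, flat_t) if c == t)
    return agree / len(flat_c)

def recognize(char, templates):
    """Return the label of the template with the highest match score."""
    return max(templates, key=lambda label: match_score(char, templates[label]))

# Hypothetical 3x3 binary templates for two characters.
TEMPLATES = {
    "I": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
}

unknown = [[0, 1, 0],
           [0, 1, 0],
           [0, 1, 0]]
print(recognize(unknown, TEMPLATES))  # "I"
```

In practice the segmented characters would be normalized to the template size before scoring.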
Abstract: Several methods have been proposed for color image
compression, but the reconstructed images suffered from a very low
signal-to-noise ratio, which made them inefficient. This paper
describes a lossy compression technique for color images that
overcomes this drawback. The technique works in the spatial domain,
where the pixel values of the RGB planes of the input color image
are mapped onto two-dimensional planes. The proposed technique
produced better results than JPEG2000 and 2DPCA, and a comparative
study is reported based on image quality measures such as PSNR and
MSE. Experiments on real images compare this methodology with
previous ones and demonstrate its advantages.
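The two quality measures named above, MSE and PSNR, can be sketched as follows for 8-bit images represented as flat pixel lists (illustrative values, not data from the paper):

```python
# Sketch of the MSE and PSNR image quality measures for 8-bit pixel data.
import math

def mse(original, reconstructed):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / err)

ref = [52, 55, 61, 66]   # hypothetical original pixels
rec = [54, 55, 60, 66]   # hypothetical reconstructed pixels
print(round(psnr(ref, rec), 2))  # 47.16
```

Higher PSNR (lower MSE) indicates a reconstruction closer to the original.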
Abstract: Content-based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image based on matching of features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A Java-based query engine supporting query-by-example is built to retrieve images by color.
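The similarity-retrieval step above can be sketched with a toy database in which each image has already been reduced to one representative RGB color (in the paper, a mean-shift cluster mode); the query color is then matched by Euclidean distance. File names and colors here are hypothetical.

```python
# Sketch of representative-color retrieval by Euclidean distance in RGB space.
import math

def color_distance(c1, c2):
    """Euclidean distance between two RGB triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def retrieve(query_color, database, k=2):
    """Return the k image ids whose representative color is nearest the query."""
    ranked = sorted(database, key=lambda item: color_distance(query_color, item[1]))
    return [image_id for image_id, _ in ranked[:k]]

db = [("sunset.jpg", (230, 120, 40)),
      ("forest.jpg", (30, 140, 50)),
      ("beach.jpg", (210, 180, 140))]
print(retrieve((240, 110, 30), db, k=1))  # ['sunset.jpg']
```

A real system would replace the linear scan with an R*-tree range query, which is the efficiency gain the abstract describes.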
Abstract: The development of many measurement and inspection systems for products based on real-time image processing cannot be carried out entirely in a laboratory, due to the size or the temperature of the manufactured products. Those systems must be developed in successive phases. Firstly, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only after a large set of product images is available can the development of the real-time image processing algorithms for measurement or inspection of the products be accomplished under realistic conditions. Finally, the recording service is turned off or eliminated, and the system operates only with the real-time services for the acquisition and processing of the images. This article presents a systematic performance evaluation of the image compression algorithms currently available to implement a real-time recording service. The results allow a trade-off to be established between the reduction or compression of the image size and the CPU time required to reach that compression level.
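As an illustrative sketch of such a trade-off measurement (using stdlib zlib rather than the image codecs evaluated in the article), compressing the same synthetic buffer at different effort levels shows how a higher compression level trades CPU work for a smaller output:

```python
# Sketch of a size-vs-effort measurement loop over zlib compression levels.
# The real evaluation would also time each call and use actual image data.
import zlib

data = bytes(range(256)) * 64  # 16 KiB of synthetic "image" data

for level in (1, 6, 9):
    size = len(zlib.compress(data, level))
    print(f"level {level}: {size} bytes")
```

Wrapping each `zlib.compress` call with a timer (e.g. `time.perf_counter`) would yield the size/CPU-time pairs from which the trade-off curve is built.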
Abstract: Buildings are a significant part of cities and play a main
role in the organization and arrangement of the city's appearance,
which in turn affects the city image. Building facades, as a
connective element between inner and outer space, have a main role
in the city image, and people evaluate them as rich or poor images
according to the visual architectural and urban elements of the
facades. The buildings on Karimi Street in Lahijan, a city in the
north of Iran, contain a variety of facade types that have shaped
the city image in the historical part of Lahijan while reflecting
the identity of Iranian cities. This study attempts to identify the
architectural and urban elements that influence the image of
building facades in the historical area, based on public evaluation.
A quantitative method was used, and the data were collected through
a questionnaire survey. The results show that architectural style,
color, shape, and design were evaluated by people as the most
important factors, which should be understood in future development.
In fact, a rich architectural style with strong design makes a
strong city image, while weak design makes a poor city image.
Abstract: The nexus between language and culture is so
intimate and significant that language is largely seen as a
vehicle for cultural transmission. Culture itself refers to the aggregate
belief system of a people, embellishing its corporate national image
or brand. If we conceive national rebranding as a campaign to
rekindle the patriotic flame in the consciousness of a people towards
its sociocultural imperatives and values, then, Nigerian indigenous
linguistic flame has not been ignited. Consequently, the paper
contends that the current national rebranding policy remains a myth
in the confines of the elitists' intellectual squabble. It however
recommends that the use of our indigenous languages should be
supported by adequate legislation and also propagated by Nollywood
in order to revamp and sustain the people’s interest in their local
languages. Finally, the use of the indigenous Nigerian languages
demonstrates patriotism, an important ingredient for actualizing a
genuine national rebranding.
Abstract: The notion of S-fuzzy left h-ideals in a hemiring is introduced and its basic properties are investigated. We also study the homomorphic image and preimage of S-fuzzy left h-ideals of hemirings. Using a collection of left h-ideals of a hemiring, S-fuzzy left h-ideals of hemirings are established. The notion of a finite-valued S-fuzzy left h-ideal is introduced, and its characterization is given. S-fuzzy relations on hemirings are discussed. The notions of the direct product and the S-product are introduced, and some properties of the direct product and S-product of S-fuzzy left h-ideals of hemirings are also discussed.
Abstract: Gesture recognition for communication between human
and computer in interactive computing environments is being
studied vigorously. Many studies have therefore proposed efficient
recognition algorithms that use images captured by a 2D camera.
However, these methods share a limitation: the extracted features
cannot fully represent the object in the real world. Although many
studies have used 3D features instead of 2D features for more
accurate gesture recognition, problems such as the processing time
needed to generate 3D objects remain unsolved in related research.
We therefore propose a method to extract 3D features combined with
3D object reconstruction. The method uses a modified GPU-based
visual hull generation algorithm that disables unnecessary
processes, such as texture calculation, to generate three kinds
of 3D projection maps as the 3D features: the nearest boundary, the
farthest boundary, and the thickness of the object projected on the
base plane. In the experimental results section, we present results
of the proposed method on eight human postures: T shape, both hands
up, right hand up, left hand up, hands front, stand, sit, and bend,
and compare the computational time of the proposed method with that
of previous methods.
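The three projection maps described above can be sketched from a binary voxel occupancy grid: for each base-plane cell, record the nearest occupied depth, the farthest occupied depth, and their difference. The toy grid below is illustrative, not data from the paper.

```python
# Sketch of the three 3D projection maps (nearest boundary, farthest
# boundary, thickness) computed from a binary voxel grid grid[x][y][z].

def projection_maps(grid):
    """Per (x, y) cell: nearest occupied z, farthest occupied z, thickness."""
    nearest, farthest, thickness = [], [], []
    for x_slice in grid:
        n_row, f_row, t_row = [], [], []
        for column in x_slice:
            occupied = [z for z, v in enumerate(column) if v]
            if occupied:
                n_row.append(min(occupied))
                f_row.append(max(occupied))
                t_row.append(max(occupied) - min(occupied) + 1)
            else:  # empty column: no surface, zero thickness
                n_row.append(-1)
                f_row.append(-1)
                t_row.append(0)
        nearest.append(n_row)
        farthest.append(f_row)
        thickness.append(t_row)
    return nearest, farthest, thickness

grid = [[[0, 1, 1, 0]], [[0, 0, 0, 0]]]  # toy 2x1x4 volume
n, f, t = projection_maps(grid)
print(n, f, t)  # [[1], [-1]] [[2], [-1]] [[2], [0]]
```

The GPU version in the paper would compute these maps during visual hull generation rather than in a post-pass.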
Abstract: In this paper, the computation of the electric field distribution around AC high-voltage lines is demonstrated. The advantages and disadvantages of two different methods for evaluating the electric field quantity are described. The first is a semi-numerical method using the laws of electrostatics to simulate the two-dimensional electric field under the high-voltage overhead line. The second method discussed is the finite element method (FEM), which uses specific boundary conditions to compute the two-dimensional electric field distributions in an efficient way.
Abstract: This paper gives a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the pixels of the current frame, which contains both foreground and background. Third, the traffic lanes are divided into two parts, one for each direction of travel, for parallel processing. Finally, the speeds of the vehicles are estimated by a maximum a posteriori (MAP) estimator. True ground speed is determined using infrared sensors for three different vehicles, and the results are compared with those of the proposed algorithm, which achieves an accuracy of ±0.74 km/h.
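The background-estimation step above can be sketched as a per-pixel temporal median over a window of frames: pixels briefly covered by a passing vehicle are outvoted by the background values. The tiny frames below are illustrative only.

```python
# Sketch of background estimation by per-pixel temporal median over frames.
import statistics

def estimate_background(frames):
    """Per-pixel median over a list of equal-sized grayscale frames."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[statistics.median(f[r][c] for f in frames)
             for c in range(cols)]
            for r in range(rows)]

# Three 1x3 frames: a bright "vehicle" (value 200) passes over a
# static background of 10 / 20 / 30.
frames = [[[10, 200, 30]],
          [[10, 20, 30]],
          [[200, 20, 30]]]
print(estimate_background(frames))  # [[10, 20, 30]]
```

Because the median ignores transient outliers, the recovered frame matches the static background despite the moving object.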
Abstract: The purpose of this research is to compare the original
intra-oral digital dental radiograph images with images that are
enhanced using a combination of image processing algorithms. Intra-oral
digital dental radiograph images are often noisy, have blurred
edges, and are low in contrast. A combination of sharpening and
enhancement methods is used to overcome these problems. The three
proposed compound algorithms are Sharp Adaptive Histogram
Equalization (SAHE), Sharp Median Adaptive Histogram Equalization
(SMAHE), and Sharp Contrast-Limited Adaptive Histogram Equalization
(SCLAHE). This paper presents an initial study of the perception of
six dentists regarding the details of abnormal pathologies and the
improvement of image quality in ten intra-oral radiographs. The
research focuses on the detection of only three types of pathology:
periapical radiolucency, widened periodontal ligament space, and
loss of lamina dura. The overall results show that SCLAHE slightly
improves the appearance of dental abnormalities over the original
image and also outperforms the other two proposed compound
algorithms.
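Plain global histogram equalization, the building block behind the AHE/CLAHE variants named above, can be sketched as follows (8-bit grayscale pixels as a flat list; illustrative only, not the paper's compound pipeline):

```python
# Sketch of global histogram equalization for 8-bit grayscale pixels.

def equalize(pixels, levels=256):
    """Spread pixel intensities over the full range via the cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF value
    n = len(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

print(equalize([50, 50, 100, 200]))  # [0, 0, 128, 255]
```

The adaptive variants apply this mapping per local tile, and CLAHE additionally clips the histogram to limit contrast amplification (and hence noise).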
Abstract: At present, increased concerns about global
environmental problems have magnified the importance of
sustainability management. To move towards sustainability,
companies need to look at everything from a holistic perspective in
order to understand the interconnections between economic growth
and environmental and social sustainability. This paper aims to gain
an understanding of key determinants that drive sustainability
management and barriers that hinder its development. It employs
semi-structured interviews with key informants, site observation and
documentation. The informants are production, marketing and
environmental managers of a leading wine producer, which aims to
become Asia's leader in wine and wine-based products. It is found
that corporate image and top management leadership are the primary
factors influencing the adoption of sustainability management. Lack
of environmental knowledge and inefficient communication are
identified as barriers.
Abstract: Image watermarking has proven to be quite an
efficient tool for the purpose of copyright protection and
authentication over the last few years. In this paper, a novel image
watermarking technique in the wavelet domain is suggested and
tested. To achieve more security and robustness, the proposed
technique relies on two nested watermarks that are embedded into
the image to be watermarked. A primary watermark in the form of a
PN sequence is first embedded into an image (the secondary
watermark) before the latter is embedded into the host image. The
technique is implemented using Daubechies mother wavelets where
an arbitrary embedding factor α is introduced to improve the
invisibility and robustness. The proposed technique has been applied
on several gray scale images where a PSNR of about 60 dB was
achieved.
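The embedding rule above can be sketched as adding an α-scaled ±1 watermark sequence to wavelet detail coefficients. The paper uses Daubechies wavelets; a one-level 1-D Haar transform is substituted here purely for brevity, and the signal values are hypothetical.

```python
# Simplified sketch of additive wavelet-domain watermark embedding with
# an embedding factor alpha (1-D Haar stands in for the Daubechies DWT).

def haar_1d(signal):
    """One-level Haar transform: (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Reconstruct the signal from one-level Haar coefficients."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def embed(signal, watermark_bits, alpha=0.5):
    """Add alpha-scaled +/-1 watermark symbols to the detail coefficients."""
    approx, detail = haar_1d(signal)
    marked = [d + alpha * (1 if b else -1)
              for d, b in zip(detail, watermark_bits)]
    return inverse_haar_1d(approx, marked)

watermarked = embed([100, 102, 98, 96], [1, 0], alpha=0.5)
print(watermarked)  # [100.5, 101.5, 97.5, 96.5]
```

Larger α strengthens the embedded mark (robustness) at the cost of visibility, which is the trade-off the abstract describes.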
Abstract: This paper presents a simulation of tomographic image
reconstruction for defect detection in a specimen. The specimen is a
thin cylindrical steel shell containing low-density material. The
defects in the material are simulated in three shapes. The specimen
image function is transformed into projection data. The Radon
transform and its inverse provide the mathematical basis for
reconstructing tomographic images from projection data. The
simulation results show that the reconstructed images are adequate
for defect detection.
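Forming projection data can be sketched for the two axis-aligned angles: summing a 2-D image along its rows and columns gives the 0° and 90° Radon projections. A full simulation would sample many angles and invert them; the phantom below is a toy example.

```python
# Sketch of Radon projection data at 0 and 90 degrees: line sums of a
# 2-D density image along rows and columns.

def project_rows(image):
    """0-degree projection: sum of each row."""
    return [sum(row) for row in image]

def project_cols(image):
    """90-degree projection: sum of each column."""
    return [sum(col) for col in zip(*image)]

phantom = [[0, 0, 0],
           [0, 5, 0],   # a small "defect" of density 5
           [0, 0, 0]]
print(project_rows(phantom), project_cols(phantom))  # [0, 5, 0] [0, 5, 0]
```

The inverse Radon transform (e.g. filtered back-projection) recovers the image from a dense set of such projections.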
Abstract: This paper presents the use of a newly created network
structure known as a Self-Delaying Dynamic Network (SDN) to
create a high resolution image from a set of time stepped input
frames. These SDNs are non-recurrent temporal neural networks
which can process time sampled data. SDNs can store input data
for a lifecycle and feature dynamic logic based connections between
layers. Several low resolution images and one high resolution image
of a scene were presented to the SDN during training by a Genetic
Algorithm. The SDN was trained to process the input frames in order
to recreate the high resolution image. The trained SDN was then used
to enhance a number of unseen noisy image sets. The quality of the
high-resolution images produced by the SDN is compared to that of
high-resolution images generated using bicubic interpolation. The
SDN-produced images are superior in several ways to the images
produced using bicubic interpolation.
Abstract: The objectives of this research are to produce
prototype coconut oil based solvent offset printing inks and to
analyze the basic quality of printing work produced with these
inks, by using coconut oil to produce a varnish and using that
varnish to produce black offset printing ink. Qualities such as the
CIELAB value, density value, and dot gain value of print work made
with the coconut oil based inks on gloss-coated woodfree paper of
130 g/m² were then analyzed. The results for the coconut oil based
solvent offset printing inks indicated that a suitable varnish
formulation uses 51% coconut oil, 36% phenolic resin, and 14%
solvent oil, while the black offset ink results showed that a
suitable printing ink formula uses the varnish mixed with 20%
coconut oil. For the printing work printed on paper, the results
were as follows: the CIELAB value of the black offset printing ink
is L* = 31.90, a* = 0.27, and b* = 1.86; the density value is 1.27;
and the dot gain value was high in the mid-tone area of the image.
Abstract: Echocardiography imaging is one of the most common diagnostic tests widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D echocardiography (2DE) is to address two major problems: speckle noise and low quality. Speckle noise reduction is therefore one of the important pre-processing steps used to reduce distortion effects in 2DE image segmentation. In this paper, we present the common filters based on some form of low-pass spatial smoothing, such as the Mean, Gaussian, and Median filters. The Laplacian filter was used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters after being applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quantity measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the output enhanced image.
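The median filter mentioned above suppresses speckle-like impulses while preserving edges better than mean smoothing. A 1-D sketch with window size 3 (edge samples left unchanged; illustrative only, the paper applies 2-D filters):

```python
# Sketch of a 1-D median filter: each interior sample is replaced by the
# median of its window, removing isolated speckle impulses.
import statistics

def median_filter_1d(samples, window=3):
    half = window // 2
    out = list(samples)  # edge samples are left unchanged
    for i in range(half, len(samples) - half):
        out[i] = statistics.median(samples[i - half:i + half + 1])
    return out

scanline = [10, 10, 250, 10, 10]  # one speckle impulse at index 2
print(median_filter_1d(scanline))  # [10, 10, 10, 10, 10]
```

A mean filter on the same scanline would smear the impulse into its neighbors instead of removing it, which is why the median is preferred for speckle.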
Abstract: In modern human computer interaction systems
(HCI), emotion recognition is becoming an imperative characteristic.
The quest for effective and reliable emotion recognition in HCI has
resulted in a need for better face detection, feature extraction and
classification. In this paper we present results of feature space analysis
after briefly explaining our fully automatic vision based emotion
recognition method. We demonstrate the compactness of the feature
space and show how the 2d/3d based method achieves superior features
for the purpose of emotion classification. It is also shown that
feature normalization creates a largely person-independent feature
space. As a consequence, the classifier architecture has
only a minor influence on the classification result. This is particularly
elucidated with the help of confusion matrices. For this purpose
advanced classification algorithms, such as Support Vector Machines
and Artificial Neural Networks are employed, as well as the simple k-
Nearest Neighbor classifier.
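The simplest classifier named above, k-Nearest Neighbor, can be sketched over hypothetical 2-D feature vectors with emotion labels (toy data, not the paper's feature space):

```python
# Sketch of a k-Nearest Neighbor classifier over labeled feature vectors.
import math
from collections import Counter

def knn_classify(query, training, k=3):
    """Majority label among the k training points nearest to the query."""
    nearest = sorted(training, key=lambda item: math.dist(query, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical normalized 2-D features with emotion labels.
training = [((0.1, 0.2), "happy"), ((0.2, 0.1), "happy"),
            ((0.9, 0.8), "angry"), ((0.8, 0.9), "angry"),
            ((0.15, 0.15), "happy")]
print(knn_classify((0.2, 0.2), training))  # happy
```

The abstract's point is that once the feature space is compact and person-independent, even this simple classifier performs comparably to SVMs and neural networks.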
Abstract: In this manuscript, a wavelet-based blind
watermarking scheme is proposed as a means to secure the
authenticity of a fingerprint. The information used for
identification or verification of a fingerprint mainly lies in its
minutiae. By robustly watermarking the minutiae in the fingerprint
image itself, the useful information can be extracted accurately
even if the fingerprint is severely degraded. The minutiae are
converted into a binary watermark, and embedding this watermark in
the detail regions increases the robustness of the watermarking at
little to no additional impact on image quality. It has been
experimentally shown that when the minutiae are embedded into the
wavelet detail coefficients of a fingerprint image in
spread-spectrum fashion using a pseudorandom sequence, robustness
increases while perceptual invisibility decreases in proportion to
the amplification factor K. The DWT-based technique has been found
to be very robust against noise, geometric distortions, filtering,
and JPEG compression attacks, and it also gives remarkably better
performance than the DCT-based technique in terms of correlation
coefficient and number of erroneous minutiae.
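The correlation-coefficient metric used above to compare an extracted watermark sequence with the original can be sketched as follows (values in [-1, 1], where 1 means exact recovery; the ±1 sequences below are hypothetical):

```python
# Sketch of the correlation coefficient between original and extracted
# +/-1 watermark sequences, used as a detection/quality metric.
import math

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

original  = [1, -1, 1, 1, -1, -1]
extracted = [1, -1, 1, 1, -1, -1]  # perfectly recovered after an attack
print(correlation(original, extracted))  # 1.0
```

After an attack (noise, filtering, JPEG compression), a correlation close to 1 indicates the watermark survived; values near 0 indicate it was destroyed.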
Abstract: This paper presents an application of level sets for the
segmentation of abdominal and thoracic aortic aneurysms in CTA
datasets. An important challenge in reliably detecting aortic
aneurysms is the need to overcome problems associated with
intensity inhomogeneities. Level sets belong to an important class
of methods that utilize partial differential equations (PDEs) and
have been extensively applied in image segmentation. A kernel
function in the level set formulation aids the suppression of noise
in the extracted regions of interest and then guides the motion of
the evolving contour for the detection of weak boundaries. The
speed of curve evolution has been significantly improved, with a
resulting decrease in segmentation time compared with previous
implementations of level sets, and the method is shown to be more
effective than other approaches in coping with intensity
inhomogeneities. We have applied the Courant-Friedrichs-Lewy (CFL)
condition as the stability criterion for our algorithm.
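The CFL stability criterion applied above bounds the level-set time step by the grid spacing divided by the maximum propagation speed. A sketch with a hypothetical speed field and a typical CFL number of 0.5 (not values from the paper):

```python
# Sketch of the CFL condition for an explicit level-set update:
# dt <= cfl * dx / max|speed| keeps the evolving front stable.

def cfl_time_step(speed_field, dx, cfl=0.5):
    """Largest stable time step for an explicit update on grid spacing dx."""
    vmax = max(abs(v) for row in speed_field for v in row)
    return cfl * dx / vmax

speeds = [[0.5, 2.0], [1.0, 4.0]]  # hypothetical curve-evolution speeds
print(cfl_time_step(speeds, dx=1.0))  # 0.125
```

Choosing dt above this bound lets the contour cross more than one grid cell per step, which destabilizes the evolution; the speed-up reported in the abstract comes from improving the evolution scheme while still respecting this limit.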