Abstract: To increase the temperature contrast in thermal images, the characteristics of the electrical conductivity and thermal imaging modalities can be combined. The objective of this experimental study is to observe whether the temperature contrast created by tumor tissue can be improved solely by applying current within medical safety limits. Various thermal breast phantoms were developed to simulate female breast tissue. In vitro experiments were carried out with a thermal infrared camera under controlled conditions. Since the experiments are performed in vitro, there is no metabolic heat generation or blood perfusion; only the effects of the electrical stimulation are investigated. The experimental study uses two-dimensional models. Temperature contrasts due to the tumor tissues are obtained, and cancerous tissue is located using the difference and the ratio of healthy and tumor images. A single tumor 1 cm in diameter produces a temperature contrast of almost 40 m°C on the thermal breast phantom. Electrode artifacts are reduced by taking the difference and ratio of background (healthy) and tumor images. The ratio of healthy and tumor images shows that the temperature contrast is increased by the current application.
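The difference- and ratio-based artifact suppression described above can be sketched as follows; this is a minimal illustration with hypothetical pixel values, not the authors' processing code:

```python
import numpy as np

def contrast_maps(healthy, tumor, eps=1e-6):
    """Difference and ratio of a background (healthy) and a tumor
    thermal image; artifacts common to both frames cancel out."""
    healthy = np.asarray(healthy, dtype=float)
    tumor = np.asarray(tumor, dtype=float)
    diff = tumor - healthy           # additive artifacts cancel
    ratio = tumor / (healthy + eps)  # multiplicative artifacts cancel
    return diff, ratio
```

Additive artifacts such as electrode heating, present identically in both frames, vanish in the difference image, while gain-like variations cancel in the ratio image, leaving the tumor-induced contrast.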
Abstract: Liver cancer is one of the common diseases that cause death, and early detection is important for diagnosis and for reducing mortality. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: first, automatic liver segmentation and lesion detection; second, feature extraction; finally, classification of liver lesions into benign and malignant using a novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of the two areas is then used as the new lesion descriptor. Machine learning classifiers are trained on the new descriptors to automatically classify liver lesions as benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture across patients, demographics, and imaging devices and settings.
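The contrasting feature-difference idea can be illustrated with a minimal sketch; the specific features below (mean, standard deviation, a simple first-difference texture proxy) are hypothetical stand-ins for the intensity and texture features used in the paper:

```python
import numpy as np

def region_features(pixels):
    """Simple intensity/texture features of a region (illustrative choice)."""
    p = np.asarray(pixels, dtype=float)
    return np.array([p.mean(), p.std(), np.abs(np.diff(p)).mean()])

def lesion_descriptor(lesion_pixels, surround_pixels):
    """Feature-difference descriptor: lesion features minus the features of
    the surrounding normal tissue, so patient- or scanner-wide offsets cancel."""
    return region_features(lesion_pixels) - region_features(surround_pixels)
```

Because both regions come from the same scan, a global brightness offset (e.g. from a different scanner setting) shifts both feature vectors equally and cancels in the difference, which is the intuition behind the robustness claim.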
Abstract: The Metropolitan Region of Sao Paulo (MRSP) has suffered from serious water scarcity. Consequently, the most convenient solution has been drilling wells to extract groundwater from local aquifers. However, this requires constant vigilance to prevent over-extraction and future events that could pose a serious threat to the population, such as subsidence. Radar imaging techniques (InSAR) allow continuous investigation of such phenomena. The analysis in the present study covers 23 SAR images dated from October 2007 to March 2011, obtained by the ALOS-1 spacecraft. Data processing was performed with the GMTSAR software, using the InSAR technique to create pairs of interferograms showing ground displacement over different time spans. First results show a correlation between the locations of 102 wells registered in 2009 and signals of ground displacement equal to or lower than -90 millimeters (mm) in the region. The longest time span interferogram obtained covers October 2007 to March 2010. From that interferogram, it was possible to derive the average displacement velocity in millimeters per year (mm/y) and to determine in which areas strong signals have persisted in the MRSP. Four specific areas with subsidence signals of 28 mm/y to 40 mm/y were chosen to investigate the phenomenon: Guarulhos (Sao Paulo International Airport), the Greater Sao Paulo, Itaquera, and Sao Caetano do Sul. The coverage of the signals was between 0.6 km and 1.65 km in length. All areas are located above a sedimentary type of aquifer. Itaquera and Sao Caetano do Sul showed signals varying from 28 mm/y to 32 mm/y. On the other hand, the places most likely to be suffering from stronger subsidence are those in the Greater Sao Paulo and Guarulhos, right beside the International Airport of Sao Paulo. The rate of displacement observed in both regions ranges from 35 mm/y to 40 mm/y.
Previous investigations of water use at the International Airport highlight the risks of the excessive water extraction that was being carried out through 9 deep wells. It is therefore likely that subsidence events will occur and cause serious damage in the area. This study reveals a situation that has not been given proper attention in the city, despite its social and economic consequences. Since data were only available until 2011, the question that remains is whether the situation still persists. A scenario of risk at the International Airport of Sao Paulo that needs further investigation can, however, be reaffirmed.
Abstract: Automatic License Plate Recognition (ALPR) is a technology that recognizes the registration plate (number plate or license plate) of a vehicle. In this paper, an Indian vehicle number plate is extracted and its characters are recognized in an efficient manner. ALPR involves four major stages: i) pre-processing, ii) license plate location identification, iii) individual character segmentation, and iv) character recognition. The opening phase, pre-processing, removes noise and enhances the quality of the image using morphological operations and image subtraction. The second and most challenging phase ascertains the location of the license plate using Canny edge detection, dilation, and erosion. In the third phase, individual characters are segmented by the Connected Component Approach (CCA), and in the final phase, each segmented character is recognized using cross-correlation template matching, a scheme specifically appropriate for fixed formats. Major applications of ALPR are toll collection, border control, parking, stolen car detection, law enforcement, access control, and traffic control. A database of 500 car images taken under varying lighting conditions is used. The efficiency of the system is 97%. Our future focus is Indian vehicle license plate validation (whether the license plate of a vehicle conforms to the Road Transport and Highways standard).
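The final stage, cross-correlation template matching over a fixed-format plate, can be sketched as follows; the tiny 5 × 3 glyphs used in testing are illustrative only, not the paper's templates:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-size character glyphs."""
    a = np.asarray(a, dtype=float).ravel()
    a -= a.mean()
    b = np.asarray(b, dtype=float).ravel()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recognise(glyph, templates):
    """Return the label of the template best correlated with the glyph."""
    return max(templates, key=lambda label: ncc(glyph, templates[label]))
```

Because the plate format is fixed, every segmented character can be resized to the template size and matched directly, which is why the scheme suits this application.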
Abstract: Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral Imaging (HSI) is an attractive tool for researchers in food quality and safety inspection, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection on apples, quality analysis and grading of citrus fruits, bruise detection on strawberries, visualization of the sugar distribution of melons, measuring the ripening of tomatoes, defect detection on pickling cucumbers, and classification of wheat kernels. HSI can be used to concurrently collect large amounts of spatial and spectral data on the objects being observed. The technique yields exceptional detection capabilities that cannot otherwise be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detection of fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving two-pattern (target versus clutter) problems. The conventional FKT method has been improved with kernel machines to increase its nonlinear discrimination ability and capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of ground meat by regarding fat as the target class to be separated from the remaining classes (the clutter). We have applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for the fat ratio in ground meat.
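As a rough illustration of the two-pattern structure the (K)FKT exploits, the following sketch implements the linear FKT, which the kernel version generalizes. In the whitened space, the target and clutter eigenvalues along each axis sum to one, so the axes most discriminative for the target (fat) are the least discriminative for the clutter:

```python
import numpy as np

def fkt_transform(X_target, X_clutter):
    """Linear Fukunaga-Koontz transform (the KFKT reduces to this with a
    linear kernel). Whitens the summed second-moment matrix, then
    eigendecomposes the target matrix in the whitened space."""
    R1 = X_target.T @ X_target / len(X_target)    # target second moments
    R2 = X_clutter.T @ X_clutter / len(X_clutter) # clutter second moments
    d, V = np.linalg.eigh(R1 + R2)
    keep = d > 1e-12
    P = V[:, keep] / np.sqrt(d[keep])             # whitening operator
    lam, U = np.linalg.eigh(P.T @ R1 @ P)         # target eigenvalues in [0, 1]
    return P @ U, lam
```

The defining property, that the transformed clutter eigenvalues are one minus the target eigenvalues, is what makes the same basis useful for separating both classes.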
Abstract: Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological
structures are never stress-free, even when externally unloaded. The
exact origin of these residual stresses is not clear, but theoretically,
growth is one of the main sources. Extracting organ shapes from medical images does not provide any information about the residual stresses in those organs. The simplest cause of such
stresses is gravity since an organ grows under its influence from
birth. Ignoring such residual stresses might cause erroneous results in
numerical simulations. Accounting for residual stresses due to tissue
growth can improve the accuracy of mechanical analysis results. This
paper presents an original computational framework based on gradual
growth to determine the residual stresses due to growth. To illustrate
the method, we apply it to a finite element model of a healthy human
face reconstructed from medical images. The distribution of residual
stress in facial tissues is computed, which can overcome the effect of
gravity and maintain tissues firmness. Our assumption is that tissue
wrinkles caused by aging could be a consequence of decreasing
residual stress and thus not counteracting gravity. Taking into
account these stresses seems therefore extremely important in
maxillofacial surgery. It would indeed help surgeons to estimate
tissues changes after surgery.
Abstract: Magnetic Resonance Imaging Contrast Agents
(MRI-CM) are significant in the clinical and biological imaging as
they have the ability to alter the normal tissue contrast, thereby
affecting the signal intensity to enhance the visibility and detectability
of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles,
coated with dextran or carboxydextran are currently available for
clinical MR imaging of the liver. Most SPIO contrast agents are T2-shortening agents, and Resovist (Ferucarbotran) is a clinically tested, organ-specific SPIO agent with a low-molecular-weight carboxydextran coating. The enhancement effect of
Resovist depends on its relaxivity which in turn depends on factors
like magnetic field strength, concentrations, nanoparticle properties,
pH and temperature. Therefore, this study was conducted to
investigate the impact of field strength and different contrast
concentrations on enhancement effects of Resovist. The study
explored the MRI signal intensity of Resovist in the physiological
range of plasma from T2-weighted spin echo sequence at three
magnetic field strengths: 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4,
r2=95), and 3 T (r1=3.3, r2=160) and the range of contrast
concentrations by a mathematical simulation. The relaxivities r1 and r2 (L mmol^-1 s^-1) were obtained from a previous study, and the selected concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were simulated using TR/TE = 2000 ms/100 ms. According to the reference literature, with increasing magnetic field strength the r1 relaxivity tends to decrease, while r2 shows no systematic relationship with the selected field strengths. In parallel, the results of this study revealed that the signal intensity of Resovist is higher at lower concentrations than at higher ones. The highest signal intensity was observed at the lowest field strength of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T, and 3 T were found at concentration levels of 0.05, 0.06, and 0.05 mmol/L, respectively. Furthermore, at concentrations above these levels the signal intensity decreased exponentially. An inverse relationship was found between field strength and T2 relaxation time: as the field strength increased, the T2 relaxation time decreased accordingly. However, the resulting T2 relaxation times were not significantly different between 0.47 T and 1.5 T in this study. Moreover, a linear correlation of the transverse relaxation rate (1/T2, s^-1) with the concentration of Resovist was observed. From these results, it can be concluded that the concentration of SPIO nanoparticle contrast agents and the MRI field strength are two important parameters affecting the signal intensity of the T2-weighted SE sequence; therefore, both parameters should be considered carefully in MR imaging.
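The kind of simulation described can be sketched with the standard spin-echo signal model; the baseline plasma relaxation times T10 and T20 below are assumed placeholder values, not the ones used in the study:

```python
import numpy as np

def se_signal(C, r1, r2, T10=1.4, T20=1.0, TR=2.0, TE=0.1, M0=1.0):
    """Spin-echo signal for contrast agent concentration C (mmol/L).
    Relaxation rates grow linearly with concentration:
        1/T1 = 1/T10 + r1*C,   1/T2 = 1/T20 + r2*C
    T10/T20 are assumed baseline plasma values (s); TR and TE are in s."""
    R1 = 1.0 / T10 + r1 * C
    R2 = 1.0 / T20 + r2 * C
    return M0 * (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)
```

With the 0.47 T relaxivities quoted above (r1 = 15, r2 = 101), the signal peaks at a low concentration and then decays roughly exponentially as the T2-shortening term dominates, qualitatively reproducing the reported trend.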
Abstract: The increasing availability of information about earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, Lidar) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the Earth. Hence the need to merge large-scale height maps, which are typically made available for free at the worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environment monitoring, etc. Open-source 3D virtual globes, which are a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
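The core merging step, combining a worldwide coarse height map with a local high-resolution tile, might be sketched as follows; nearest-neighbour upsampling is chosen for brevity, whereas a real Terrain Builder would resample and tile the result for CesiumJS:

```python
import numpy as np

def merge_dem(coarse, fine, origin, factor):
    """Merge a coarse worldwide height map with a local high-resolution
    patch: upsample the coarse grid (nearest neighbour) to the fine
    resolution, then overlay the fine tile at `origin` (row, col)."""
    fine = np.asarray(fine, dtype=float)
    out = np.kron(np.asarray(coarse, dtype=float), np.ones((factor, factor)))
    r, c = origin
    fh, fw = fine.shape
    out[r:r + fh, c:c + fw] = fine  # high-resolution data wins where present
    return out
```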
Abstract: Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach over multimedia data for event detection on Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. Two types of data enter the fusion. The first is features extracted from text by the bag-of-words method, weighted using the term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
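Dempster's rule of combination, the mechanism used to fuse the text-based and image-based evidence, can be sketched over mass functions whose focal elements are frozensets; the masses in the test below are illustrative, not the paper's values:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: fuse two mass functions (dicts mapping frozenset
    focal elements to masses), renormalising by the conflicting mass."""
    fused = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # empty intersection: conflicting evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in fused.items()}
```

Fusing two sources that each partially support "event" yields a combined mass higher than either source alone, which is the intuition behind the accuracy gain reported above.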
Abstract: In the deep south of Thailand, checkpoints for people
verification are necessary for the security management of risk zones,
such as official buildings in the conflict area. In this paper, we
propose an automatic checkpoint system that verifies persons using
information from ID cards and facial features. Methods for extracting and verifying a person's information are introduced, based on information such as the ID number and name extracted from official cards and facial images taken from videos. The proposed
system shows promising results and has a real impact on the local
society.
Abstract: This paper outlines the development of an
experimental technique in quantifying supersonic jet flows, in an
attempt to avoid seeding particle problems frequently associated with
particle-image velocimetry (PIV) techniques at high Mach numbers.
Based on optical flow algorithms, the idea behind the technique
involves using high speed cameras to capture Schlieren images of the
supersonic jet shear layers, before they are subjected to an adapted
optical flow algorithm based on the Horn-Schunck method to
determine the associated flow fields. The proposed method is capable
of offering full-field unsteady flow information with potentially
higher accuracy and resolution than existing point-measurements or
PIV techniques. Preliminary study via numerical simulations of a
circular de Laval jet nozzle successfully reveals flow and shock
structures typically associated with supersonic jet flows, which serve
as useful data for subsequent validation of the optical flow based
experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be refined for better robustness and accuracy. Despite these challenges, this supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
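A minimal Horn-Schunck iteration, the basis of the adapted algorithm, might look as follows; the derivative stencils and boundary handling (periodic, for brevity) are simplified relative to practical implementations:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.1, n_iter=200):
    """Minimal Horn-Schunck optical flow between two grey frames.
    alpha weights the smoothness term; neighbour averages use a
    4-neighbour mean with periodic boundaries for brevity."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Iy, Ix = np.gradient(I1)          # spatial derivatives (rows, cols)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        ub = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        vb = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
              np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        num = Ix * ub + Iy * vb + It  # brightness-constancy residual
        den = alpha**2 + Ix**2 + Iy**2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v
```

On a linear intensity ramp translated by one pixel, the iteration converges to a uniform unit flow, which is the kind of synthetic check the validation step relies on.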
Abstract: Fractal-based digital image compression is a specific technique in the field of color image compression. The method is best suited for images of irregular shapes such as snow, clouds, flames, and tree leaves, relying on the fact that parts of an image often resemble other parts of the same image. The technique has drawn much attention in recent years because of the very high compression ratio that can be achieved. Hybrid schemes incorporating fractal compression and speed-up techniques have achieved higher compression ratios than pure fractal compression. Fractal image compression is a lossy compression method that exploits the self-similar nature of an image; it provides a high compression ratio, short encoding time, and a fast decoding process. In this paper, fractal compression with quadtree partitioning and the DCT is proposed to compress color images. The proposed hybrid scheme requires four phases. First, the image is segmented and the Discrete Cosine Transform is applied to each block of the segmented image. Second, the block values are scanned in a zigzag manner so that zero coefficients are grouped together. Third, the resulting image is partitioned into fractals by the quadtree approach. Fourth, the image is compressed using the run-length encoding technique.
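The second and fourth phases, zigzag scanning of the DCT coefficients and run-length encoding, can be sketched as:

```python
def zigzag(block):
    """Zigzag-scan an N x N coefficient block so that high-frequency
    (typically zero-valued) DCT coefficients cluster at the end."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):                  # anti-diagonals
        rng = range(max(0, s - n + 1), min(s, n - 1) + 1)
        idx = rng if s % 2 else reversed(rng)   # alternate direction
        out.extend(block[i][s - i] for i in idx)
    return out

def rle(seq):
    """Run-length encode the trailing-zero-heavy zigzag sequence."""
    runs = []
    for x in seq:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1
        else:
            runs.append([x, 1])
    return [tuple(r) for r in runs]
```

Zigzag ordering places the high-frequency coefficients, mostly zero after quantization, at the end of the sequence, where RLE collapses them into a single run.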
Abstract: In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming has been proposed. An opponent color histogram, which provides invariance to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A current issue in image retrieval is reducing the semantic gap between the user's preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve the user's preferred images. Extensive comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses other existing systems in terms of precision and recall. The proposed work achieves its highest performance with an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval architecture provides a better solution for image retrieval.
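The opponent color histogram can be sketched as below; the bin count and the fixed value ranges are illustrative choices:

```python
import numpy as np

def opponent_histogram(rgb, bins=8):
    """2-D histogram over the two opponent chromatic axes o1, o2, which
    are invariant to a uniform shift in light intensity (the intensity
    axis o3 = (R+G+B)/sqrt(3) is deliberately dropped)."""
    r, g, b = (np.asarray(rgb, dtype=float)[..., i].ravel() for i in range(3))
    o1 = (r - g) / np.sqrt(2.0)
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)
    hist, _, _ = np.histogram2d(o1, o2, bins=bins,
                                range=[[-181.0, 181.0], [-209.0, 209.0]])
    return hist / hist.sum()
```

Because o1 and o2 subtract out a common offset across channels, the histogram is unchanged when the same constant is added to R, G, and B, which is the source of the claimed lighting invariance.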
Abstract: In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differentiation (FD) is applied to enhance the liver CT images, with the aim of sharpening texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a variety of feature improvements, which increases the classification accuracy. Each image is divided into N x N non-overlapping blocks from which the desired features are extracted. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the test set. Finally, each block cell is classified as tumor or non-tumor. Our approach is validated on a group of patients' CT liver tumor datasets. The experimental results demonstrate the detection efficiency of the proposed technique.
Abstract: Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors contain important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications; however, classifying and identifying objects manually from images is difficult. Object recognition is often treated as a classification problem, a task that can be performed using machine-learning techniques. Among the many machine-learning algorithms, classification is carried out here with supervised classifiers such as Support Vector Machines (SVM), since the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; we generated the attributes by considering pixel intensity values and mean reflectance values. We demonstrate the benefits of using knowledge discovery and data-mining techniques on image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
Abstract: Segmentation of left ventricle (LV) from cardiac
ultrasound images provides a quantitative functional analysis of the
heart to diagnose disease. Active Shape Model (ASM) is widely used
for LV segmentation, but it suffers from the drawback that
initialization of the shape model is not sufficiently close to the target,
especially when dealing with abnormal shapes in disease. In this work,
a two-step framework is proposed to achieve fast and efficient LV segmentation. First, a robust and efficient detector based on Hough forests localizes cardiac feature points. These feature points are used to predict the initial fit of the LV shape model. Second, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. With the robust initialization, ASM is able to achieve more accurate segmentation. The performance of the proposed method is evaluated on a dataset of 810 cardiac ultrasound images, most of which contain abnormal shapes. The proposed method is compared with several combinations of ASM and existing initialization methods. Our experimental results demonstrate that the accuracy of the proposed feature point detection for initialization is 40% higher than that of the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops and thus speeds up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes associated with cardiac diseases, such as left atrial enlargement.
Abstract: Digital images are widely used in computer
applications. To store or transmit the uncompressed images
requires considerable storage capacity and transmission bandwidth.
Image compression is a means to perform transmission or storage of
visual data in the most economical way. This paper explains how images can be encoded for transmission over a multiplexed time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR), and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are then encoded and used to generate chirps at a target rate of about two chirps per 4 × 4 pixel block, before being submitted to a multiplexing operation in the time-frequency domain.
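The polynomial-approximation step can be sketched with a Vandermonde-style least-squares fit of a 4 × 4 block, together with the PSNR used for comparison; the particular six-term basis is an illustrative choice, not the paper's exact basis:

```python
import numpy as np

def fit_block(block, n_coeff=6):
    """Least-squares fit of a 4 x 4 pixel block by a 2-D polynomial basis
    (Vandermonde-style columns 1, x, y, xy, x^2, y^2), keeping fewer
    coefficients than pixels to save transmitted information."""
    y, x = np.mgrid[0:4, 0:4]
    x = x.ravel() / 3.0               # normalise coordinates to [0, 1]
    y = y.ravel() / 3.0
    A = np.column_stack([np.ones(16), x, y, x * y, x**2, y**2])[:, :n_coeff]
    coeff, *_ = np.linalg.lstsq(A, np.asarray(block, dtype=float).ravel(),
                                rcond=None)
    return coeff, (A @ coeff).reshape(4, 4)

def psnr(orig, approx, peak=255.0):
    """Peak signal-to-noise ratio between a block and its approximation."""
    mse = np.mean((np.asarray(orig, dtype=float) - approx) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

Six coefficients instead of sixteen pixel values is the kind of trade-off the abstract describes: a smooth block is reproduced almost exactly, while fine texture within the block is lost.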
Abstract: The aim of this paper is to propose a general
framework for storing, analyzing, and extracting knowledge from
two-dimensional echocardiographic images, color Doppler images,
non-medical images, and general data sets. A number of high
performance data mining algorithms have been used to carry out this
task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level.
Techniques such as active contour model to identify the cardiac
chambers, pixel classification to segment the color Doppler echo
image, universal model for image retrieval, Bayesian method for
classification, parallel algorithms for image segmentation, etc., were
employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Abstract: In this paper, for an arbitrary multiplicative functional
f from the set of all upper triangular fuzzy matrices to the fuzzy
algebra, we prove that there exist a multiplicative functional F and a
functional G from the fuzzy algebra to the fuzzy algebra such that the
image of an upper triangular fuzzy matrix under f can be represented
as the product of all the images of its main diagonal elements under
F and other elements under G.
Abstract: This paper introduces an effective method of
segmenting Korean text (place names in Korean) from a Korean road
sign image. A Korean advanced directional road sign is composed of
several types of visual information such as arrows, place names in
Korean and English, and route numbers. Automatic classification of
the visual information and extraction of Korean place names from the
road sign images make it possible to avoid a lot of manual inputs to a
database system for the nationwide management of road signs. We propose a series of problem-specific heuristics that correctly segment Korean place names, the most crucial information, from the other information by effectively discarding non-text content. The
experimental results with a dataset of 368 road sign images show 96%
of the detection rate per Korean place name and 84% per road sign
image.