Abstract: The aim of this paper is to propose a general
framework for storing, analyzing, and extracting knowledge from
two-dimensional echocardiographic images, color Doppler images,
non-medical images, and general data sets. A number of high
performance data mining algorithms have been used to carry out this
task. Our framework encompasses four layers, namely physical
storage, object identification, knowledge discovery, and the user level.
Techniques such as active contour model to identify the cardiac
chambers, pixel classification to segment the color Doppler echo
image, universal model for image retrieval, Bayesian method for
classification, parallel algorithms for image segmentation, etc., were
employed. Using the feature vector database that has been
efficiently constructed, one can perform various data mining tasks
such as clustering and classification with efficient algorithms, along
with image mining given a query image. All these facilities are
included in the framework, which is supported by a state-of-the-art user
interface (UI). The algorithms were tested with actual patient data
and the Corel image database, and the results show that their performance
is better than previously reported results.
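As a sketch of the Bayesian classification step over stored feature vectors, the following minimal Gaussian naive Bayes classifier is illustrative only; the abstract does not specify which Bayesian method the framework uses, so the model form and names here are assumptions:

```python
import numpy as np

class GaussianNB:
    # Minimal Gaussian naive Bayes over image feature vectors:
    # per-class mean/variance plus a class prior, with prediction by
    # the maximum log-posterior.
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.stats = {c: (X[y == c].mean(0),
                          X[y == c].var(0) + 1e-9,   # variance floor
                          np.mean(y == c))           # class prior
                      for c in self.classes}
        return self

    def predict(self, X):
        def logpost(x, c):
            mu, var, prior = self.stats[c]
            return np.log(prior) - 0.5 * np.sum(
                np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return np.array([max(self.classes, key=lambda c: logpost(x, c))
                         for x in X])
```

Given a feature vector database, `fit` would run once offline and `predict` would serve classification queries against it.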
Abstract: This paper introduces an effective method of
segmenting Korean text (place names in Korean) from a Korean road
sign image. A Korean advanced directional road sign is composed of
several types of visual information such as arrows, place names in
Korean and English, and route numbers. Automatic classification of
the visual information and extraction of Korean place names from the
road sign images make it possible to avoid a lot of manual inputs to a
database system for management of road signs nationwide. We
propose a series of problem-specific heuristics that correctly segment
Korean place names, which are the most crucial information, from the
other information by effectively discarding non-text information. The
experimental results on a dataset of 368 road sign images show a
detection rate of 96% per Korean place name and 84% per road sign
image.
Abstract: In this paper, we propose a variational EM inference
algorithm for the multi-class Gaussian process classification model
that can be used in the field of human behavior recognition. This
algorithm can simultaneously derive both the posterior distribution of a
latent function and estimators of the hyper-parameters in a multiclass
Gaussian process classification model. Our algorithm is based
on the Laplace approximation (LA) technique and the variational EM
framework, and proceeds in two steps: the expectation step and the
maximization step. First, in the expectation step, using the Bayesian
formula and the LA technique, we approximately derive the posterior
distribution of the latent function, which indicates the possibility that
each observation belongs to a certain class in the Gaussian process
classification model. Second, in the maximization step, using the derived
posterior distribution of the latent function, we compute the maximum
likelihood estimator for the hyper-parameters of the covariance matrix
needed to define the prior distribution of the latent function. These two
steps are repeated iteratively until a convergence condition is satisfied.
Moreover, we apply the proposed algorithm to a human action
classification problem using a public database, namely the KTH
human action data set. Experimental results reveal that the proposed
algorithm shows good performance on this data set.
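The two-step scheme can be sketched on a binary toy case (not the paper's multiclass algorithm; the logistic likelihood, RBF kernel, length-scale grid, and all names below are illustrative assumptions): a Laplace-approximation E-step finds the posterior mode of the latent function, and an M-step picks the kernel hyper-parameter maximizing the approximate evidence.

```python
import numpy as np

def rbf_kernel(X, ell):
    # squared-exponential kernel with a small jitter for stability
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2) + 1e-6 * np.eye(len(X))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def e_step(K, y, iters=20):
    # E-step: Laplace approximation -- Newton iterations to the mode
    # f_hat of log p(y|f) - 0.5 f^T K^{-1} f, labels y in {-1, +1}
    n = len(y)
    f = np.zeros(n)
    for _ in range(iters):
        pi = sigmoid(f)
        grad = (y + 1) / 2.0 - pi          # d/df log p(y|f)
        W = np.diag(pi * (1.0 - pi))       # -d^2/df^2 log p(y|f)
        f = np.linalg.solve(np.eye(n) + K @ W, K @ (W @ f + grad))
    return f

def laplace_evidence(K, y, f):
    # Laplace approximation to the log marginal likelihood at the mode
    pi = sigmoid(f)
    W = np.diag(pi * (1.0 - pi))
    loglik = np.sum(np.log(sigmoid(y * f)))
    _, logdet = np.linalg.slogdet(np.eye(len(y)) + K @ W)
    return loglik - 0.5 * f @ np.linalg.solve(K, f) - 0.5 * logdet

def em_fit(X, y, ell_grid=(0.3, 1.0, 3.0), rounds=3):
    # alternate E-steps with an M-step that selects the kernel
    # length-scale maximizing the approximate evidence
    ell = ell_grid[0]
    for _ in range(rounds):
        scores = {l: laplace_evidence(rbf_kernel(X, l), y,
                                      e_step(rbf_kernel(X, l), y))
                  for l in ell_grid}
        ell = max(scores, key=scores.get)
    return ell, e_step(rbf_kernel(X, ell), y)
```

The grid-search M-step stands in for the gradient-based hyper-parameter estimation the abstract describes; the alternation structure is the point of the sketch.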
Abstract: Petri nets are the first standard for business
process modeling. This is most probably one of the core reasons why
all standards created afterwards have to be transformed so that they
can be mapped onto Petri nets. The paper presents a business process repository based on a
universal database. The repository allows the data
about a given process to be stored in three different ways. The business
process repository is developed with a view to transforming a
given model into a Petri net so that it can be easily simulated. Two different techniques for business process simulation based on
Petri nets, Yasper and Woflan, are discussed. Their advantages and
drawbacks are outlined. The way of simulating business process
models stored in the business process repository is shown.
Abstract: This paper presents the results of a study to assess
crucial aspects and the strength of the scientific basis of a typically
interdisciplinary, applied field: food supply chain risk assessment
research. Our approach is based on an advanced scientometric
analysis, that is, a quantitative study of the disciplines of science based
on published literature, used to measure interdisciplinarity. This paper aims
to describe the quantity and quality of the publication trends in food
supply chain risk assessment. The corpus under study was
composed of 266 articles from the Web of Science database. The results
were analyzed based on date of publication, type of document,
language of the documents, source of publications, subject areas,
authors and their affiliations, and the countries involved in
developing the articles.
Abstract: Magnetic Resonance Imaging (MRI) is one of the
most important medical imaging modalities. Subjective assessment of
the image quality is regarded as the gold standard to evaluate MR
images. In this study, a database of 210 MR images which contains
ten reference images and 200 distorted images is presented. The
reference images were distorted with four types of distortions: Rician
Noise, Gaussian White Noise, Gaussian Blur and DCT compression.
The 210 images were assessed by ten subjects. The subjective scores
were presented as Difference Mean Opinion Scores (DMOS). The
DMOS values were compared with four FR-IQA metrics. We
used the Pearson Linear Correlation Coefficient (PLCC) and the Spearman
Rank Order Correlation Coefficient (SROCC) to validate the DMOS
values. The high correlation values of PLCC and SROCC show that the
DMOS values agree closely with the objective FR-IQA metrics.
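The two validation metrics can be sketched directly (a minimal version for illustration; this rank routine does not average tied ranks, unlike the full SROCC definition):

```python
import numpy as np

def plcc(x, y):
    # Pearson linear correlation coefficient
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def srocc(x, y):
    # Spearman rank-order correlation: PLCC computed on the ranks
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    return plcc(ranks(np.asarray(x)), ranks(np.asarray(y)))
```

Here the DMOS vector would be one argument and an objective FR-IQA metric's scores the other.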
Abstract: In this paper, a novel fuzzy approach is developed
for solving the Dynamic Routing and Wavelength Assignment
(DRWA) problem in optical networks with Wavelength Division
Multiplexing (WDM). In this work, the effects of nonlinear and linear
impairments, namely Four-Wave Mixing (FWM) and amplified
spontaneous emission (ASE) noise respectively, are incorporated. The
novel algorithm incorporates a fuzzy logic controller (FLC) to reduce
the effect of FWM noise and ASE noise on a requested lightpath, and is
referred to in this work as the FWM-aware fuzzy dynamic routing and
wavelength assignment algorithm. The FWM crosstalk products and
the static FWM noise power per link are precomputed in order to
reduce the setup time of a requested lightpath, and stored in an
offline database. These are retrieved during the setting up of a
lightpath and evaluated online, taking dynamic parameters such as
the cost of the links into consideration.
Abstract: A growing demand is felt today for realistic 3D
models enabling the cognition and popularization of historical-artistic
heritage. Evaluation and preservation of Cultural Heritage is
inextricably connected with the innovative processes of gaining,
managing, and using knowledge. The development and perfecting of
techniques for acquiring and elaborating photorealistic 3D models
have made them pivotal elements for popularizing information about
objects on the scale of architectonic structures.
Abstract: In this paper, we present an application of Riemannian
geometry for processing non-Euclidean image data. We consider the
image as residing in a Riemannian manifold in order to develop a new
method for brain edge detection and brain extraction. Automating this
process is a challenge due to the high diversity in the appearance of
brain tissue among different patients and sequences. The main contribution of this paper is the use of an edge-based
anisotropic diffusion tensor for the segmentation task by integrating
both image edge geometry and the Riemannian manifold structure (geodesics,
metric tensor) to regularize the contour convergence and extract
complex anatomical structures. We check the accuracy of the
segmentation results on simulated brain MRI scans of single
T1-weighted, T2-weighted and Proton Density sequences. We
validate our approach using two different databases: BrainWeb
database, and MRI Multiple sclerosis Database (MRI MS DB). We
have compared, qualitatively and quantitatively, our approach with
the well-known brain extraction algorithms. We show that applying
Riemannian manifolds to medical image analysis improves brain
extraction results, in real time, outperforming standard techniques.
Abstract: Throughout history, people have made estimates
and inferences about the future by using their past experiences.
Developing information technologies and improvements in
database management systems make it possible to extract useful
information from the knowledge at hand for strategic decisions.
Therefore, different methods have been developed. Data mining by
association rules learning is one of such methods. Apriori algorithm,
one of the well-known association rules learning algorithms, is not
commonly used in spatio-temporal data sets. However, it is possible
to embed time and space features into the data sets and make Apriori
algorithm a suitable data mining technique for learning spatiotemporal
association rules. Lake Van, the largest lake in Turkey, is a
closed basin. This feature causes the volume of the lake to increase or
decrease as a result of changes in the amount of water it holds. In this study,
the evaporation, humidity, lake altitude, rainfall and
temperature parameters recorded in the Lake Van region over the
years are used by the Apriori algorithm, and a spatio-temporal data
mining application is developed to identify overflows and newly formed
soil regions (underflows) occurring in the coastal parts of
Lake Van. Identifying the possible reasons for overflows and underflows
may be used to alert experts to take precautions and make the
necessary investments.
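Embedding discretized spatio-temporal attributes as ordinary items, as described above, lets a standard Apriori pass mine the rules. A minimal sketch (the item names such as "season=spring" and "overflow" are invented for illustration, not taken from the study's data):

```python
from itertools import combinations  # stdlib only

def apriori(transactions, min_support):
    # Each transaction is a set of items; spatio-temporal attributes
    # are embedded as discretized items, e.g. "season=spring".
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    frequent = list(level)
    k = 2
    while level:
        # candidate generation: join frequent (k-1)-itemsets
        candidates = {a | b for a in level for b in level
                      if len(a | b) == k}
        level = [c for c in sorted(candidates, key=sorted)
                 if support(c) >= min_support]
        frequent.extend(level)
        k += 1
    return {tuple(sorted(s)): support(s) for s in frequent}
```

Association rules would then be read off the frequent itemsets, e.g. high spring rainfall co-occurring with coastal overflow.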
Abstract: The modelling of the processes of building a multimodal
freight transportation support information system is discussed, based
on modern CASE technologies. The functional efficiencies of ports in
the eastern part of the Black Sea are analyzed taking into account
their ecological, seasonal, resource usage parameters. By resources,
we mean capacities of berths, cranes, automotive transport, as well as
work crews and neighbouring airports. For the purpose of designing
the database of the computer support system for the managerial (logistics)
function, the use of an Object-Role Modeling (ORM) tool (NORMA, Natural ORM Architecture) is proposed, after which an Entity-Relationship
Model (ERM) is generated in an automated process.
The software is developed based on a process-oriented and service-oriented architecture, in the Visual Studio .NET environment.
Abstract: In this paper, we present a comparative study of three
methods for 2D face recognition: Iso-Geodesic Curves
(IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram
(GIH). These approaches are based on computing the geodesic
distance between points of the facial surface and between facial curves.
In this study we represent the gray-level image as a 2D surface in
a 3D space, with the third coordinate proportional to the intensity
values of pixels. In the classifying step, we use: Neural Networks
(NN), K-Nearest Neighbor (KNN) and Support Vector Machines
(SVM). The images used in our experiments are from two well-known
face image databases, ORL and YaleB. The ORL database was
used to evaluate the performance of the methods under conditions where
the pose and sample size are varied, and the YaleB database was used
to examine the performance of the systems when the facial
expressions and lighting are varied.
Abstract: The food supply chain is one of the most complex supply
chain networks due to its perishable nature and customer-oriented
products, and food safety is the major concern for this industry. IT
systems could help to minimize the production and consumption of
unsafe food by controlling and monitoring the entire system.
However, there have been many issues in the adoption of IT systems in
this industry, specifically within the SME sector. In this regard, this
study presents a novel approach to using IT and traceability systems in
the food supply chain, based on the application of RFID and a central
database.
Abstract: Many existing amusement parks have been operated
with the assistance of a variety of information and communications
technologies to design friendly and efficient service systems for
tourists. However, these systems leave various levels of decisions for
tourists to make by themselves. This puts pressure on tourists and
thereby brings a negative experience to their tour. This paper
proposes a smart amusement park system that offers each tourist a
GPS-based customized plan without the tourist having to make
decisions alone. The proposed system consists of the mobile app
subsystem, the central subsystem, and the detecting/counting
subsystem. The mobile app subsystem interacts with the central
subsystem. The central subsystem performs the necessary computing
and database management of the proposed system. The
detecting/counting subsystem aims to detect and compute the number
of visitors to an attraction. Experimental results show that the
proposed system can not only work well, but also provide an
innovative business operating model for owners of amusement parks.
Abstract: Measuring the Electrocardiogram (ECG) signal is an
essential process for the diagnosis of heart diseases. The ECG
signal carries information about how well the heart performs its
functions. In medical diagnosis and treatment systems, Decision
Support Systems that process the ECG signal are being developed
for the use of clinicians during medical examination. In this
study, a modular wireless ECG (WECG) measuring and recording
system using a single-board computer and the e-Health sensor platform
is developed. In this modular system, the ECG signal is first taken
from the body surface by the electrodes, then filtered and
converted to digital form. It is then recorded to the health database
using Wi-Fi communication technology. Real-time access to the
ECG data is provided over the internet through the developed
web interface.
Abstract: In this article, a method is presented to effectively
estimate the deformed shape of a thick plate due to line heating. The
method uses a fifth order spline interpolation, with up to C3
continuity at specific points to compute the shape of the deformed
geometry. First and second order derivatives over a surface are the
resulting parameters of a given heating line on a plate. These
parameters are determined through experiments and/or finite element
simulations. Very accurate kriging models are fitted to real or virtual
surfaces to build up a database of maps. Maps of first and second
order derivatives are then applied on numerical plate models to
evaluate their evolving shapes through a sequence of heating lines.
Adding an optimization process to this approach would allow
determining the trajectories of heating lines needed to shape complex
geometries, such as Francis turbine blades.
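The per-segment interpolation driven by end-point derivatives can be sketched as a quintic polynomial matching value, slope, and curvature at both ends of a unit interval. This is an illustrative building block only; the paper's exact fifth-order spline with C3 continuity at specific points is not reproduced here.

```python
import numpy as np

def quintic_coeffs(f0, d0, s0, f1, d1, s1):
    # Coefficients c of p(t) = sum_k c_k t^k on [0, 1] matching the
    # value (f), first derivative (d) and second derivative (s) at
    # both ends -- six conditions for six coefficients.
    A = np.array([[1, 0, 0, 0, 0, 0],     # p(0)   = f0
                  [0, 1, 0, 0, 0, 0],     # p'(0)  = d0
                  [0, 0, 2, 0, 0, 0],     # p''(0) = s0
                  [1, 1, 1, 1, 1, 1],     # p(1)   = f1
                  [0, 1, 2, 3, 4, 5],     # p'(1)  = d1
                  [0, 0, 2, 6, 12, 20]],  # p''(1) = s1
                 dtype=float)
    return np.linalg.solve(A, np.array([f0, d0, s0, f1, d1, s1], float))

def evaluate(c, t):
    # evaluate the polynomial at parameter t
    return sum(ck * t ** k for k, ck in enumerate(c))
```

The first- and second-derivative maps from the kriging database would supply the `d` and `s` values at each segment boundary.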
Abstract: Computer aided diagnosis systems provide vital
opinion to radiologists in the detection of early signs of breast cancer
from mammogram images. Architectural distortions, masses and
microcalcifications are the major abnormalities. In this paper, a
computer aided diagnosis system has been proposed for
distinguishing abnormal mammograms with architectural distortion
from normal mammograms. Four types of texture features (GLCM,
GLRLM, fractal and spectral texture features)
are extracted for the regions of suspicion. A support vector machine
has been used as the classifier in this study. The proposed system yielded
an overall sensitivity of 96.47% and an accuracy of 96% for
mammogram images collected from the Digital Database for Screening
Mammography (DDSM).
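Of the listed descriptors, the GLCM features can be illustrated with a minimal co-occurrence computation (one pixel offset and two classic Haralick statistics; gray-level quantization and the multiple offsets a full system would use are omitted):

```python
import numpy as np

def glcm_features(img, levels, dx=1, dy=0):
    # Gray-Level Co-occurrence Matrix for one offset (dx, dy),
    # normalized to joint probabilities, plus contrast and energy.
    img = np.asarray(img)
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[img[i, j], img[i + dy, j + dx]] += 1
    glcm /= glcm.sum()                       # joint probabilities
    ii, jj = np.meshgrid(np.arange(levels), np.arange(levels),
                         indexing="ij")
    contrast = float((glcm * (ii - jj) ** 2).sum())  # local variation
    energy = float((glcm ** 2).sum())                # uniformity
    return glcm, contrast, energy
```

A region of suspicion quantized to `levels` gray values would yield one such feature pair per offset, concatenated into the SVM input vector.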
Abstract: In this paper, we present a new segmentation approach
for focal liver lesions in contrast enhanced ultrasound imaging. This
approach, based on a two-cluster Fuzzy C-Means methodology,
considers type-II fuzzy sets to handle uncertainty due to the image
modality (presence of speckle noise, low contrast, etc.), and to
calculate the optimum inter-cluster threshold. Fine boundaries are
detected by a local recursive merging of ambiguous pixels. The
method has been tested on a representative database. Compared to
both Otsu and type-I Fuzzy C-Means techniques, the proposed
method significantly reduces the segmentation errors.
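A two-cluster type-I Fuzzy C-Means on gray levels, the simpler baseline the method is compared against, can be sketched as follows (the type-II extension and the local recursive merging of ambiguous pixels are not reproduced; the threshold-as-midpoint choice is an illustrative assumption):

```python
import numpy as np

def fcm_threshold(pixels, m=2.0, iters=100, tol=1e-6):
    # Two-cluster type-I Fuzzy C-Means on gray-level values.
    # Returns the sorted cluster centers and an inter-cluster
    # threshold taken as their midpoint.
    x = np.asarray(pixels, float).ravel()
    c = np.array([x.min(), x.max()])              # initial centers
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12   # distances
        u = d ** (-2.0 / (m - 1.0))               # fuzzy memberships
        u /= u.sum(axis=1, keepdims=True)         # normalize per pixel
        # center update: membership-weighted means
        new_c = (u.T ** m) @ x / (u.T ** m).sum(axis=1)
        if np.abs(new_c - c).max() < tol:
            c = new_c
            break
        c = new_c
    c = np.sort(c)
    return c, float(c.mean())
```

Pixels whose memberships fall near 0.5 for both clusters are the "ambiguous" ones the paper's recursive merging step would then resolve.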
Abstract: One of the most critical decision points in the design of a
face recognition system is the choice of an appropriate face representation.
Effective feature descriptors are expected to convey sufficient, invariant
and non-redundant facial information. In this work we propose a set of
Hahn moments as a new approach for feature description. Hahn moments
have been widely used in image analysis due to their invariance, non-redundancy,
and the ability to extract features either globally or locally.
To assess the applicability of Hahn moments to face recognition we
conduct two experiments on the Olivetti Research Laboratory (ORL)
database and the University of Notre Dame (UND) X1 biometric collection.
The fusion of the global features with the features from local facial
regions is used as input to a conventional k-NN classifier. The
method reaches an accuracy of 93% correctly recognized subjects on
the ORL database and 94% on the UND database.
Abstract: Speaker Identification (SI) is the task of establishing
identity of an individual based on his/her voice characteristics. The SI
task is typically achieved by two-stage signal processing: training and
testing. The training process calculates speaker specific feature
parameters from the speech and generates speaker models
accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though
performance of speaker identification systems has improved due to
recent advances in speech processing techniques, there is still a need for
improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstrum Coefficients
(MFCC) for feature extraction and a suitable combination of vector
quantization (VQ) and Gaussian Mixture Models (GMM) together
with the Expectation Maximization (EM) algorithm for speaker
modeling. The use of a Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and statistical
modeling of background noise in the pre-processing step of the
feature extraction yields a better and more robust automatic speaker
identification system. In addition, investigation of the Linde-Buzo-Gray (LBG)
clustering algorithm for initialization of the GMM, when estimating the
underlying parameters in the EM step, improved the convergence rate
and system performance. The system also uses a relative index as a
confidence measure in case of contradiction between the GMM and VQ
identification results. Simulation results obtained on the voxforge.org
speech database using MATLAB highlight the efficacy of the
proposed method compared to earlier work.
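The LBG codebook construction used for VQ and GMM initialization can be sketched as a generic split-and-refine procedure (the MFCC front end and the GMM hookup are omitted; the perturbation factor and names are illustrative):

```python
import numpy as np

def lbg_codebook(features, size, eps=0.01, iters=20):
    # Linde-Buzo-Gray: start from the global mean, repeatedly split
    # every centroid into a perturbed pair, then refine with
    # Lloyd (k-means style) updates. size should be a power of two.
    X = np.asarray(features, float)
    codebook = X.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # split step: each centroid becomes a perturbed pair
        codebook = np.vstack([codebook * (1 + eps),
                              codebook * (1 - eps)])
        for _ in range(iters):
            # assign each feature vector to its nearest centroid
            d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                if np.any(nearest == k):      # skip empty cells
                    codebook[k] = X[nearest == k].mean(axis=0)
    return codebook
```

The resulting centroids would seed the GMM component means before the EM refinement the abstract describes.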