Abstract: This paper presents a dominant color descriptor (DCD) technique for medical image retrieval. Medical images are collected and stored in a medical database. The purpose of the DCD technique is to retrieve medical images and to display images similar to a query image. First, the technique searches for and retrieves a medical image based on a keyword entered by the user. Once an image is found, the system assigns it as the query image. The DCD technique then calculates the dominant color value of the image, and the system searches the database again for medical images based on the dominant color value of the query image. Finally, the system displays the images similar to the query image. A simple application has been developed and tested using the dominant color descriptor. Experimental results indicate that this technique is effective and can be used for medical image retrieval.
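The dominant-color step can be sketched in a few lines. The MPEG-7 DCD typically clusters pixel colors; the simplified version below, an assumption for illustration, quantizes each RGB channel and keeps the most frequent quantized colors with their relative frequencies:

```python
from collections import Counter

def dominant_colors(pixels, bits=2, top=3):
    """Simplified dominant-color extraction: quantize each RGB channel to
    `bits` bits and return the most frequent quantized colors with their
    relative frequencies (a stand-in for the full MPEG-7 DCD)."""
    shift = 8 - bits
    quantized = [(r >> shift, g >> shift, b >> shift) for r, g, b in pixels]
    counts = Counter(quantized)
    total = len(pixels)
    return [(color, n / total) for color, n in counts.most_common(top)]

# A toy "image": mostly red pixels with a few blue ones.
image = [(250, 10, 10)] * 8 + [(10, 10, 250)] * 2
print(dominant_colors(image, top=2))  # [((3, 0, 0), 0.8), ((0, 0, 3), 0.2)]
```

Images whose dominant-color lists are close (e.g. by weighted color distance) would then be ranked as similar to the query image.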
Abstract: Breast skin-line estimation and breast segmentation are important pre-processing steps in mammogram image processing and in computer-aided diagnosis of breast cancer. Limiting the area to be processed to a specific target region in an image increases the accuracy and efficiency of processing algorithms. In this paper we present a new algorithm for skin-line estimation and breast segmentation using the fast marching method. Fast marching is a numerical technique, based on partial differential equations, for tracking the evolution of interfaces. We have introduced some modifications to the traditional fast marching method, specifically to improve the accuracy of skin-line estimation and breast tissue segmentation. The proposed modifications ensure that the evolving front stops near the desired boundary. We have evaluated the performance of the algorithm using 100 mammogram images taken from the mini-MIAS database. The experimental results indicate that the algorithm covers 98.6% of the ground-truth breast region and that the segmentation accuracy is 99.1%. The algorithm is also capable of partially extracting the nipple when it is visible in the profile.
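The front-propagation idea can be illustrated with a first-order, 4-neighbour sketch, which reduces to a Dijkstra-style traversal rather than the full upwind eikonal solver; the stopping behaviour near low-speed cells mirrors, in spirit, the authors' modification that halts the front near the desired boundary. The grid and speeds below are hypothetical:

```python
import heapq
import math

def fast_march(speed, seed):
    """Simplified fast-marching front propagation on a 2-D grid.
    `speed` gives a propagation speed per cell; (near-)zero-speed cells
    act as barriers, so the evolving front stops near them.  This is a
    first-order 4-neighbour sketch, not the full eikonal solver."""
    rows, cols = len(speed), len(speed[0])
    T = [[math.inf] * cols for _ in range(rows)]  # arrival times
    T[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > T[r][c]:
            continue  # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 1e-6:
                nt = t + 1.0 / speed[nr][nc]  # arrival time grows as 1/speed
                if nt < T[nr][nc]:
                    T[nr][nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return T

# Uniform speed except a zero-speed "boundary" column that halts the front.
grid = [[1.0, 1.0, 0.0, 1.0] for _ in range(3)]
times = fast_march(grid, (0, 0))
print(times[0])  # cells behind the barrier are never reached
```

In the segmentation setting, the speed function would be derived from the mammogram intensities so the front slows and stops near the skin line.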
Abstract: This analysis concentrates on the productivity trend of the knowledge management literature, covering publications indexed under the subject "knowledge management" in the SSCI database. Its purpose is to summarize trend information for knowledge management researchers, since core knowledge tends to be concentrated in core categories. The results indicate that the productivity of the literature on knowledge management is still increasing strongly. We demonstrate the trend across different categories, including author, country/territory, institution name, document type, language, publication year, and subject area. By focusing on the right categories, researchers can locate the core research information. This implies that the phenomenon of "success breeds success" is more common in higher-quality publications.
Abstract: Nowadays, organizations and businesses have several motivating factors to protect an individual's privacy. Confidentiality concerns how information is shared with third parties. It always refers to private information, especially personal information that usually needs to be kept private. Because of the importance of privacy concerns today, we need to design database systems with privacy in mind. Agrawal et al. introduced the Hippocratic Database (HD), which we refer to here as a privacy-aware database. This paper explains how the HD can be a future trend for web-based applications, enhancing their level of trustworthiness regarding privacy among internet users.
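The central Hippocratic Database idea, purpose-based access control, can be sketched briefly: each attribute is stored with the purposes the data subject consented to, and every query must state its purpose. The table, attributes, and purposes below are hypothetical illustrations, not the paper's schema:

```python
# Consent policy: which purposes each attribute may be used for (hypothetical).
PRIVACY_POLICY = {
    "email":   {"order-confirmation"},
    "address": {"shipping"},
    "phone":   set(),                 # no consent given for any purpose
}

def purpose_query(record, attributes, purpose):
    """Return only the attributes whose consent policy permits this purpose."""
    return {a: record[a] for a in attributes
            if purpose in PRIVACY_POLICY.get(a, set())}

customer = {"email": "u@example.com", "address": "1 Main St", "phone": "555-0100"}
print(purpose_query(customer, ["email", "address", "phone"], "shipping"))
# only 'address' is released for the 'shipping' purpose
```

A full HD additionally enforces limited collection, retention, and disclosure, but this query-time purpose check conveys the core mechanism.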
Abstract: In this paper we present a novel approach to wavelet compression of electrocardiogram (ECG) signals based on the set partitioning in hierarchical trees (SPIHT) coding algorithm. The SPIHT algorithm has achieved prominent success in image compression. Here we use a modified version of SPIHT for one-dimensional signals. We applied the wavelet transform with the SPIHT coding algorithm to different records of the MIT-BIH database. The results show the high efficiency of this method for ECG compression.
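The wavelet stage of such a coder can be sketched with a one-level Haar transform; SPIHT's zerotree bit allocation is considerably more involved, so simple coefficient thresholding stands in for it here, and the toy signal is an assumption:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1-D Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def ihaar_1d(approx, detail):
    """Inverse of one Haar level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

# Toy "ECG" segment: transform, drop small coefficients, reconstruct.
signal = np.array([2.0, 2.0, 4.0, 4.0, 8.0, 8.0, 2.0, 2.0])
a, d = haar_1d(signal)
d[np.abs(d) < 0.5] = 0.0   # thresholding stands in for SPIHT's bit allocation
rec = ihaar_1d(a, d)
print(np.allclose(rec, signal))  # True: nothing significant was discarded
```

In the actual coder, several decomposition levels are taken and SPIHT orders the coefficients by significance to produce an embedded bitstream.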
Abstract: Data Warehouses (DWs) are repositories which contain the unified history of an enterprise for decision support. The data must be Extracted from information sources, Transformed and integrated to be Loaded (ETL) into the DW, using ETL tools. These tools focus on data movement, and their models are only used as a means to this end. From a conceptual viewpoint, the authors want to innovate the ETL process in two ways: 1) making the compatibility between models explicit in a declarative fashion, using correspondence assertions, and 2) identifying the instances of different sources that represent the same real-world entity. This paper presents an overview of the proposed framework for modeling the ETL process, which is based on the use of a reference model and perspective schemata. This approach provides the designer with a better understanding of the semantics associated with the ETL process.
Abstract: This paper analyzes different fine-grained security techniques for relational databases with respect to two variables: data accessibility and inference. Data accessibility measures the amount of data available to users after applying a security technique to a table. Inference is the proportion of information leakage after suppressing a cell containing secret data. A row containing a suppressed secret cell can become a security threat if an intruder derives useful information from the related visible information in the same row. This paper measures the data accessibility and inference associated with row-, cell-, and column-level security techniques. Cell-level security offers the greatest data accessibility, as it suppresses secret data only; on the other hand, it carries a high probability of inference. Row- and column-level security techniques have the least data accessibility and inference. This paper introduces the cell plus innocent security technique, which uses the cell-level security method but additionally suppresses some innocent data so that an intruder cannot assume that a suppressed cell necessarily contains secret data. Four variations of the technique, namely cell plus innocent 1/4, 2/4, 3/4, and 4/4, are introduced; they suppress innocent data equal to 1/4, 2/4, 3/4, and 4/4 of the amount of true secret data in the database, respectively. Results show that the new technique offers better control over data accessibility and inference than the state-of-the-art security techniques. The paper further discusses how the techniques can be used in combination, and shows that the cell plus innocent 1/4, 2/4, and 3/4 techniques can be used as a replacement for cell-level security.
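The suppression step of the cell plus innocent idea can be sketched as follows; the table, the choice of secret cells, and the random selection of innocent cells are hypothetical illustrations:

```python
import random

def cell_plus_innocent(table, secret_cells, fraction, rng=None):
    """Sketch of 'cell plus innocent' suppression: suppress every secret
    cell, then additionally suppress `fraction` (1/4 .. 4/4) times as many
    randomly chosen innocent cells, so that suppression alone no longer
    reveals which cells are secret.  Cell addresses are (row, col) pairs."""
    rng = rng or random.Random(0)   # fixed seed for a reproducible sketch
    innocent = [(r, c) for r in range(len(table))
                for c in range(len(table[0])) if (r, c) not in secret_cells]
    extra = rng.sample(innocent, round(fraction * len(secret_cells)))
    suppressed = set(secret_cells) | set(extra)
    return [[None if (r, c) in suppressed else v
             for c, v in enumerate(row)] for r, row in enumerate(table)]

table = [["alice", 50000], ["bob", 60000], ["carol", 70000], ["dave", 80000]]
secret = {(0, 1), (2, 1)}                       # two secret salary cells
masked = cell_plus_innocent(table, secret, fraction=2/4)
print(sum(v is None for row in masked for v in row))  # 2 secret + 1 innocent = 3
```

With fraction 2/4, one innocent cell is suppressed for every two secret cells, trading a little data accessibility for lower inference.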
Abstract: In a handwriting recognition problem, characters can be represented using chain codes. The main problem in representing characters with chain codes is optimizing the length of the chain code. This paper proposes a randomized algorithm to minimize the length of Freeman chain codes (FCC) generated from isolated handwritten characters. A feedforward neural network is used in the classification stage to recognize the character images. Our test results show that, by applying the proposed model, we reach a relatively high accuracy for the isolated handwritten character problem when tested on the NIST database.
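Freeman chain coding itself is straightforward: each move between consecutive 8-connected boundary pixels is encoded as one of eight direction codes. The sketch below shows the encoding only; the randomized length-minimization the paper proposes is not reproduced here:

```python
# 8-directional Freeman codes: 0 = east, then counter-clockwise in 45° steps.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain_code(points):
    """Encode a boundary (list of (x, y) points, with consecutive points
    8-connected) as a Freeman chain code."""
    return [DIRECTIONS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

# Boundary of a 2x2 square traced counter-clockwise from the origin.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
          (1, 2), (0, 2), (0, 1), (0, 0)]
print(freeman_chain_code(square))  # [0, 0, 2, 2, 4, 4, 6, 6]
```

The resulting code sequences (padded or truncated to a fixed length) can then serve as input features for the neural-network classifier.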
Abstract: Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance-based has emerged as the dominant solution. Many comparative studies of the performance of appearance-based methods have already been presented in the literature, often with inconclusive and sometimes contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance-based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance-based methods, principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are employed in the experiments to evaluate the susceptibility of the appearance-based methods to various image degradations which can occur in real-life operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
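Of the three methods compared, PCA ("eigenfaces") is the simplest to sketch: face images are flattened into vectors and projected onto the top principal axes of the training set. The random data stands in for real face images; LDA and ICA are not shown:

```python
import numpy as np

def pca_subspace(faces, k):
    """PCA ('eigenfaces') sketch: `faces` is an (n_samples, n_pixels)
    matrix.  Returns the mean face and the top-k principal axes, computed
    via SVD of the mean-centred data."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]            # each row of vt is a principal axis

def project(face, mean, axes):
    """Low-dimensional feature vector used for matching/verification."""
    return axes @ (face - mean)

rng = np.random.default_rng(0)
faces = rng.random((20, 64))       # 20 fake 'images' of 64 pixels each
mean, axes = pca_subspace(faces, k=5)
print(project(faces[0], mean, axes).shape)  # (5,)
```

Verification then compares the projected feature vectors of a probe and a claimed identity under some distance or similarity measure.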
Abstract: Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on adaptive piecewise constant approximation (APCA) and piecewise derivative dynamic time warping (PDDTW). The results are tested on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.
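PDDTW builds on classic dynamic time warping, which aligns two sequences by minimizing cumulative pointwise cost over admissible warping paths; the derivative and piecewise extensions are not reproduced in this sketch:

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D sequences
    (the basis that PDDTW extends with piecewise derivatives)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([1, 2, 3], [1, 2, 3]))      # 0.0
print(dtw([1, 2, 3], [1, 2, 2, 3]))   # 0.0: warping absorbs the repeated sample
```

In segmentation, the warping path between a template beat and an unannotated beat transfers the template's boundary annotations onto the new signal.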
Abstract: In this paper, we propose a novel fast algorithm for searching for short MPEG video clips in a video database. The algorithm is based on adjacent pixel intensity difference quantization (APIDQ), which had previously been applied reliably to human face recognition. An APIDQ histogram is utilized as the feature vector of each frame image. Instead of fully decompressed video frames, partially decoded data, namely DC images, are utilized. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video by searching for 200 given MPEG video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an equal error rate (EER) of 3% is achieved, which is more accurate and robust than the conventional fast video search algorithm.
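An APIDQ-style frame feature can be sketched as follows: take intensity differences between adjacent pixels, quantize them, and histogram the quantizer outputs. The uniform bin boundaries and horizontal-only differences below are simplifying assumptions; the published quantizer may differ:

```python
def apidq_histogram(image, levels=8, max_diff=255):
    """Sketch of an APIDQ-style feature: quantize intensity differences
    between horizontally adjacent pixels into `levels` uniform bins and
    return the normalized histogram of bin indices."""
    hist = [0] * levels
    step = (2 * max_diff + 1) / levels          # differences span [-255, 255]
    for row in image:
        for p, q in zip(row, row[1:]):
            hist[int((q - p + max_diff) // step)] += 1
    total = sum(hist)
    return [h / total for h in hist]            # normalized, frame-comparable

frame = [[10, 10, 200], [200, 10, 10]]          # a tiny hypothetical DC image
print(apidq_histogram(frame))
```

Because the histogram is computed from DC images, no full MPEG decode is needed, which is what makes the per-frame feature extraction fast.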
Abstract: Multimedia, as it stands now, is perhaps the most diverse and rich medium around the globe. One of the major needs in multimedia is a single system that enables people to efficiently search through their multimedia catalogues. Many domain-specific systems and architectures have been proposed, but until now no generic and complete architecture has been proposed. In this paper, we suggest a generic architecture for a multimedia database. The main strengths of our architecture, besides being generic, are semantic libraries to reduce the semantic gap, levels of feature extraction for more specific and detailed feature extraction according to the classes defined at the prior level, and the merging of two query types, i.e., text and QBE (query by example), for more accurate yet detailed results.
Abstract: In this paper, we represent protein structures using graphs, so that a protein structure database becomes a graph database. Each graph is represented by a spectral vector. We use the Jacobi rotation algorithm to calculate the eigenvalues of the normalized Laplacian derived from the adjacency matrix of the graph. To measure the similarity between two graphs, we calculate the Euclidean distance between their spectral vectors. To cluster the graphs, we use an M-tree with the Euclidean distance on the spectral vectors. In addition, the M-tree can be used for graph searching in the graph database. Our proposed method was tested on a graph database of 100 graphs representing 100 protein structures downloaded from the Protein Data Bank (PDB), and we compared the results with the SCOP hierarchical structure.
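The spectral-vector construction can be sketched directly; `np.linalg.eigvalsh` stands in for the Jacobi rotation solver (both compute eigenvalues of a symmetric matrix), and the toy graphs and the assumption of no isolated vertices are illustrative:

```python
import numpy as np

def spectral_vector(adjacency):
    """Eigenvalues (ascending) of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}.  Assumes no isolated vertices."""
    A = np.asarray(adjacency, dtype=float)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(L)       # already sorted ascending

def graph_distance(g1, g2):
    """Euclidean distance between two spectral vectors.  In practice the
    vectors must be padded/truncated to equal length; equal sizes assumed."""
    return float(np.linalg.norm(spectral_vector(g1) - spectral_vector(g2)))

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(graph_distance(triangle, triangle))  # 0.0
print(graph_distance(triangle, path) > 0)  # True: different spectra
```

Since this distance is a metric on fixed-length vectors, the spectral vectors can be indexed directly by an M-tree for similarity search.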
Abstract: Technological concepts such as the wireless hospital and portable cardiac telemetry systems require the development of physiological signal acquisition devices that can be easily integrated into the hospital database. In this paper we present low-cost, portable wireless ECG acquisition hardware that transmits ECG signals to a dedicated computer. The front end of the system obtains and processes incoming signals, which are then transmitted via a microcontroller and a wireless Bluetooth module. A Bluetooth-based end-user application for monitoring, integrated with a patient database management module, has been developed for the computer. The system acts as a continuous event recorder, which can be used to follow up patients who have been resuscitated from cardiac arrest or ventricular tachycardia, but also for diagnostic purposes in patients with arrhythmia symptoms. In addition, cardiac information can be saved into the patient's database at the hospital.
Abstract: Discovering new biological knowledge from high-throughput biological data is a major challenge for bioinformatics today. To address this challenge, we developed a new approach for protein classification. Proteins that are evolutionarily, and thereby functionally, related are said to belong to the same classification. Identifying protein classifications is of fundamental importance for documenting the diversity of the known protein universe. It also provides a means to determine the functional roles of newly discovered protein sequences. Our goal is to predict the functional classification of novel protein sequences based on a set of features extracted from each protein sequence. The proposed technique uses datasets extracted from the Structural Classification of Proteins (SCOP) database. A set of spectral-domain features based on the fast Fourier transform (FFT) is used. The proposed classifier uses a multilayer back-propagation (MLBP) neural network for protein classification. The maximum classification accuracy is about 91% when applying the classifier to the full four levels of the SCOP database, and reaches a maximum of 96% when the classification is limited to the family level. The classification results reveal that the spectral domain contains information that can be used for classification with high accuracy. In addition, the results emphasize that sequence similarity measures are of great importance, especially at the family level.
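The spectral feature extraction can be sketched as follows. The residue-to-number mapping below is a hypothetical hydrophobicity-like scale, and keeping the first few FFT magnitudes is an illustrative choice; the paper's exact encoding is not specified here:

```python
import numpy as np

# Hypothetical residue-to-number mapping (hydrophobicity-like values).
SCALE = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
         "G": -0.4, "L": 3.8, "K": -3.9, "S": -0.8, "T": -0.7}

def fft_features(sequence, n_coeffs=8, n_fft=64):
    """Map a protein sequence to numbers, take an n_fft-point FFT, and keep
    the magnitudes of the first n_coeffs coefficients as a fixed-length
    spectral feature vector (the input to a classifier such as an MLBP
    neural network)."""
    x = np.array([SCALE.get(res, 0.0) for res in sequence])
    spectrum = np.abs(np.fft.fft(x, n=n_fft))
    return spectrum[:n_coeffs]

features = fft_features("ARNDCGLKST" * 3)
print(features.shape)  # (8,)
```

Zero-padding to a fixed FFT length makes sequences of different lengths comparable, which is what allows a fixed-topology network to classify them.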
Abstract: This article describes Uruk, the virtual museum of Iraq that we developed for visual exploration and retrieval of image collections. The system largely exploits the loosely structured hierarchy of XML documents, which provides a useful representation method for storing semi-structured or unstructured data that does not easily fit into existing databases. The system offers users the capability to mine and manage XML-based image collections through a web-based graphical user interface (GUI). In a typical interactive session with the system, the user can browse a visual structural summary of the XML database in order to select interesting elements. Using this intermediate result, queries combining structural and textual references can be composed and presented to the system. After query evaluation, the full set of answers is presented in a visual and structured way.
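A query combining a structural path with a textual reference can be sketched with the standard library's ElementTree; the XML collection below is a hypothetical stand-in for the museum's data, not the actual Uruk schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML image collection in the loosely structured style described.
doc = ET.fromstring("""
<museum>
  <gallery name="Uruk">
    <image id="u1"><caption>clay tablet with cuneiform</caption></image>
    <image id="u2"><caption>cylinder seal</caption></image>
  </gallery>
</museum>""")

def structural_query(root, path, keyword):
    """Combine a structural reference (an element path) with a textual one:
    return the ids of images under `path` whose caption contains `keyword`."""
    return [img.get("id") for img in root.findall(path)
            if keyword in img.findtext("caption", default="")]

print(structural_query(doc, "./gallery/image", "tablet"))  # ['u1']
```

The structural summary the user browses corresponds to the element paths; selecting one fixes `path`, after which the textual condition refines the answer set.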
Abstract: In this paper we describe a hybrid technique combining minimax search and aggregate Mahalanobis distance function synthesis to evolve an Awale game player. The hybrid technique helps to suggest a move in a short amount of time without consulting an endgame database. However, the effectiveness of the technique is heavily dependent on the training dataset of Awale strategies utilized. The evolved player was tested against an Awale shareware program, and the results are promising.
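The search side of the hybrid can be sketched as plain minimax over a game tree; the toy tree below is hypothetical, and the aggregate Mahalanobis distance evaluation from the paper is replaced by a trivial leaf score:

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Plain minimax sketch.  `children(state)` yields successor states;
    `evaluate(state)` is the static evaluation, which in the paper is an
    aggregate Mahalanobis distance to recorded Awale strategies (that
    distance function is not reproduced here)."""
    moves = list(children(state))
    if depth == 0 or not moves:
        return evaluate(state)
    values = [minimax(s, depth - 1, not maximizing, children, evaluate)
              for s in moves]
    return max(values) if maximizing else min(values)

# Toy game tree: states are tuples; leaves are scored by their last element.
tree = {(): [(3,), (5,)], (3,): [],
        (5,): [(5, 1), (5, 9)], (5, 1): [], (5, 9): []}
best = minimax((), depth=2, maximizing=True,
               children=lambda s: tree[s],
               evaluate=lambda s: s[-1] if s else 0)
print(best)  # max(3, min(1, 9)) = 3
```

Because the learned evaluation scores interior positions, the search can cut off at a shallow depth and still suggest a move quickly, without an endgame database.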
Abstract: This paper describes a new approach to fingerprint classification based on the distribution of local features (minute details, or minutiae) of the fingerprints. The main advantage is that fingerprint classification provides an indexing scheme to facilitate efficient matching in a large fingerprint database. A set of rules based on a heuristic approach is proposed. The area around the core point is treated as the area of interest for extracting the minutiae features, as there are substantial variations around the core point compared to the areas away from it. The core point of a fingerprint is located at the point of maximum curvature. The experimental results report an overall average accuracy of 86.57% in fingerprint classification.
Abstract: This paper proposes a hybrid method for eye localization in facial images. The novelty lies in combining techniques that utilise colour, edge and illumination cues to improve accuracy. The method is based on the observation that eye regions have dark colour, a high density of edges and low illumination compared to other parts of the face. The first step of the method is to extract connected regions from facial images using the colour, edge density and illumination cues separately. Some of the regions are then removed by applying rules based on the general geometry and shape of eyes. The remaining connected regions obtained through the three cues are then combined in a systematic way to enhance the identification of candidate regions for the eyes. The geometry- and shape-based rules are then applied again to further remove false eye regions. The proposed method was tested using images from the PICS facial image database and achieves accuracies of 93.7% for initial blob extraction and 87% for final eye detection.
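The cue-combination step can be sketched as voting over three binary masks. The voting threshold is an assumption for illustration; the paper describes a more systematic combination followed by geometry- and shape-based filtering:

```python
def combine_cues(colour, edges, illum, min_votes=2):
    """Combine three binary cue masks (dark colour, high edge density,
    low illumination) by voting: a pixel is an eye candidate if at least
    `min_votes` of the cues agree.  Masks are lists of 0/1 rows."""
    return [[(c + e + i) >= min_votes
             for c, e, i in zip(row_c, row_e, row_i)]
            for row_c, row_e, row_i in zip(colour, edges, illum)]

# One-row toy masks: only pixels supported by at least two cues survive.
colour = [[1, 1, 0]]
edges  = [[1, 0, 0]]
illum  = [[1, 1, 1]]
print(combine_cues(colour, edges, illum))  # [[True, True, False]]
```

Requiring agreement between cues suppresses regions that look eye-like under only one cue, which is the intuition behind combining them at all.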
Abstract: Online trading is an alternative to conventional shopping. People trade goods which are new or pre-owned. However, there are times when a user is not able to find the desired items online, because the items may not have been posted yet, and thus the search ends. A conventional search mechanism only works by matching the search criteria (the requirement) against the data currently available in a particular database. This research aims to match current search requirements with future postings, thereby introducing the time factor into the conventional search method. A Car Matching Alert System (CMAS) prototype was developed to test the matching algorithm. When a buyer's search returns no result, the system saves the search, and the buyer is alerted if a match is found among future postings. The algorithm developed is useful, as it can also be applied in other search contexts.
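The save-and-alert mechanism can be sketched in a few lines; the attribute names and the exact-match criteria below are hypothetical, not CMAS's actual schema:

```python
saved_searches = []   # unmatched search criteria kept for future postings

def search(inventory, criteria):
    """Match criteria against current postings; if nothing matches, save
    the search so that future postings can trigger an alert (the core
    idea behind the CMAS prototype, sketched here with exact matching)."""
    hits = [item for item in inventory
            if all(item.get(k) == v for k, v in criteria.items())]
    if not hits:
        saved_searches.append(criteria)
    return hits

def post(inventory, item):
    """Add a new posting and return alerts for any saved search it satisfies."""
    inventory.append(item)
    alerts = [c for c in saved_searches
              if all(item.get(k) == v for k, v in c.items())]
    for c in alerts:
        saved_searches.remove(c)
    return alerts

cars = [{"make": "Proton", "year": 2004}]
print(search(cars, {"make": "Perodua"}))              # [] -> search is saved
print(post(cars, {"make": "Perodua", "year": 2006}))  # the saved search fires
```

In a real deployment the alert would notify the buyer (e.g. by email), and the matching could be relaxed beyond exact attribute equality.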