Abstract: This paper presents an information retrieval model for
XML documents based on tree matching. Queries and documents are
represented by extended trees. An extended tree is built from the
original tree by adding weighted virtual links between each node and
its indirect descendants, so that each descendant can be reached
directly: only one level then separates a node from any of its
indirect descendants. This makes it possible to compare the user
query and the document flexibly while respecting the structural
constraints of the query. The content of each node is essential to
decide whether a document element is relevant, so content must be
taken into account in the retrieval process. We separate the
structure-based and content-based retrieval processes. The
content-based score of each node is commonly based on the
well-known Tf × Idf criterion. In this paper, we compare this
criterion with another one we call Tf × Ief. The comparison is based
on experiments on a dataset provided by INEX, showing the
effectiveness of our approach on the one hand and of both weighting
functions on the other.
Abstract: Scheduling is a key technology in input-buffered satellite
switching systems. In this paper, a new scheduling algorithm and its
realization are proposed. Based on a crossbar switching fabric, the
algorithm adopts a serial scheduling strategy and adjusts the output
port arbitration strategy for better fairness among ports, thereby
increasing the matching probability. The algorithm can greatly
reduce the scheduling delay and the cell loss rate. Analysis and
simulation results obtained with OPNET show that the proposed
algorithm outperforms others in average delay and cell loss rate,
with equivalent complexity. On the basis of these results, a
hardware realization and simulation based on FPGA were completed,
which validate the feasibility of the new scheduling algorithm.
Abstract: Steganography, derived from Greek, literally means
“covered writing”. It encompasses a vast array of secret
communication methods that conceal the very existence of the
message. These methods include invisible inks, microdots, character
arrangement, digital signatures, covert channels, and spread
spectrum communications. This paper proposes a new, improved version
of the Least Significant Bit (LSB) method. The proposed approach is
simple to implement compared to the Pixel Value Differencing (PVD)
method, yet achieves high embedding capacity and imperceptibility.
The proposed method can also be applied to 24-bit color images,
achieving an embedding capacity much higher than that of PVD.
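The baseline LSB technique that the paper improves on can be sketched in a few lines: each pixel (or color channel byte) donates its least significant bit to the message, changing the pixel value by at most 1. This is the standard method, not the paper's improved variant.

```python
def embed_lsb(pixels, message_bits):
    """Replace the least significant bit of each byte with one message bit."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit   # clear LSB, then set it to the bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the message back from the low bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [130, 255, 17, 64, 200, 93, 8, 111]
bits  = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, bits)
print(extract_lsb(stego, 8))   # recovers [1, 0, 1, 1, 0, 0, 1, 0]
```

For a 24-bit color image the same operation applies independently to the R, G and B bytes, which is why the capacity scales with three bits per pixel at one bit per channel.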
Abstract: K-Modes is an extension of the K-Means clustering algorithm, developed to cluster categorical data, in which the mean is replaced by the mode. The similarity measure proposed by Huang is a simple matching/mismatching measure. The weights of attribute values contribute significantly to clustering; thus, in this paper we propose a new weighted dissimilarity measure for K-Modes, based on the ratio of the frequency of attribute values in the cluster to that in the data set. The new weighted measure is evaluated on data sets obtained from the UCI data repository. The results are compared with K-Modes and K-representative, and show that the new measure generates clusters with high purity.
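The frequency-ratio idea can be illustrated concretely. The sketch below is one plausible instantiation, not the paper's exact formula: a mismatching attribute costs 1, while a matching attribute costs less when the mode's value dominates the cluster relative to the whole data set.

```python
def weighted_dissimilarity(record, mode, cluster, dataset):
    """Weighted mismatch between a categorical record and a cluster mode.
    NOTE: the match weight 1 - f_cluster/(f_cluster + f_data) is an assumed
    illustration of the frequency-ratio weighting described in the abstract."""
    d = 0.0
    for j, (x, m) in enumerate(zip(record, mode)):
        if x != m:
            d += 1.0                                   # plain mismatch cost
        else:
            f_cluster = sum(r[j] == m for r in cluster) / len(cluster)
            f_data = sum(r[j] == m for r in dataset) / len(dataset)
            d += 1.0 - f_cluster / (f_cluster + f_data)
    return d

cluster = [("a", "x"), ("a", "y"), ("a", "x")]
dataset = cluster + [("b", "x"), ("b", "y")]
mode = ("a", "x")
print(weighted_dissimilarity(("a", "x"), mode, cluster, dataset))  # small
print(weighted_dissimilarity(("b", "y"), mode, cluster, dataset))  # 2.0
```

Unlike Huang's 0/1 matching measure, matches on values that are characteristic of the cluster are rewarded more than matches on globally common values, which is what drives the purity improvement the abstract reports.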
Abstract: This article is concerned with the translation of Quranic
verses into Braille symbols using a Visual Basic program. The
system has the ability to translate the special vibrations of the
Quran; this study is limited to the (Noun + Scoon) vibrations. It
builds on an existing translation system that combines a finite
state machine with left and right context matching and a set of
translation rules. This makes it possible to translate Arabic text
into Braille symbols after detecting the vibrations in the Quranic
verses.
Abstract: The task of face recognition has been actively
researched in recent years. This paper provides an up-to-date review
of major human face recognition research. We first present an
overview of face recognition and its applications. Then, a
literature review of the most recent face recognition techniques is
presented. Descriptions and limitations of the face databases used
to test the performance of these face recognition algorithms are
given. A brief summary of the Face Recognition Vendor Test (FRVT)
2002, a large-scale evaluation of automatic face recognition
technology, and its conclusions are also given. Finally, we give a
summary of the research results.
Abstract: The aim of this study was to remove the two principal
noises that disturb the surface electromyography (diaphragm) signal:
the electrocardiogram (ECG) artefact and the power line interference
artefact. The proposed algorithm is based on a Least Mean Square
(LMS) Widrow adaptive structure. Such structures require a reference
signal that is correlated with the noise contaminating the signal.
The noise references are extracted as follows: for the power line
interference, a noise reference is constructed mathematically from
two cosine functions, at 50 Hz (the fundamental) and 150 Hz (the
third harmonic); for the ECG artefact, a matching pursuit technique
combined with an LMS structure is used for estimation. Both removal
procedures are achieved without the use of supplementary electrodes.
These filtering techniques are validated on real recordings of the
surface diaphragm electromyography signal. The performance of the
proposed methods was compared with previously published research
results.
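The Widrow LMS adaptive canceller referred to above has a compact standard form: the filter output tracks the part of the primary signal that is correlated with the reference, and the error signal is the cleaned output. The sketch below is a minimal illustration of that structure for the 50 Hz interference case, with an assumed sampling rate, step size and filter length (the paper's parameters are not given in the abstract).

```python
import math

def lms_cancel(primary, reference, mu=0.01, n_taps=4):
    """Widrow LMS adaptive noise canceller.
    The adaptive filter predicts the noise in `primary` from `reference`;
    the prediction error is the cleaned signal."""
    w = [0.0] * n_taps            # filter weights
    buf = [0.0] * n_taps          # most recent reference samples
    cleaned = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))     # noise estimate
        e = d - y                                      # cleaned sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]  # LMS update
        cleaned.append(e)
    return cleaned

# Toy demo: a slow 2 Hz "EMG" component plus 50 Hz interference at fs = 1 kHz;
# the 50 Hz reference cosine lets LMS remove most of the interference.
fs, n = 1000.0, 4000
signal = [0.5 * math.sin(2 * math.pi * 2 * k / fs) for k in range(n)]
noise = [math.cos(2 * math.pi * 50 * k / fs) for k in range(n)]
primary = [s + v for s, v in zip(signal, noise)]
out = lms_cancel(primary, noise, mu=0.01, n_taps=4)
```

After convergence the residual against the clean component is far below the interference power, which is the behaviour exploited for both the power-line and (via matching pursuit references) the ECG artefact.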
Abstract: Retinal vascularity assessment plays an important role in the diagnosis of ophthalmic pathologies. The use of digital images for this purpose makes a computerized approach possible and has motivated the development of many methods for automated vascular tree segmentation. Metrics based on contingency tables for binary classification have been widely used for evaluating the performance of these algorithms; in particular, accuracy has mostly been used as the measure of global performance in this field. However, this metric matches human perception very poorly and has other notable deficiencies. Here, a new similarity function for measuring the quality of retinal vessel segmentations is proposed. It is based on characterizing the vascular tree as a connected structure with a measurable area and length. Tests indicate that this new approach behaves better than the current one. More generally, this concept of measuring descriptive properties may be used to design functions that measure the segmentation quality of other complex structures more successfully.
Abstract: The intent of this study is to evaluate the effectiveness
of a surge suppressor, consisting of an MOV and a T-type low-pass
filter, for power supplies used in automation devices in power
distribution systems. Books, journal articles and electronic sources
related to surge protection of such power supplies were consulted,
and the useful information was organized, analyzed and developed
into five parts: characteristics of the surge wave, protection
against the surge wave, impedance characteristics of the target,
simulation in Matlab of the circuit response to a 5 kV, 1.2/50 μs
surge wave, and suggestions for surge protection. The results
indicate that different load conditions have a great impact on the
effectiveness of the surge protective device. Therefore, the type
and parameters of the surge protective device need to be carefully
selected, and load matching must also be considered.
Abstract: This paper presents the application of a
signal-intensity-independent registration criterion for 2D
rigid-body registration of medical images using 1D binary
projections. The criterion is defined as the weighted ratio of two
projections. The ratio is computed on a pixel-by-pixel basis, and
weighting is performed by setting the ratios between one and zero
pixels to a standard high value. The mean squared value of the
weighted ratio is computed over the union of the one-areas of the
two projections and is minimized using Chebyshev polynomial
approximation with n = 5 points. The sum of the x and y projections
is used for translational adjustment and a 45° projection for
rotational adjustment. Twenty T1-T2 registration experiments were
performed, giving mean errors of 1.19° and 1.78 pixels. The method
is suitable for contour/surface matching. Further research is
necessary to determine the robustness of the method with regard to
threshold, shape and missing data.
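The weighted-ratio criterion can be made concrete for two 1D binary projections. In the sketch below, a 1/1 pixel pair contributes a ratio of 1, a 1/0 or 0/1 pair is replaced by an assumed penalty value (the abstract only says "a standard high value"), and the mean square is taken over the union of the one-areas, so perfect overlap yields 1.0 and misalignment inflates the score.

```python
def criterion(p, q, high=10.0):
    """Mean squared weighted ratio of two 1D binary projections.
    `high` is an assumed stand-in for the paper's 'standard high value'."""
    vals = []
    for a, b in zip(p, q):
        if a == 1 and b == 1:
            vals.append(1.0)      # ratio 1/1
        elif a == 1 or b == 1:
            vals.append(high)     # ratio between a one and a zero pixel
        # 0/0 pixels lie outside the union of the one-areas and are skipped
    return sum(v * v for v in vals) / len(vals) if vals else 0.0

aligned    = [0, 1, 1, 1, 0]
misaligned = [1, 1, 0, 0, 0]
print(criterion(aligned, aligned))      # 1.0 at perfect overlap
print(criterion(aligned, misaligned))   # much larger when shifted
```

Minimizing this score over translations (x and y projections) and rotations (the 45° projection) is then a 1D optimization per parameter, which is what makes the Chebyshev approximation with few points practical.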
Abstract: We provide a maximum-norm analysis of a finite element
Schwarz alternating method for a nonlinear elliptic boundary value
problem of the form -Δu = f(u) on two overlapping subdomains with
nonmatching grids. We consider a domain which is the union of two
overlapping subdomains, each with its own independently generated
grid. Since the two meshes are mutually independent on the overlap
region, a triangle belonging to one triangulation does not
necessarily belong to the other. Under a Lipschitz assumption on the
nonlinearity, we establish, on each subdomain, an optimal L∞ error
estimate between the discrete Schwarz sequence and the exact
solution of the boundary value problem.
Abstract: This paper describes a new approach to fingerprint
classification based on the distribution of local features (minute
details, or minutiae) of the fingerprints. The main advantage is
that fingerprint classification provides an indexing scheme to
facilitate efficient matching in a large fingerprint database. A set
of heuristic rules has been proposed. The area around the core point
is treated as the area of interest for extracting minutiae features,
since there are substantial variations around the core point
compared to areas away from it. The core point of a fingerprint is
located at the point of maximum curvature. The experimental results
report an overall average accuracy of 86.57% in fingerprint
classification.
Abstract: This paper proposes a new solution to the string matching problem. The solution constructs an inverted list representing the string pattern to be searched for, and then uses a new algorithm to process the input string in a single pass. The preprocessing phase has O(m) time complexity and O(1) space complexity, where m is the length of the pattern. The searching phase takes O(m+α) time in the average case, O(n/m) in the best case, and O(n) in the worst case, where α is the number of comparisons leading to a mismatch and n is the length of the input text.
Abstract: This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo-vision-based obstacle detection aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HSI space. For matching, we use the Normalized Cross Correlation (NCC) function with a 5 × 5 window: we prepare an empty matching table τ and grow disparity components by drawing a seed s from a seed set S, computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one that appear in the disparity map of phase two enter the third phase, depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
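The NCC matching step at the core of the second phase has a standard definition. The sketch below is a simplified 1D version (the paper uses 5 × 5 image windows): a patch from the left scan line is slid along the right scan line, and the offset maximizing the normalized cross-correlation gives the disparity.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(patch, row, w):
    """Slide `patch` along `row`; return the offset maximizing NCC."""
    return max(range(len(row) - w + 1), key=lambda o: ncc(patch, row[o:o + w]))

left  = [5, 7, 9, 4, 2, 8, 6, 3, 1, 5]
right = [0, 0] + left[:-2]          # same scene shifted by a disparity of 2
x, w = 3, 3
offset = best_match(left[x:x + w], right, w)
print(offset - x)                   # recovered disparity: 2
```

Seed growing then evaluates this score only around already-matched seeds instead of over the full disparity space, which is what makes the search semi-dense and fast.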
Abstract: This paper presents a hand vein authentication system
using fast spatial correlation of hand vein patterns. To evaluate
the system performance, a prototype was designed and a dataset was
acquired from 50 persons of both genders, aged above 16, with 10
images per person taken at different intervals: 5 images of the left
hand and 5 of the right hand. In the verification testing analysis,
we used 3 images as templates and 2 images for testing; each of the
2 test images is matched against the existing 3 templates. An FAR of
0.02% and an FRR of 3.00% were obtained at a threshold of 80, where
the system efficiency was found to be 99.95%. At the same threshold
of 80, the system can operate at a 97% genuine acceptance rate and a
99.98% genuine reject rate. The EER was 0.25% at a threshold of 77.
We verified that no similarity exists between the right and left
hand vein patterns of the same person over the acquired dataset.
Finally, this dataset of 100 distinct hand vein patterns can be
accessed by researchers and students upon request, for testing other
hand vein matching methods.
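The FAR, FRR and related rates quoted above are standard threshold-dependent statistics over match scores. The sketch below shows how they are computed; the score values are made-up toy data, not the paper's measurements.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor scores accepted (at/above threshold).
       FRR: fraction of genuine scores rejected (below threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Toy match scores (0-100 similarity); real systems use thousands of trials.
genuine  = [95, 91, 88, 84, 79]
impostor = [40, 55, 62, 70, 81]
far, frr = far_frr(genuine, impostor, threshold=80)
print(far, frr)   # 0.2 0.2
```

Sweeping the threshold trades FAR against FRR; the EER is the operating point where the two rates are equal, which is how the paper's 0.25% figure at threshold 77 is obtained.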
Abstract: Automatic face detection is a complex problem in
image processing. Many methods exist to solve this problem, such as
template matching, the Fisher Linear Discriminant, Neural Networks,
SVM, and MRC, each achieving success to varying degrees and with
varying complexity. The proposed algorithm uses upright, frontal
faces in single gray-scale images with decent resolution and good
lighting conditions. In face recognition, a single face is matched
against single faces from the training dataset. The author proposes
a neural-network-based face detection algorithm for photographs;
when new test data appears, it is checked against the online scanned
training dataset. Experimental results show that the algorithm
achieves detection accuracy of up to 95%.
Abstract: Online trading is an alternative to conventional shopping. People trade goods that are new or pre-owned. However, there are times when a user cannot find a wanted item online, because the item may not have been posted yet, thus ending the search. A conventional search mechanism only works by matching the search criteria (requirements) against the data available in a particular database. This research aims to match current search requirements with future postings, introducing the time factor into the conventional search method. A Car Matching Alert System (CMAS) prototype was developed to test the matching algorithm. When a buyer's search returns no result, the system saves the search, and the buyer is alerted if a match is found among future postings. The algorithm developed is useful, as it can also be applied in other search contexts.
Abstract: The P-Bigram method is a string comparison method based
on an internal two-character similarity measure. The edit distance
between two strings is the minimal number of elementary editing
operations required to transform one string into the other; the
elementary editing operations are deletion, insertion, and
substitution of two characters. In this paper, we apply the P-Bigram
method to solve the similarity problem for DNA sequences. The method
provides an efficient algorithm that locates all minimum operations
in a string. We implemented the algorithm and found that our program
computes smaller distances than the single-character method. We
develop the P-Bigram edit distance, show the edit distance (the
similarity), and implement it using dynamic programming. The
performance of the proposed approach is evaluated using the number
of edits and a percentage similarity measure.
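The dynamic-programming baseline that the P-Bigram distance builds on is the classical edit distance. The sketch below shows that standard single-character version (the abstract does not specify the bigram variant's exact recurrence), together with the percentage-similarity reading of the result.

```python
def edit_distance(s, t):
    """Classical DP edit distance with unit-cost insert/delete/substitute."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # (mis)match
    return d[m][n]

a, b = "ACGTTGA", "ACTTGGA"
dist = edit_distance(a, b)
similarity = 1 - dist / max(len(a), len(b))   # percentage-style similarity
print(dist)   # 2
```

A bigram-based variant scores pairs of adjacent characters instead of single characters, which is the refinement the P-Bigram method introduces for DNA sequences.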
Abstract: To illustrate the diversity of methods used to extract relevant visual data (where the concept of relevance can be defined differently for different applications), the paper discusses three groups of such methods. They have been selected from a range of alternatives to highlight how hardware and software tools can be used in complementary ways to achieve various functionalities for different specifications of “relevant data”. First, the principles of gated imaging are presented (where relevance is determined by range). The second methodology is intended for intelligent intrusion detection, while the last one is used for content-based image matching and retrieval. All methods have been developed within projects supervised by the author.
Abstract: This paper presents a new feature-based dense stereo
matching algorithm that obtains the dense disparity map via dynamic
programming. After extracting suitable features, we use matching
constraints such as the epipolar line, the disparity limit,
ordering, and a limit on the directional derivative of disparity. A
coarse-to-fine multiresolution strategy is also used to decrease the
search space and therefore increase the accuracy and processing
speed. The proposed method links the detected feature points into
chains and compares some of the feature points from different chains
to increase the matching speed. We also employ color stereo matching
to increase the accuracy of the algorithm. After feature matching,
we use dynamic programming to obtain the dense disparity map. The
approach differs from classical DP methods in stereo vision in that
it employs the sparse disparity map obtained from the feature-based
matching stage, and DP is further performed on each scan line only
between pairs of matched feature points on that scan line. Thus our
algorithm is truly an optimization method and offers a good
trade-off between accuracy and computational efficiency. According
to our experimental results, the proposed algorithm increases the
accuracy by 20 to 70% and reduces the running time by almost 70%.