Abstract: Graph decompositions are vital in the study of
combinatorial design theory. A decomposition of a graph G is a
partition of its edge set. An n-sun graph is a cycle Cn with an edge
terminating in a vertex of degree one attached to each vertex. In this
paper, we define the n-sun decomposition of some even-order graphs
with a perfect matching. We prove that the complete graph K2n, the
complete bipartite graph K2n, 2n and the Harary graph H4, 2n have
n-sun decompositions. A labeling scheme is used to construct the n-suns.
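As a concrete illustration of the definition above, an n-sun can be built as a plain adjacency map; the vertex numbering below is our own choice, not the paper's labeling scheme:

```python
# Sketch (our own construction): an n-sun is a cycle C_n with one
# pendant (degree-1) vertex attached to each cycle vertex, so it has
# 2n vertices and 2n edges.

def n_sun(n):
    """Return the n-sun as {vertex: set of neighbours}.
    Cycle vertices are 0..n-1; the pendant of cycle vertex i is n+i."""
    adj = {v: set() for v in range(2 * n)}
    for i in range(n):
        j = (i + 1) % n
        adj[i].add(j); adj[j].add(i)          # cycle edge
        adj[i].add(n + i); adj[n + i].add(i)  # pendant edge
    return adj

g = n_sun(5)
degrees = sorted(len(nbrs) for nbrs in g.values())
# pendant vertices have degree 1, cycle vertices degree 3
```

The degree sequence (five 1s and five 3s for the 5-sun) makes the "edge terminating in a vertex of degree one" part of the definition visible directly.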
Abstract: This paper examines the influence of matching
students' learning preferences with the adopted teaching
methodology on their academic performance in an accounting course,
in two types of learning environment at one university in Lebanon:
classes with PowerPoint (PPT) vs. conventional classes. Learning
preferences were either for PPT or for the conventional methodology.
A statistically significant increase in academic achievement was
found in the conventionally instructed group compared with the group
taught with PPT. This low effectiveness of PPT might be attributed to
the learning preferences of Lebanese students. In the PPT group,
better academic performance was found among students with a
learning/teaching match than among students with a learning/teaching
mismatch. Since the majority of students display a preference for the
conventional methodology, the results might suggest that Lebanese
students' performance is not optimized by PPT in accounting
classrooms, not because of PPT itself, but because it does not match
Lebanese students' learning preferences in such a quantitative
course.
Abstract: We present a structural study of an aqueous electrolyte
for which experimental results are available: a solution of the
LiCl-6H2O type in the glassy state (120 K), contrasted with pure
water at room temperature by means of Partial Distribution Functions
(PDF) derived from the neutron scattering technique. Based on these
partial functions, the Reverse Monte Carlo (RMC) method computes
radial and angular correlation functions, which allow exploring a
number of structural features of the system. The obtained curves
include some artifacts. To remedy this, we propose to introduce a
screened potential as an additional constraint. The obtained results
show a good match between experimental and computed functions and a
significant improvement in the PDF curves under the potential
constraint, suggesting an efficient fit of the pair distribution
function curves.
Abstract: A human verification system is presented in this
paper. The system consists of several steps: background subtraction,
thresholding, line connection, region growing, morphology, star
skeletonization, feature extraction, feature matching, and decision
making. The proposed system combines the advantages of star
skeletonization and simple statistical features. Correlation matching
and probability voting are used for verification, followed by a
logical operation in the decision-making stage. The proposed system
uses a small number of features, and its reliability is convincing.
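The star-skeletonization step listed above can be sketched generically: distances from the silhouette centroid to the boundary are scanned, and circular local maxima of that distance signal mark the extremities. The toy contour below is invented for illustration:

```python
# Generic star-skeleton sketch (toy contour, not real silhouette data):
# scan centroid-to-boundary distances and take circular local maxima.
from math import cos, sin, pi, hypot

# toy closed contour: 8 points around the origin, alternating far/near
radii = [3, 1, 3, 1, 3, 1, 3, 1]
contour = [(r * cos(2 * pi * i / 8), r * sin(2 * pi * i / 8))
           for i, r in enumerate(radii)]

# centroid of the boundary points
cx = sum(x for x, _ in contour) / len(contour)
cy = sum(y for _, y in contour) / len(contour)

# distance signal from the centroid to each boundary point
dist = [hypot(x - cx, y - cy) for x, y in contour]

# circular local maxima of the distance signal = skeleton extremities
extrema = [i for i in range(len(dist))
           if dist[i] > dist[i - 1] and dist[i] > dist[(i + 1) % len(dist)]]
```

For this symmetric toy contour the four far points come out as the extremities; on a real silhouette these would correspond to head and limb tips.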
Abstract: In recent years, rapid advances in software and hardware in the field of information technology, along with a digital imaging revolution in the medical domain, have facilitated the generation and storage of large collections of images by hospitals and clinics. Searching these large image collections effectively and efficiently poses significant technical challenges, and it raises the necessity of constructing intelligent retrieval systems. Content-Based Image Retrieval (CBIR) consists of retrieving the most visually similar images to a given query image from a database of images [5]. Medical CBIR applications pose unique challenges but at the same time offer many new opportunities: while one can easily understand news or sports videos, a medical image is often completely incomprehensible to untrained eyes.
Abstract: A way of generating a millimeter-wave I/Q signal using an inductive-resonator-matched poly-phase filter is suggested. Normally the poly-phase filter generates quite accurate I/Q phase and magnitude, but the loss of the filter is considerable due to the series connection of passive RC components. This loss term directly increases the system noise figure when the poly-phase filter is used in the RF front-end. The proposed matching method eliminates the above-mentioned loss and, in addition, provides gain in the passive filter. The working principle is illustrated by mathematical analysis. The generated I/Q signal is used in implementing a millimeter-wave phase shifter for the 60 GHz communication system to verify its effectiveness. The circuit is fabricated in a 90 nm TSMC RF CMOS process under a 1.2 V supply voltage. The measurement results show that the suggested method improves the gain by 6.5 dB and the noise by 2.3 dB. A summary of the proposed I/Q generation is compared with previous works.
Abstract: The literature reports a large number of approaches for
measuring the similarity between protein sequences. Most of these
approaches estimate this similarity using alignment-based techniques
that do not necessarily yield biologically plausible results, for two
reasons.
First, for the case of non-alignable (i.e., not yet definitively aligned
and biologically approved) sequences such as multi-domain, circular
permutation and tandem repeat protein sequences, alignment-based
approaches do not succeed in producing biologically plausible results.
This is due to the nature of the alignment, which is based on the
matching of subsequences in equivalent positions, while non-alignable
proteins often have similar and conserved domains in non-equivalent
positions.
Second, the alignment-based approaches lead to similarity measures
that depend heavily on the parameters set by the user for the alignment
(e.g., gap penalties and substitution matrices). For easily alignable
protein sequences, it is possible to supply a suitable combination of
input parameters that allows such an approach to yield biologically
plausible results. However, for difficult-to-align protein sequences,
supplying different combinations of input parameters yields different
results. Such variable results create ambiguities and complicate the
similarity measurement task.
To overcome these drawbacks, this paper describes a novel and
effective approach for measuring the similarity between protein
sequences, called SAF for Substitution and Alignment Free. Without
resorting either to the alignment of protein sequences or to substitution
relations between amino acids, SAF is able to efficiently detect the
significant subsequences that best represent the intrinsic properties of
protein sequences, those underlying the chronological dependencies of
structural features and biochemical activities of protein sequences.
Moreover, by using a new efficient subsequence matching scheme,
SAF more efficiently handles protein sequences that contain similar
structural features with significant meaning in chronologically
non-equivalent positions. To show the effectiveness of SAF, extensive
experiments were performed on protein datasets from different
databases, and the results were compared with those obtained by
several mainstream algorithms.
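The abstract does not spell out SAF's subsequence-detection scheme, but the general idea of substitution- and alignment-free comparison can be sketched with a simple k-mer frequency measure (our illustration, not the SAF algorithm itself):

```python
# Generic alignment-free similarity sketch (not the SAF algorithm):
# compare two protein sequences by the cosine similarity of their k-mer
# frequency vectors, which needs neither an alignment nor a
# substitution matrix, and is insensitive to domain order.
from collections import Counter
from math import sqrt

def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def alignment_free_similarity(a, b, k=3):
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    dot = sum(ca[m] * cb[m] for m in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# toy sequences: s2 is s1 with its two "domains" swapped, i.e. similar
# conserved regions in non-equivalent positions
s1 = "MKTAYIAKQRHELLO"
s2 = "HELLOMKTAYIAKQR"
```

Because most k-mers survive the domain swap, the two toy sequences still score highly, which is exactly the circular-permutation case where positional alignment struggles.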
Abstract: In this paper, two very different optimization
algorithms, Genetic and DIRECT algorithms, are used to history
match a bottomhole pressure response for a reservoir with wellbore
storage and skin with the best possible analytical model. No initial
guesses are available for the reservoir parameters. The results show
that the matching process is much faster and more accurate for the
DIRECT method than for the Genetic algorithm. It is furthermore
concluded that the DIRECT algorithm does not need any initial
guesses, whereas the Genetic algorithm needs to be tuned according to
the initial guesses.
Abstract: This paper describes an optimal approach for feature
subset selection to classify leaves, based on a Genetic Algorithm
(GA) and Kernel-Based Principal Component Analysis (KPCA). Due
to high complexity in the selection of the optimal features, the
classification has become a critical task to analyse the leaf image
data. Initially the shape, texture and colour features are extracted
from the leaf images. These extracted features are optimized through
the separate functioning of GA and KPCA. This approach performs
an intersection operation over the subsets obtained from the
optimization process. Finally, the most common matching subset is
forwarded to train the Support Vector Machine (SVM). Our
experimental results successfully prove that the application of GA
and KPCA for feature subset selection using SVM as a classifier is
computationally effective and improves the accuracy of the classifier.
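The intersection step described above can be sketched as a plain set operation; the feature indices are invented for illustration:

```python
# Sketch of the subset-intersection step (indices are illustrative,
# not from the paper): each optimizer returns a subset of feature
# indices, and only features selected by both are forwarded to the SVM.
ga_subset = {0, 2, 3, 5, 7, 8}     # e.g. indices kept by the GA
kpca_subset = {1, 2, 3, 5, 8, 9}   # e.g. indices kept by KPCA

# the most common matching subset: features chosen by both methods
common = sorted(ga_subset & kpca_subset)
# `common` would index the columns used to train the SVM classifier
```

Requiring agreement between two independent selectors is a simple way to keep only features that are robustly informative.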
Abstract: Personal name matching is at the core of essential
tasks in national citizen databases, text and web mining,
information retrieval, online library systems, e-commerce and record
linkage systems. This has necessitated all-embracing research in
the vicinity of name matching. Traditional name matching methods
are suitable for English and other Latin-based languages. Asian
languages that have no word boundary, such as Myanmar, still
require a sounds-alike matching system in Unicode-based
applications. Hence we propose a matching algorithm that produces
analogous sounds-alike (phonetic) patterns convenient for Myanmar
character spelling. According to the nature of Myanmar characters, we
consider word-boundary fragmentation and character collation.
Thus we use a pattern-conversion algorithm that fabricates word
patterns from the fragmented and collated characters. We create
Myanmar sounds-alike phonetic groups to help in the phonetic
matching. The experimental results show a fragmentation accuracy of
99.32% and a processing time of 1.72 ms.
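The paper's Myanmar phonetic groups are not given in the abstract, but the sound-alike grouping idea can be sketched generically; the groups below are invented Latin-alphabet stand-ins, not the actual Myanmar groups:

```python
# Generic sound-alike grouping sketch (groups invented for
# illustration): characters in the same phonetic group map to one
# code, so words that sound alike produce the same pattern.
groups = {"b": "1", "p": "1", "v": "2", "w": "2", "k": "3", "c": "3"}

def phonetic_pattern(word):
    """Replace each character by its phonetic-group code."""
    return "".join(groups.get(ch, ch) for ch in word.lower())

p1 = phonetic_pattern("bat")   # -> "1at"
p2 = phonetic_pattern("pat")   # -> "1at", same group as "b"
```

Matching then reduces to exact comparison of the converted patterns, which is what makes the approach fast.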
Abstract: Graph decompositions are vital in the study of combinatorial design theory. Given two graphs G and H, an H-decomposition of G is a partition of the edge set of G into disjoint isomorphic copies of H. An n-sun is a cycle Cn with an edge terminating in a vertex of degree one attached to each vertex. In this paper we have proved that the complete graph of order 2n, K2n can be decomposed into n-2 n-suns, a Hamilton cycle and a perfect matching, when n is even and for odd case, the decomposition is n-1 n-suns and a perfect matching. For an odd order complete graph K2n+1, delete the star subgraph K1, 2n and the resultant graph K2n is decomposed as in the case of even order. The method of building n-suns uses Walecki's construction for the Hamilton decomposition of complete graphs. A spanning tree decomposition of even order complete graphs is also discussed using the labeling scheme of n-sun decomposition. A complete bipartite graph Kn, n can be decomposed into n/2 n-suns when n/2 is even. When n/2 is odd, Kn, n can be decomposed into (n-2)/2 n-suns and a Hamilton cycle.
Abstract: Different methods based on biometric algorithms are
presented for eigenface representation and detection, including
face recognition, identification and verification. The theme of this
research is to manage the critical processing stages (accuracy,
speed, security and monitoring) of face activities with the
flexibility to search and edit the secure authorized database. In
this paper we implement different techniques, such as eigenface
vector reduction using texture and shape vectors for complexity
removal, while a density matching score with Face Boundary Fixation
(FBF) extracts the most likely characteristics in this media
processing content. We examine the development and performance
efficiency of the database by applying our algorithms in both the
recognition and detection phases. Our results show encouraging gains
in accuracy and security, with better achievement than a number of
previous approaches in all of the above processes.
Abstract: The rapid growth of e-Commerce services has been
significant over the past decade. However, the methods used to
verify authenticated users still depend widely on numeric
approaches. The search for other verification methods suitable for
online e-Commerce is therefore an interesting issue. In this paper,
a new online signature-verification method using angular
transformation is presented. Delay shifts existing in online
signatures are estimated by an estimation method relying on angle
representation. In the proposed signature-verification algorithm,
all components of the input signature are extracted by considering
the discontinuous break points in the stream of angular values. Then
the estimated delay shift is captured by comparison with the
selected reference signature, and the matching error is computed as
the main feature used in the verification process. The threshold
offsets are calculated from the two error characteristics of the
signature-verification problem, the False Rejection Rate (FRR) and
the False Acceptance Rate (FAR). The level of these two error rates
depends on the chosen decision threshold, whose value is set so as
to realize the Equal Error Rate (EER; FAR = FRR). The experimental
results, obtained through a simple program deployed on the Internet
to demonstrate e-Commerce services, show that the proposed method
provides 95.39% correct verifications, 7% better than a
DP-matching-based signature-verification method. In addition,
signature verification with extracted components provides more
reliable results than making the decision on the whole signature.
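The EER operating point mentioned above (FAR = FRR) can be sketched by sweeping the decision threshold over genuine and forgery matching errors; the scores below are made up for illustration, not the paper's data:

```python
# Sketch (made-up scores): sweep a decision threshold over genuine and
# forgery matching errors to find the Equal Error Rate point (FAR = FRR).

genuine = [0.10, 0.12, 0.15, 0.18, 0.22, 0.25]  # matching error, genuine
forgery = [0.30, 0.35, 0.40, 0.45, 0.50, 0.55]  # matching error, forgeries

def far_frr(threshold):
    # accept a signature when its matching error is below the threshold
    frr = sum(e >= threshold for e in genuine) / len(genuine)  # false rejects
    far = sum(e < threshold for e in forgery) / len(forgery)   # false accepts
    return far, frr

# the EER threshold minimizes |FAR - FRR| over a sweep of candidates
candidates = [t / 100 for t in range(5, 60)]
eer_t = min(candidates, key=lambda t: abs(far_frr(t)[0] - far_frr(t)[1]))
```

With these well-separated toy scores any threshold between the two score populations achieves FAR = FRR = 0; on real data the two curves cross at a nonzero rate, and that crossing is the reported EER.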
Abstract: The paper discusses the mathematics of pattern
indexing and its applications to recognition of visual patterns that are
found in video clips. It is shown that (a) pattern indexes can be
represented by collections of inverted patterns, (b) solutions to
pattern classification problems can be found as intersections and
histograms of inverted patterns and, thus, that matching of the
original patterns can be avoided.
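Claims (a) and (b) can be sketched with a toy inverted index: each feature points back to the patterns containing it, and a query is classified by voting over those inverted lists rather than matching the original patterns (the pattern encoding below is our own toy choice, not the paper's mathematics):

```python
# Toy inverted-pattern sketch: patterns as feature sets, an inverted
# index from feature to patterns, and classification by histogram
# voting over the inverted lists (no direct pattern matching).
from collections import defaultdict

patterns = {
    "walk": {"f1", "f2", "f3"},
    "run":  {"f2", "f3", "f4"},
    "jump": {"f4", "f5"},
}

inverted = defaultdict(set)          # feature -> names of patterns
for name, feats in patterns.items():
    for f in feats:
        inverted[f].add(name)

def classify(query_feats):
    votes = defaultdict(int)         # histogram over inverted lists
    for f in query_feats:
        for name in inverted.get(f, ()):
            votes[name] += 1
    return max(votes, key=votes.get)

label = classify({"f2", "f3", "f4"})
```

The query's class emerges from intersections of the inverted lists, so the original patterns are never compared directly.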
Abstract: Automatic reading of handwritten cheques is a computationally
complex process and it plays an important role in financial
risk management. Machine vision and learning provide a viable
solution to this problem. Research effort has mostly been focused
on recognizing diverse pitches of cheques and demand drafts with an
identical outline. However, most of these methods employ template
matching to localize the pitches, and such schemes could potentially
fail when applied to different types of outline maintained by the
bank. In this paper, the so-called outline problem is resolved by
a cheque information tree (CIT), which generalizes the localizing
method to extract active-region-of-entities. In addition, the weight
based density plot (WBDP) is performed to isolate text entities and
read complete pitches. Recognition is based on texture features using
neural classifiers. Legal amount is subsequently recognized by both
texture and perceptual features. A post-processing phase is invoked
to detect the incorrect readings by Type-2 grammar using the Turing
machine. The performance of the proposed system was evaluated
using cheque and demand drafts of 22 different banks. The test data
consists of a collection of 1540 leaves obtained from 10 different
account holders from each bank. Results show that this approach
can easily be deployed without significant design amendments.
Abstract: Many researchers are working on information hiding
techniques, using different ideas and areas to hide their secret
data. This paper introduces a robust technique for hiding secret
data in an image based on LSB insertion and RSA encryption. The key
idea of the proposed technique is to encrypt the secret data, convert
the encrypted data into a bit stream, and divide it into a number of
segments. The cover image is also divided into the same number of
segments. Each segment of data is compared with each segment of the
image to find the best matching segment, in order to create a new
random sequence of segments that is then inserted into the cover
image. Experimental results show that the proposed technique has a
high security level and produces better stego-image quality.
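The LSB-insertion step can be sketched on its own (segment matching and RSA encryption omitted): the ciphertext bit stream replaces the least significant bit of each cover byte, changing each pixel value by at most 1:

```python
# Sketch of LSB insertion only (segment matching and RSA omitted):
# embed a bit stream into the least significant bits of cover-image
# bytes, then read it back.

def embed(cover, bits):
    """Overwrite the LSB of the first len(bits) cover bytes."""
    assert len(bits) <= len(cover)
    stego = [(byte & ~1) | bit for byte, bit in zip(cover, bits)]
    return stego + cover[len(bits):]

def extract(stego, n_bits):
    """Read back the embedded bit stream from the LSBs."""
    return [byte & 1 for byte in stego[:n_bits]]

cover = [120, 121, 200, 37, 54, 98, 77, 10]   # toy pixel values
bits = [1, 0, 1, 1, 0, 1]                      # ciphertext bit stream
stego = embed(cover, bits)
recovered = extract(stego, len(bits))
```

Since only the LSB changes, the stego image is visually indistinguishable from the cover, which is what "better stego-image quality" trades on.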
Abstract: A new digital watermarking technique for images that
are sensitive to blocking artifacts is presented. Experimental results
show that the proposed MDCT based approach produces highly
imperceptible watermarked images and is robust to attacks such as
compression, noise, filtering and geometric transformations. The
proposed MDCT watermarking technique is applied to fingerprints
for ensuring security. The face image and demographic text data of
an individual are used as multiple watermarks. An AFIS system was
used to quantitatively evaluate the matching performance of the
MDCT-based watermarked fingerprint. The high fingerprint
matching scores show that the MDCT approach is resilient to
blocking artifacts. The quality of the extracted face and extracted text
images was computed using two human visual system metrics and
the results show that the image quality was high.
Abstract: The complexity of today's software systems makes
collaborative development necessary to accomplish tasks.
Frameworks are needed to allow developers to perform their tasks
independently yet collaboratively. Similarity detection is one of the
major issues to consider when developing such frameworks. It allows
developers to mine existing repositories when developing their own
views of a software artifact, and it is necessary for identifying the
correspondences between the views to allow merging them and
checking their consistency. Due to the importance of the
requirements specification stage in software development, this paper
proposes a framework for collaborative development of Object-
Oriented formal specifications along with a similarity detection
approach to support the creation, merging and consistency checking
of specifications. The paper also explores the impact of using
additional concepts on improving the matching results. Finally, the
proposed approach is empirically evaluated.
Abstract: A computationally simple approach of model order
reduction for single-input single-output (SISO) linear time-invariant
discrete systems modeled in the frequency domain is proposed
in this paper. The denominator of the reduced-order model is determined
using fuzzy C-means clustering while the numerator parameters are
found by matching time moments and Markov parameters of high
order system.
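For reference, the matched quantities have standard series definitions (not restated in the abstract): the Markov parameters are the coefficients of the expansion of the transfer function about z = infinity, and the expansion about z = 1 carries the time-moment information; the reduced model is chosen so that its leading coefficients agree with those of the high-order system.

```latex
% Standard definitions, assuming a discrete transfer function G(z):
% Markov parameters M_i from the expansion about z = \infty, and
% coefficients c_i about z = 1 carrying the time-moment information.
G(z) \;=\; \sum_{i=0}^{\infty} M_i \, z^{-i}
\qquad \text{and} \qquad
G(z) \;=\; \sum_{i=0}^{\infty} c_i \, (z-1)^{i}.
```

Matching the leading M_i preserves the high-frequency (initial transient) behaviour, while matching the leading c_i preserves the steady-state behaviour of the original system.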
Abstract: Motion estimation is a key problem in video
processing and computer vision. Optical-flow motion estimation can
achieve high estimation accuracy when the motion vector is small.
The three-step search algorithm can handle large motion vectors but
is not very accurate. A joint algorithm is proposed in this paper to
achieve high estimation accuracy regardless of whether the motion
vector is small or large, while keeping the computation cost much
lower than full search.
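The three-step-search half of the joint scheme can be sketched as follows; the frame contents, block size, and SAD cost function are our illustrative choices, not the paper's experimental setup:

```python
# Three-step search sketch (toy frames): the step size halves from 4
# to 2 to 1, and each step checks the 3x3 neighbourhood of the current
# best displacement under a sum-of-absolute-differences (SAD) cost.

def sad(ref, cur, bx, by, dx, dy, n):
    """SAD between the n x n block of `cur` at (bx, by) and the block
    of `ref` displaced by (dx, dy); inf if the window leaves the frame."""
    total = 0
    for y in range(n):
        for x in range(n):
            ry, rx = by + dy + y, bx + dx + x
            if not (0 <= ry < len(ref) and 0 <= rx < len(ref[0])):
                return float("inf")
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def three_step_search(ref, cur, bx, by, n):
    best = (0, 0)
    for step in (4, 2, 1):
        best = min(
            ((best[0] + sx, best[1] + sy) for sx in (-step, 0, step)
                                          for sy in (-step, 0, step)),
            key=lambda d: sad(ref, cur, bx, by, d[0], d[1], n),
        )
    return best

# reference frame: a bright 2x2 patch at (x=6, y=5); current frame: the
# same patch at (x=2, y=2), so the displacement back to the reference
# is (dx, dy) = (4, 3)
ref = [[0] * 12 for _ in range(12)]
for y in (5, 6):
    for x in (6, 7):
        ref[y][x] = 255
cur = [[0] * 12 for _ in range(12)]
for y in (2, 3):
    for x in (2, 3):
        cur[y][x] = 255
mv = three_step_search(ref, cur, 0, 0, 8)
```

Each step evaluates at most nine candidates, so the search costs at most 27 SAD evaluations versus 225 for an exhaustive full search over the same ±7 range, which is the cost saving the joint algorithm builds on.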