Abstract: Multimedia security is a highly significant area of concern. A number of papers on robust digital watermarking have been presented, but no standards have been defined so far; multimedia security therefore remains an open problem. The aim of this paper is to design a robust image-watermarking scheme that can withstand a diverse set of attacks. The proposed scheme provides a robust solution integrating image moment normalization, content-dependent watermarks, and the discrete wavelet transform. Moment normalization makes it possible to recover the watermark even under geometric attacks. Content-dependent watermarks are a powerful means of authentication, as the data is watermarked with its own features. Discrete wavelet transforms are used because they describe image features well. The proposed scheme finds application in validating identification cards and financial instruments.
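As a concrete illustration of the transform-domain embedding step, the following is a minimal sketch of additive watermark insertion in the DWT domain, assuming PyWavelets and a ±1 bit sequence stored in a NumPy array; the moment normalization and content-dependent watermark generation described above are not reproduced here.

```python
# Minimal sketch of additive DWT-domain embedding (one level, Haar).
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=0.05):
    """Additively embed a +/-1 bit sequence into the LL subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    flat = cA.ravel()                 # view into cA, modified in place
    n = min(len(watermark_bits), flat.size)
    # Spread-spectrum style embedding: scale bits by coefficient magnitude.
    flat[:n] += alpha * np.abs(flat[:n]) * watermark_bits[:n]
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```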
Abstract: The task of face recognition has been actively
researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an
overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented.
Descriptions and limitations of the face databases used to test
the performance of these face recognition algorithms are given. A
brief summary of the Face Recognition Vendor Test (FRVT) 2002, a
large-scale evaluation of automatic face recognition technology, and
its conclusions are also given. Finally, we give a summary of the research results.
Abstract: While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chroma components are normally downsampled together; in this paper we compare the results when the compression is performed by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when chrominance blue (Cb) is downsampled than when chrominance red (Cr) is downsampled, but that the peak signal-to-noise ratio is higher when chrominance red is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visible difference under either method.
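The evaluation metric and the single-chroma downsampling step can be sketched as follows, assuming an 8-bit YCbCr image with even dimensions and the availability of hats.jpg; this uses NumPy and Pillow and is an illustration, not the paper's exact JPEG pipeline.

```python
# Hedged sketch: PSNR evaluation and Cb-only downsampling on a YCbCr image.
import numpy as np
from PIL import Image

def psnr(original, compressed):
    rmse = np.sqrt(np.mean((original.astype(float) - compressed.astype(float)) ** 2))
    return 20 * np.log10(255.0 / rmse) if rmse > 0 else float('inf')

img = np.asarray(Image.open('hats.jpg').convert('YCbCr'))
y, cb, cr = img[..., 0], img[..., 1], img[..., 2]
# Downsample only Cb by 2 in each direction (low-pass by 2x2 averaging),
# then upsample back, leaving Y and Cr untouched.
cb_small = cb.reshape(cb.shape[0] // 2, 2, cb.shape[1] // 2, 2).mean(axis=(1, 3))
cb_up = np.repeat(np.repeat(cb_small, 2, axis=0), 2, axis=1).astype(np.uint8)
recon = np.stack([y, cb_up, cr], axis=-1)
print('PSNR (Cb-only downsampling):', psnr(img, recon))
```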
Abstract: One important problem in today's organizations is the existence of non-integrated information systems, inconsistency, and the lack of suitable correlation between legacy and modern systems. One main solution is to transfer the local databases into a global one. In this regard, we need to extract the data structures from the legacy systems and integrate them with the new-technology systems. In legacy systems, huge amounts of data are stored in legacy databases. These require particular attention, since considerable effort is needed to normalize, reformat, and move them to modern database environments. Designing the new integrated (global) database architecture and applying reverse engineering require data normalization. This paper proposes the use of database reverse engineering in order to integrate legacy and modern databases in organizations. The suggested approach consists of methods and techniques for generating the data transformation rules needed for data structure normalization.
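To illustrate what a generated data transformation rule might look like, here is a minimal sketch assuming a hypothetical legacy record whose non-atomic NAME_ADDR field must be split into normalized columns during migration; all field names are illustrative, not taken from the paper.

```python
# Hedged sketch of one data transformation rule for legacy migration.
def transform_legacy_record(record):
    # Split the non-atomic legacy field into normalized attributes.
    name, _, address = record['NAME_ADDR'].partition(';')
    return {'name': name.strip(), 'address': address.strip(),
            'legacy_id': record['ID']}

print(transform_legacy_record({'ID': 42, 'NAME_ADDR': 'Ada Lovelace; 12 Main St'}))
```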
Abstract: In this paper we present a novel approach to human body configuration estimation based on the silhouette. We propose to address this problem within a Bayesian framework. We use an effective model-based MCMC (Markov Chain Monte Carlo) method to solve the configuration problem, in which the best configuration is defined as the MAP (maximum a posteriori) estimate in the Bayesian model. This model-based MCMC uses the human body model to drive the MCMC sampling over the solution space. It converts the original high-dimensional space into a restricted subspace constructed from the human model and uses a hybrid sampling algorithm. We choose an explicit human model and carefully select the likelihood functions to represent the best configuration solution. The experiments show that this method obtains accurate configurations in little time for different humans viewed from multiple views.
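The MCMC search for a MAP configuration can be sketched with a plain Metropolis-Hastings loop, as below; the pose vector, prior, and silhouette likelihood are placeholders, not the paper's actual body model.

```python
# Hedged sketch of MCMC (Metropolis-Hastings) search for a MAP estimate.
import numpy as np

def log_posterior(pose):
    # Placeholder posterior: in the paper this would combine a body-model
    # prior with a silhouette-matching likelihood.
    return -0.5 * np.sum(pose ** 2)

def mh_map_search(dim=10, iters=5000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pose = rng.standard_normal(dim)
    lp = log_posterior(pose)
    best, best_lp = pose.copy(), lp
    for _ in range(iters):
        proposal = pose + step * rng.standard_normal(dim)
        lp_new = log_posterior(proposal)
        if np.log(rng.random()) < lp_new - lp:      # accept/reject step
            pose, lp = proposal, lp_new
            if lp > best_lp:
                best, best_lp = pose.copy(), lp
    return best                                      # approximate MAP pose

print(mh_map_search()[:3])
```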
Abstract: In any distributed system, process scheduling plays a vital role in determining the efficiency of the system. Process scheduling algorithms are used to ensure that the components of the system maximize their utilization and complete all assigned processes within a specified period of time. This paper focuses on the development of a comparative simulator for distributed process scheduling algorithms. The objectives of the work carried out include the development of the comparative simulator, as well as a comparative study of three distributed process scheduling algorithms: sender-initiated, receiver-initiated, and hybrid sender-receiver-initiated algorithms. The comparative study was done based on the Average Waiting Time (AWT) and Average Turnaround Time (ATT) of the processes involved. The simulation results show that the performance of the algorithms depends on the number of nodes in the system.
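The two comparison metrics are straightforward to compute; a minimal sketch follows, assuming each simulated process records its arrival, start, and finish times (field names are illustrative).

```python
# Minimal sketch of the AWT and ATT metrics used in the comparison.
def average_waiting_time(processes):
    return sum(p['start'] - p['arrival'] for p in processes) / len(processes)

def average_turnaround_time(processes):
    return sum(p['finish'] - p['arrival'] for p in processes) / len(processes)

procs = [{'arrival': 0, 'start': 2, 'finish': 7},
         {'arrival': 1, 'start': 4, 'finish': 9}]
print(average_waiting_time(procs), average_turnaround_time(procs))  # 2.5 7.5
```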
Abstract: In synchronized games players make their moves simultaneously
rather than alternately. Synchronized Triomineering
and Synchronized Tridomineering are respectively the synchronized
versions of Triomineering and Tridomineering, two variants of a
classic two-player combinatorial game called Domineering. Experimental
results for small m × n boards (with m + n ≤ 12 for
Synchronized Triomineering and m + n ≤ 10 for Synchronized
Tridomineering) and some theoretical results for general k × n boards (with k = 3, 4, 5 for Synchronized Triomineering and k = 3 for Synchronized Tridomineering) are presented, and directions for future research are indicated.
Abstract: In this paper, we present an approach to soccer video editing using multimodal annotation. We propose to associate with each video sequence of a soccer match a textual document to be used for further exploitation such as search, browsing, and abstract generation. The textual document contains video metadata, match metadata, and match data. This document, generated automatically while the video is analyzed, segmented, and classified, can be enriched semi-automatically according to the user type and/or a specialized recommendation system.
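The kind of textual document associated with a sequence could look like the following sketch; the field names and values are illustrative assumptions, not the paper's actual schema.

```python
# Hedged sketch of a per-sequence annotation document (illustrative schema).
import json

annotation = {
    'video_meta': {'file': 'match_042.mpg', 'codec': 'MPEG-2', 'fps': 25},
    'match_meta': {'teams': ['A', 'B'], 'date': '2008-05-10'},
    'match_data': [
        {'sequence': 17, 'start': '00:12:31', 'end': '00:12:58',
         'class': 'goal', 'enrichment': ['replay', 'crowd reaction']},
    ],
}
print(json.dumps(annotation, indent=2))
```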
Abstract: We introduce an effective approach for automatic offline authentication of handwritten samples in which the forgeries are skillfully done, i.e., the genuine and forged samples look almost alike. The subtle temporal details used in online verification are not available offline and are also hard to recover robustly. Thus spatial dynamic information such as pen-tip pressure characteristics is considered, with emphasis on the extraction of low-density pixels. These points result from the ballistic rhythm of a genuine signature, which a forgery, however skillful, always lacks. Ten effective features, including these low-density points and the density ratio, are proposed to distinguish a genuine sample from a forgery. An adaptive decision criterion is also derived for better verification judgments.
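One way to realize the low-density-pixel idea is to threshold a grayscale signature image, as in the sketch below; the threshold values are illustrative assumptions, not the paper's calibrated parameters.

```python
# Hedged sketch of extracting low-density (faint, low pen-pressure) pixels.
import numpy as np

def low_density_ratio(gray, ink_thresh=200, dark_thresh=100):
    """Fraction of stroke pixels that are faint (low pressure)."""
    ink = gray < ink_thresh                 # all stroke pixels
    faint = ink & (gray >= dark_thresh)     # light strokes only
    return faint.sum() / max(ink.sum(), 1)  # the density-ratio feature
```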
Abstract: In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from RGB to the YST color space, which is well suited to watermarking color media. Each channel is divided into non-overlapping 4×4 blocks, and each of their 2×2 sub-blocks is selected. The embedding space is created by setting the two LSBs of each selected sub-block to zero; this space holds the authentication and recovery information. For verification, authentication and parity bits, denoted 'a' and 'p', are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded using six to eight bits, depending on the channel selected. The size of the sub-block is important for correct localization and fast computation. For watermark distribution, a 2D Torus Automorphism is implemented using a private key to obtain a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable both subjectively and objectively. Our scheme is oblivious, correctly localizes tampering, and is able to recover the original work with probability close to one.
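The keyed block scattering can be illustrated with the standard 2D Torus Automorphism map from the watermarking literature, sketched below under the assumption that k is the private key and N the number of blocks per side.

```python
# Hedged sketch of the 2D Torus Automorphism block mapping.
def torus_automorphism(x, y, k, N):
    """One iteration of the keyed map [[1, 1], [k, k + 1]] mod N."""
    return (x + y) % N, (k * x + (k + 1) * y) % N

# Example: where block (3, 5) of a 64x64 block grid maps with key k = 7.
print(torus_automorphism(3, 5, k=7, N=64))
```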
Abstract: Heterogeneity has to be taken into account when integrating a set of existing information sources into a distributed information system; such systems are nowadays often based on Service-Oriented Architectures (SOA). This is also particularly applicable to distributed services such as event monitoring, which are useful in the context of Event-Driven Architectures (EDA) and Complex Event Processing (CEP). Web services deal with this heterogeneity at a technical level, but provide little support for event processing. Our
central thesis is that such a fully generic solution cannot provide
complete support for event monitoring; instead, source specific
semantics such as certain event types or support for certain event
monitoring techniques have to be taken into account. Our core result
is the design of a configurable event monitoring (Web) service that
allows us to trade genericity for the exploitation of source specific
characteristics. It thus delivers results for the areas of SOA, Web
services, CEP and EDA.
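The trade-off between genericity and source-specific semantics can be sketched as a monitoring service whose event types and detection techniques are supplied per source; the interface and names below are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of a configurable event monitoring service interface.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class EventMonitor:
    # Source-specific configuration: supported event types and the
    # detection technique that handles each of them.
    detectors: Dict[str, Callable[[dict], bool]] = field(default_factory=dict)
    subscribers: List[Callable[[dict], None]] = field(default_factory=list)

    def configure(self, event_type, detector):
        self.detectors[event_type] = detector

    def publish(self, event):
        detector = self.detectors.get(event.get('type'))
        if detector and detector(event):
            for notify in self.subscribers:
                notify(event)

monitor = EventMonitor()
monitor.configure('threshold_exceeded', lambda e: e['value'] > 100)
monitor.subscribers.append(lambda e: print('alert:', e))
monitor.publish({'type': 'threshold_exceeded', 'value': 140})
```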
Abstract: This paper proposes a method, combining color and layout features, for identifying documents captured with low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented by an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and represented hierarchically. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures in order to narrow the search space to documents having a similar color distribution, and then selects the document having the most similar layout structure in the remaining search space. Finally, our experiment considers slide documents, which are often captured using handheld devices.
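Representing a density surface by an equivalent ellipse is a standard second-order moment technique; the sketch below shows one such computation, assumed from the literature rather than taken verbatim from the paper.

```python
# Hedged sketch: equivalent ellipse of a 2D density map via image moments.
import numpy as np

def equivalent_ellipse(density):
    """Return centroid, axis lengths (2 sigma), and orientation angle."""
    ys, xs = np.mgrid[0:density.shape[0], 0:density.shape[1]]
    m = density.sum()
    cx, cy = (xs * density).sum() / m, (ys * density).sum() / m
    mu20 = ((xs - cx) ** 2 * density).sum() / m
    mu02 = ((ys - cy) ** 2 * density).sum() / m
    mu11 = ((xs - cx) * (ys - cy) * density).sum() / m
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis direction
    return (cx, cy), 2 * np.sqrt(eigvals), angle
```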
Abstract: This paper presents a novel iris recognition system using the 1D log-polar Gabor wavelet and Euler numbers. The 1D log-polar Gabor wavelet is used to extract textural features, and Euler numbers are used to extract topological features of the iris. The proposed decision strategy uses these features to authenticate an individual's identity while maintaining a low false rejection rate. The algorithm was tested on the CASIA iris image database and found to perform better than existing approaches, with an overall accuracy of 99.93%.
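The topological feature, the Euler number (number of connected components minus number of holes), can be computed directly on a binarized pattern; a minimal sketch follows, assuming scikit-image, while the Gabor feature extraction is not shown.

```python
# Hedged sketch: Euler number of a binary pattern with scikit-image.
import numpy as np
from skimage.measure import euler_number

pattern = np.array([[1, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 1, 0],
                    [0, 0, 0, 1]], dtype=bool)
# One ring with one hole plus an isolated pixel: 2 objects - 1 hole = 1.
print(euler_number(pattern, connectivity=1))
```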
Abstract: In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient, improving recognition results, and outperforming the common DCT technique of face recognition. In pattern recognition, the discriminative information of an image increases with resolution only up to a certain extent; consequently, face recognition results change with face image resolution and are optimal at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that provides the best recognition results. Owing to the computational speed and feature extraction potential of the Discrete Cosine Transform (DCT), the DCT is then applied to the face image. A subset of low- to mid-frequency DCT coefficients that represents the face adequately and provides the best recognition results is retained. A tradeoff between the decimation factor, the number of DCT coefficients retained, and the recognition rate is obtained with minimum computation. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, including the ORL, Yale, and EME color databases.
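The double dimension reduction can be sketched as decimation followed by retention of a low/mid-frequency DCT block, as below; the decimation factor and coefficient count are illustrative, not the paper's tuned tradeoff.

```python
# Hedged sketch: decimate the face image, then keep a low-frequency
# block of 2D DCT coefficients as the feature vector.
import numpy as np
from scipy.fft import dctn

def face_feature_vector(face, decimation=2, keep=16):
    small = face[::decimation, ::decimation].astype(float)  # simple decimation
    coeffs = dctn(small, norm='ortho')                      # 2D DCT
    return coeffs[:keep, :keep].ravel()                     # low/mid block

face = np.random.rand(64, 64)           # stand-in for a grayscale face image
print(face_feature_vector(face).shape)  # (256,)
```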
Abstract: Many measures have been proposed for machine translation evaluation (MTE), while little research has been done on the performance of the MTE methods themselves. This paper is an effort toward MTE performance analysis. A general framework is proposed for describing the MTE measure and the test suite, covering: whether the automatic measure is consistent with human evaluation, whether results from different measures or test suites are consistent, whether the content of the test suite is suitable for performance evaluation, the degree of difficulty of the test suite and its influence on the MTE, the relationship between the significance of MTE results and the size of the test suite, etc. To clarify the framework, several experimental results are analyzed, relating human evaluation, BLEU evaluation, and typological MTE. A visualization method is introduced for better presentation of the results. The study aims to aid test suite construction and method selection in MTE practice.
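One consistency check from the framework, agreement between an automatic measure and human evaluation, reduces to a correlation over the test suite; the sketch below uses hypothetical per-system scores, not the paper's data.

```python
# Minimal sketch: correlating an automatic MTE measure with human scores.
import numpy as np

bleu_scores = np.array([0.21, 0.34, 0.28, 0.41])   # hypothetical systems
human_scores = np.array([2.9, 3.6, 3.1, 4.0])      # matching human ratings
r = np.corrcoef(bleu_scores, human_scores)[0, 1]   # Pearson correlation
print(f'BLEU/human consistency: r = {r:.3f}')
```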
Abstract: Although face detection is not a recent activity in the field of image processing, it is still an open area of research. A major step in this field is the work reported by Viola, and its recent analogue is the work of Huang et al. Both use similar features and a similar training process. The former detects only upright faces, but the latter can detect multi-view faces in still grayscale images using new features called 'sparse features'. Finding these features with the previously proposed methods is very time-consuming and inefficient. Here, we propose a new approach for finding sparse features using a genetic algorithm. This method requires less computational cost and finds more effective features during the learning process for face detection, which yields higher accuracy.
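The genetic search can be sketched with a standard selection/crossover/mutation loop over bit-string chromosomes, as below; the fitness function is a placeholder for the detector's training performance, and the encoding of sparse features is illustrative.

```python
# Hedged sketch of a genetic algorithm searching for effective features.
import numpy as np

rng = np.random.default_rng(0)

def fitness(chromosome):
    # Placeholder: reward chromosomes close to a hidden target pattern.
    target = np.arange(chromosome.size) % 2
    return (chromosome == target).mean()

def evolve(pop_size=30, genes=24, generations=50, mutation=0.02):
    pop = rng.integers(0, 2, (pop_size, genes))
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]    # selection
        cut = rng.integers(1, genes)                          # crossover point
        children = np.vstack([np.concatenate([a[:cut], b[cut:]])
                              for a, b in zip(parents, np.roll(parents, 1, 0))])
        pop = np.vstack([parents, children])
        pop ^= (rng.random(pop.shape) < mutation).astype(pop.dtype)  # mutation
    return max(pop, key=fitness)

print(fitness(evolve()))
```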
Abstract: In this paper, we present a system for content-based retrieval from a large database of classified satellite images, based on the user's relevance feedback (RF). In our proposed system, each satellite image scene is divided into small subimages, which are stored in the database. A modified radial basis function neural network plays an important role in clustering the database subimages according to the Euclidean distance between the query feature vector and the feature vectors of the other subimages. The advantage of using the RF technique in such queries is demonstrated by analyzing the database retrieval results.
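The distance-based retrieval step can be sketched as plain Euclidean ranking of feature vectors, as below; the RBF clustering and the feedback loop of the paper are not reproduced.

```python
# Minimal sketch: rank database subimages by distance to the query vector.
import numpy as np

def rank_by_distance(query, db_features, top_k=5):
    dists = np.linalg.norm(db_features - query, axis=1)   # Euclidean distance
    return np.argsort(dists)[:top_k]                      # best matches first

db = np.random.rand(1000, 32)      # stand-in subimage feature vectors
print(rank_by_distance(db[0], db))
```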
Abstract: An important step in studying the statistics of fingerprint minutiae features is to reliably extract those features from fingerprint images. A new, reliable computation method for minutiae feature extraction from fingerprint images is presented. A fingerprint image is treated as a textured image, and an orientation flow field of the ridges is computed for it. To locate ridges accurately, a new ridge-orientation-based computation method is proposed. After ridge segmentation, a new computation method is proposed for smoothing the ridges. The ridge skeleton image is obtained and then smoothed using morphological operators before feature detection. A post-processing stage eliminates a large number of false features from the detected set of minutiae. The detected features are observed to be reliable and accurate.
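Once a ridge skeleton is available, minutiae are commonly detected with the crossing-number method; the sketch below shows that standard technique, assumed from the literature rather than taken from the paper.

```python
# Hedged sketch: crossing-number minutiae detection on a binary skeleton.
import numpy as np

def crossing_number(skel, i, j):
    """Half the sum of absolute differences around the 8-neighborhood."""
    n = [skel[i - 1, j - 1], skel[i - 1, j], skel[i - 1, j + 1],
         skel[i, j + 1], skel[i + 1, j + 1], skel[i + 1, j],
         skel[i + 1, j - 1], skel[i, j - 1]]
    return sum(abs(int(n[k]) - int(n[(k + 1) % 8])) for k in range(8)) // 2

def find_minutiae(skel):
    endings, bifurcations = [], []
    for i in range(1, skel.shape[0] - 1):
        for j in range(1, skel.shape[1] - 1):
            if skel[i, j]:
                cn = crossing_number(skel, i, j)
                if cn == 1:
                    endings.append((i, j))       # ridge ending
                elif cn == 3:
                    bifurcations.append((i, j))  # ridge bifurcation
    return endings, bifurcations
```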
Abstract: Workload and resource management are two essential functions provided at the service level of the grid software infrastructure. To improve the global throughput of these software environments, workloads have to be evenly scheduled among the available resources. To realize this goal, several load balancing strategies and algorithms have been proposed. Most strategies were developed with homogeneous sets of sites in mind, linked by homogeneous, fast networks. For computational grids, however, we must address major new issues, namely heterogeneity, scalability, and adaptability. In this paper, we propose a layered algorithm which achieves dynamic load balancing in grid computing. Based on a tree model, our algorithm presents the following main features: (i) it is layered; (ii) it supports heterogeneity and scalability; and (iii) it is totally independent of any physical architecture of the grid.
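One balancing step in a tree model can be sketched as a parent redistributing its children's load in proportion to their capacities, which is how heterogeneity can be supported; this is a simplification under stated assumptions, not the paper's full layered algorithm.

```python
# Hedged sketch of one tree-level rebalancing step over heterogeneous nodes.
def balance_children(children):
    """children: list of dicts with 'load' and 'capacity' (heterogeneous)."""
    total_load = sum(c['load'] for c in children)
    total_cap = sum(c['capacity'] for c in children)
    for c in children:
        # Target share proportional to capacity.
        c['load'] = total_load * c['capacity'] / total_cap
    return children

print(balance_children([{'load': 90, 'capacity': 1},
                        {'load': 10, 'capacity': 3}]))  # -> 25.0 and 75.0
```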
Abstract: In this paper, a robust digital image watermarking scheme for copyright protection applications using the singular value decomposition (SVD) is proposed. In this scheme, an entropy masking model is applied to the host image for texture segmentation. Moreover, the local luminance and texture of the host image are considered in the watermark embedding procedure to increase the robustness of the watermarking scheme. In contrast to existing SVD-based watermarking systems, which have been designed to embed visual watermarks, our system uses a pseudo-random sequence as the watermark. We have tested the performance of our method using a wide variety of image processing attacks on different test images. A comparison is made between the results of our proposed algorithm and those of a wavelet-based method to demonstrate the superior performance of our algorithm.
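Embedding a keyed pseudo-random sequence in the SVD domain can be sketched as a perturbation of a block's singular values, as below; the entropy masking and luminance/texture adaptation described above are not shown.

```python
# Hedged sketch: additive SVD-domain embedding of a pseudo-random sequence.
import numpy as np

def svd_embed(block, key=0, alpha=0.02):
    u, s, vt = np.linalg.svd(block.astype(float), full_matrices=False)
    rng = np.random.default_rng(key)            # keyed pseudo-random watermark
    w = rng.choice([-1.0, 1.0], size=s.shape)
    s_marked = s * (1 + alpha * w)              # perturb singular values
    return u @ np.diag(s_marked) @ vt

marked = svd_embed(np.random.rand(8, 8), key=42)
```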