Abstract: Many-core GPUs provide high computing ability and
substantial bandwidth; however, optimizing irregular applications
such as SpMV on GPUs remains a difficult but worthwhile task. In
this paper, we propose a novel method to improve the performance
of SpMV on GPUs. A new storage format called HYB-R is proposed to
exploit the GPU architecture more efficiently. While creating the
HYB-R format, the COO portion of the matrix is partitioned
recursively into an ELL portion and a COO portion, so that as many
non-zeros as possible are stored in ELL format. How to partition
the matrix is an important problem for the HYB-R kernel, so we
also tune the partitioning parameters for higher performance.
Experimental results show that our method outperforms the fastest
kernel (HYB) in NVIDIA's SpMV library, with speedups of up to 17%.
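The recursive ELL/COO split described above can be sketched as follows. This is a hypothetical illustration only: the function name, the 70%-coverage width heuristic and the recursion depth are our assumptions, not the paper's actual parameters.

```python
# Hypothetical sketch of the recursive HYB-R partitioning idea.
def hyb_r_partition(rows, depth=0, max_depth=3):
    """Split COO entries into per-level ELL slabs and a residual COO part.

    rows: dict mapping row index -> list of (col, val) entries.
    At each level an ELL width is chosen so most rows fit entirely;
    the overflow entries are partitioned again, recursively.
    """
    if not rows or depth == max_depth:
        # Flatten whatever is left into plain COO triples.
        return [], [(r, c, v) for r, ents in rows.items() for c, v in ents]
    lengths = sorted(len(ents) for ents in rows.values())
    width = lengths[int(0.7 * (len(lengths) - 1))]  # cover ~70% of rows fully
    ell, overflow = [], {}
    for r, ents in rows.items():
        ell.append((r, ents[:width]))   # ELL slab (short rows would be padded)
        if len(ents) > width:
            overflow[r] = ents[width:]  # spills to the next level
    sub_ell, coo = hyb_r_partition(overflow, depth + 1, max_depth)
    return [ell] + sub_ell, coo
```

The recursion terminates either when no overflow remains or when the depth limit is reached, at which point the remainder stays in COO form.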
Abstract: This paper introduces a technique of distortion
estimation in image watermarking using Genetic Programming (GP).
The distortion is estimated by considering the problem of obtaining a
distorted watermarked signal from the original watermarked signal as
a function regression problem. This function regression problem is
solved using GP, where the original watermarked signal is
considered as an independent variable. The GP-based distortion
estimation scheme is evaluated against Gaussian and JPEG
compression attacks. Gaussian attacks of different strengths are
applied by varying the standard deviation, and JPEG compression
attacks of varying severity are used as well. Experimental
results demonstrate that the proposed technique is able to detect
the watermark even under strong distortions and is more robust
against attacks.
Abstract: Distance visualization of large datasets often takes the direction of remote viewing and zooming of stored static images. However, the continuous increase in dataset size and in visualization operations leads to insufficient performance on traditional desktop computers. Additionally, visualization techniques such as isosurface extraction depend on the resources available on the running machine and on the size of the datasets. Moreover, the continuous demand for computing power and the continuous growth of datasets result in an urgent need for a grid computing infrastructure. However, some issues arise in current grids, such as the limited resource availability at client machines, which is not sufficient to process large datasets. On top of that, different output devices and different network bandwidths between the components of the visualization pipeline often produce output suitable for one machine but not for another. In this paper we investigate how grid services can be used to support remote visualization of large datasets and to break the constraint of physical co-location of resources by applying grid computing technologies. We present our grid-enabled architecture for visualizing large medical datasets (circa 5 million polygons) for remote interactive visualization on clients with modest resources.
Abstract: The explosive growth of World Wide Web has posed
a challenging problem in extracting relevant data. Traditional web
crawlers focus only on the surface web while the deep web keeps
expanding behind the scenes. Deep web pages are created
dynamically as a result of queries posed to specific web databases.
The structure of the deep web pages makes it impossible for
traditional web crawlers to access deep web contents. This paper
presents Deep iCrawl, a novel vision-based approach for extracting
data from the deep web. Deep iCrawl splits the process into two
phases. The first phase includes Query analysis and Query translation
and the second covers vision-based extraction of data from the
dynamically created deep web pages. There are several established
approaches for the extraction of deep web pages but the proposed
method aims at overcoming the inherent limitations of the former.
This paper also aims at comparing the data items and presenting them
in the required order.
Abstract: This paper details the application of a genetic
programming framework for induction of useful classification rules
from a database of income statements, balance sheets, and cash flow
statements for North American public companies. Potentially
interesting classification rules are discovered. Anomalies in the
discovery process merit further investigation of the application of
genetic programming to the dataset for the problem domain.
Abstract: A new approach for the improvement of coding gain
in channel coding using Advanced Encryption Standard (AES) and
Maximum A Posteriori (MAP) algorithm is proposed. This new
approach uses the avalanche effect of block cipher algorithm AES
and the soft output values of the MAP decoding algorithm. The
performance of the proposed approach is evaluated in the presence
of Additive White Gaussian Noise (AWGN). Computer simulation
results are included to verify the proposed approach.
Abstract: Mobile Ad hoc networks (MANETs) are collections
of wireless mobile nodes dynamically reconfiguring and collectively
forming a temporary network. Such networks assume no fixed
infrastructure and are often useful in battlefield
tactical operations or emergency search-and-rescue type of
operations where fixed infrastructure is neither feasible nor practical.
They also find use in ad hoc conferences, campus networks and
commercial recreational applications carrying multimedia traffic. All
of the above applications of MANETs require guaranteed levels of
performance as experienced by the end-user. This paper focuses on
key challenges in provisioning predetermined levels of such Quality
of Service (QoS). It also identifies functional areas where QoS
models are currently defined and used. Evolving functional areas
where performance and QoS provisioning may be applied are also
identified and some suggestions are provided for further research in
this area. Although each of the above functional areas has been
discussed separately in recent research studies, these QoS
functional areas are highly correlated and interdependent, so a
comprehensive and comparative analysis of the areas and their
interrelationships is desirable. In this paper we attempt to
provide such an overview.
Abstract: In this paper we introduce the notion of a protein interaction network. This is a graph whose vertices are the protein's amino acids and whose edges are the interactions between them. Using a graph-theoretic approach, we observe that nodes interact differently according to their structural roles. By applying community structure detection, we confirm this specific behavior and describe the composition of the communities, to finally propose a new approach to fold a protein interaction network.
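As a rough illustration of such a network, amino acids can be taken as vertices with an edge between residues whose representative atoms lie within a cutoff distance. The cutoff value and the C-alpha representation below are common conventions assumed for illustration, not necessarily the paper's construction.

```python
import math

def contact_network(ca_coords, cutoff=7.0):
    """Build a residue interaction graph from C-alpha coordinates.

    ca_coords: list of (x, y, z) tuples, one per residue.
    Returns an adjacency dict: residue index -> set of neighbour indices.
    """
    n = len(ca_coords)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            # Connect residues closer than the contact cutoff (in angstroms).
            if math.dist(ca_coords[i], ca_coords[j]) <= cutoff:
                adj[i].add(j)
                adj[j].add(i)
    return adj
```

Community detection algorithms (e.g. modularity-based methods) can then be run on this adjacency structure.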
Abstract: Quality evaluation of an image is an important task in image processing applications. In the case of image compression, the quality of the decompressed image is also a criterion for evaluating a given coding scheme. In the compression-decompression process, various artifacts such as blocking, blur, and ringing (edge) artifacts are observed. However, quantification of these artifacts is a difficult task. We propose here a novel method to quantify blur and ringing artifacts in an image.
Abstract: This paper presents a text clustering system developed based on a k-means type subspace clustering algorithm to cluster large, high-dimensional and sparse text data. In this algorithm, a new step is added to the k-means clustering process to automatically calculate the weights of keywords in each cluster, so that the important words of a cluster can be identified by their weight values. For understanding and interpretation of the clustering results, a few keywords that best represent the semantic topic are extracted from each cluster. Two methods are used to extract the representative words. The candidate words are first selected according to the weights calculated by our new algorithm. Then, the candidates are fed to WordNet to identify the set of noun words and to consolidate synonyms and hyponyms. Experimental results have shown that the clustering algorithm is superior to other subspace clustering algorithms, such as PROCLUS and HARP, and to the k-means type algorithm Bisecting-KMeans. Furthermore, the word extraction method is effective in selecting words that represent the topics of the clusters.
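The idea of weighting keywords by how consistently a cluster uses them can be sketched as follows. The exponential dispersion weighting and the parameter h are illustrative assumptions in the spirit of entropy-weighting subspace k-means, not the paper's exact formula.

```python
import math

def cluster_keyword_weights(cluster_docs, h=1.0):
    """Assign a weight to each term in one cluster.

    cluster_docs: list of term-frequency dicts, one per document.
    Terms used consistently across the cluster (low dispersion around
    the cluster centroid) receive higher weight.
    """
    vocab = set().union(*cluster_docs)
    n = len(cluster_docs)
    center = {t: sum(d.get(t, 0) for d in cluster_docs) / n for t in vocab}
    disp = {t: sum((d.get(t, 0) - center[t]) ** 2 for d in cluster_docs)
            for t in vocab}
    expo = {t: math.exp(-disp[t] / h) for t in vocab}  # low dispersion -> high weight
    z = sum(expo.values())
    return {t: expo[t] / z for t in vocab}             # normalized to sum to 1
```

The highest-weighted terms per cluster would then be the candidates passed to WordNet for filtering.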
Abstract: Intelligent Video-Surveillance (IVS) systems are
becoming more and more popular in security applications. The
analysis and recognition of abnormal behaviours in a video
sequence has gradually drawn attention in the field of IVS, since
it allows a large amount of useless information to be filtered
out, which ensures high efficiency in security protection and
saves considerable human and material resources. We present in
this paper ADABeV, an intelligent video-surveillance framework for
event recognition in crowded scenes to detect abnormal human
behaviour. This framework is intended to achieve real-time
alarming, reducing the lags of traditional monitoring systems. The
proposed architecture addresses four main challenges: behaviour
understanding in crowded scenes, hard lighting conditions,
multiple kinds of input sensors and contextual-based adaptability
to recognize the active context of the scene.
Abstract: E-mail has become an important means of electronic
communication, but the viability of its usage is marred by
Unsolicited Bulk e-mail (UBE) messages. UBE comes in many types,
such as pornographic, virus-infected and 'cry-for-help' messages,
as well as fake and fraudulent offers for jobs, winnings and
medicines. UBE poses technical and socio-economic challenges to
the usage of e-mail. To meet this challenge and combat this
menace, we need to understand UBE. Towards this end, the current
paper presents a content-based textual analysis of nearly 3000
winnings-announcing UBE messages. Technically, this is an
application of text parsing and tokenization to un-structured
textual documents, which we approach using Bag Of Words (BOW) and
Vector Space Document Model techniques. We have attempted to
identify the most frequently occurring lexis in the
winnings-announcing UBE documents, and an analysis of the top 100
such lexis is presented. We exhibit the relationship between the
occurrence of a word from the identified lexis set in a given UBE
and the probability that the given UBE announces fake winnings. To
the best of our knowledge and our survey of the related
literature, this is the first formal attempt to identify the most
frequently occurring lexis in winnings-announcing UBE through
textual analysis. Finally, this is a sincere attempt to raise
alertness against, and mitigate the threat of, such luring but
fake UBE.
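A minimal bag-of-words frequency count of the kind described can be sketched as follows; the tokenizer and the absence of stop-word removal are our assumptions, since the abstract does not specify them.

```python
from collections import Counter
import re

def top_lexis(ube_messages, k=100):
    """Tokenize a corpus of UBE messages and return the k most frequent
    words with their counts (a minimal bag-of-words sketch)."""
    counts = Counter()
    for msg in ube_messages:
        # Lowercase and keep alphabetic tokens (apostrophes allowed).
        counts.update(re.findall(r"[a-z']+", msg.lower()))
    return counts.most_common(k)
```

Per-word occurrence counts of this kind are also the input needed to estimate the probability that a message containing a given word is a fake-winnings UBE.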
Abstract: An optimal solution for a large number of constraint
satisfaction problems can be found using the technique of
substitution and elimination of variables, analogous to the
technique used to solve systems of equations. A decision function
f(A) = max(A^2) is used to determine which variables to eliminate.
The algorithm can be expressed in six lines and is remarkable in
both its simplicity and its ability to find an optimal solution.
However, it is inefficient in that it needs to square the updated
A matrix after each variable elimination. To overcome this
inefficiency, the algorithm is analyzed and it is shown that the A
matrix only needs to be squared once, at the first step of the
algorithm, and then incrementally updated in subsequent steps,
resulting in a significant improvement and an algorithm complexity
of O(n^3).
Abstract: Optical networks use a routing tool called the Latin
router. These routers use particular routing algorithms. In this
paper, we present an algorithm for configuring an optical network
that is optimized with respect to a previous algorithm. We show
that decreasing the number of source-destination hops in a
lightpath lowers the number of satisfied requests, and that
multi-hop lightpaths perform better relative to single-hop
lightpaths.
Abstract: Intermittent connectivity breaks the "always
on" network assumption made by most distributed query processing
systems, in which the absence of network connectivity is treated
as a fault. It might not be feasible to transmit all the data
accumulated since the last upload at once over the available
connection, and vital information may be delayed excessively when
less important information takes its place. Owing to restricted
and uneven bandwidth, it is vital that mobile nodes make the most
advantageous use of connectivity when it arrives. Hence, some form
of data prioritization is needed to select the data that should be
transmitted first. This paper proposes a continuous query
processing system for intermittently connected mobile networks
that comprises a delay-tolerant continuous query processor
distributed across the mobile hosts. In addition, a mechanism for
prioritizing query results has been designed that guarantees
enhanced accuracy and reduced delay. Extensive simulation results
illustrate that our architecture reduces client power consumption
and increases query efficiency.
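The prioritization step can be sketched with a simple priority queue. The priority values and the per-connection upload budget below are assumptions for illustration, since the abstract does not give the actual priority function.

```python
import heapq
import itertools

class ResultUploader:
    """Buffer query results while disconnected; send the most urgent
    first when connectivity appears (lower priority value = more urgent)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order

    def enqueue(self, priority, result):
        heapq.heappush(self._heap, (priority, next(self._seq), result))

    def drain(self, budget):
        """Send at most `budget` results over the available connection."""
        sent = []
        while self._heap and len(sent) < budget:
            sent.append(heapq.heappop(self._heap)[2])
        return sent
```

Anything not sent within the budget simply stays queued for the next connectivity window.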
Abstract: The purpose of this paper is to demonstrate the ability
of a genetic programming (GP) algorithm to evolve a team of data
classification models. The GP algorithm used in this work is
“multigene" in nature, i.e. there are multiple tree structures (genes)
that are used to represent team members. Each team member assigns
a data sample to one of a fixed set of output classes. A majority vote,
determined using the mode (highest occurrence) of classes predicted
by the individual genes, is used to determine the final class
prediction. The algorithm is tested on a binary classification problem.
For the case study investigated, compact classification models are
obtained with comparable accuracy to alternative approaches.
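The team vote described above, taking the mode of the classes predicted by the individual genes, can be sketched directly:

```python
from collections import Counter

def team_predict(gene_predictions):
    """gene_predictions: list of class labels, one per gene/team member.
    The team's output is the mode (most frequent) of the predicted classes."""
    return Counter(gene_predictions).most_common(1)[0][0]
```

For a data sample, each gene's tree is evaluated to produce one label, and the list of labels is passed to this vote.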
Abstract: The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate, for this purpose, the use of instance reduction techniques originally developed to reduce the storage requirements of instance-based learners. Five instance-based reduction techniques were used to determine the sets of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results were compared with two reference Radial Basis Function networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracy and the latter requires a short training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select the center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
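The center-selection step via a noise-filtering technique can be sketched with Wilson's Edited Nearest Neighbour (ENN); only the selection of retained instances is shown here, not the subsequent RBF training.

```python
import math
from collections import Counter

def enn_filter(X, y, k=3):
    """Edited Nearest Neighbour: drop instances whose class label disagrees
    with the majority of their k nearest neighbours.

    X: list of feature tuples; y: list of class labels.
    Returns the indices of retained instances, usable as RBF centers.
    """
    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        # Distances to all other instances, nearest first.
        dists = sorted((math.dist(xi, xj), j)
                       for j, xj in enumerate(X) if j != i)
        neighbours = [y[j] for _, j in dists[:k]]
        if Counter(neighbours).most_common(1)[0][0] == yi:
            keep.append(i)  # label agrees with local majority: not noise
    return keep
```

The brute-force neighbour search is O(n^2) per call; a spatial index would be used for larger training sets.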
Abstract: The utility of expert system generators has been
widely recognized in many applications, and several generators
based on the object-oriented paradigm have recently been proposed.
Object-oriented expert system generators (GSEOO) offer languages
that are often complex and difficult to use. We propose in this
paper an extension of the expert system generator JESS that
permits friendlier use of this expert system. The new tool, called
VISUAL JESS, brings two main improvements to JESS. The first
improvement concerns ease of use, while keeping the syntax and
semantics of the JESS programming language transparent. The second
improvement permits easy access to, and modification of, the JESS
knowledge base. VISUAL JESS is implemented so as to be extensible
and portable.
Abstract: Modular multiplication is the basic operation
in most public key cryptosystems, such as RSA, DSA, ECC,
and DH key exchange. Unfortunately, very large operands
(on the order of 1024 or 2048 bits) must be used to provide
sufficient security strength. The use of such big numbers
dramatically slows down the whole cipher system, especially
when running on embedded processors.
So far, customized hardware accelerators developed on
FPGAs or ASICs have been the best choice for accelerating
modular multiplication in embedded environments. On the
other hand, many algorithms have been developed to speed
up such operations. Examples are the Montgomery modular
multiplication and the interleaved modular multiplication
algorithms. Combining both customized hardware with
an efficient algorithm is expected to provide a much faster
cipher system.
This paper introduces an enhanced architecture for computing
the modular multiplication of two large numbers X
and Y modulo a given modulus M. The proposed design is
compared with three previous architectures based on
carry-save adders and look-up tables, where the look-up tables
must be loaded with a set of pre-computed values. Our proposed
architecture uses the same carry-save addition, but replaces
both the look-up tables and the pre-computations with an enhanced
version of sign detection techniques. The proposed architecture
supports higher frequencies than other architectures.
It also has a better overall absolute time for a single operation.
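The interleaved modular multiplication mentioned above can be sketched as follows, assuming 0 <= y < m so the running value stays bounded; hardware versions replace the subtraction loop with carry-save logic.

```python
def interleaved_modmul(x, y, m):
    """Compute (x * y) mod m by scanning x's bits MSB-first:
    double the running result, conditionally add y, and reduce mod m
    at every step so intermediate values never grow large."""
    r = 0
    for bit in bin(x)[2:]:      # MSB-first binary digits of x
        r <<= 1                 # shift: r = 2*r
        if bit == '1':
            r += y              # conditional add of the multiplicand
        while r >= m:           # at most two subtractions, since r < 3*m here
            r -= m
    return r
```

Because the result is reduced after every bit, the datapath width only needs to cover values slightly larger than the modulus, which is what makes the approach attractive for hardware.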
Abstract: This paper applies fuzzy AHP to evaluate the service
quality of online auctions. Service quality is a composition of
various criteria, many of which are intangible attributes that are
difficult to measure. This characteristic creates obstacles for
respondents when replying to the survey. To overcome this problem,
we bring fuzzy set theory into the measurement of performance and
use AHP to obtain the criteria weights. We found that the most
important dimension of service quality is the Transaction Safety
Mechanism and the least important is the Charge Item. Other
criteria, such as information security, accuracy and information,
are also vital.
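The AHP weighting step can be sketched with the standard row geometric-mean approximation of the priority vector; the fuzzy extension used in the paper is not shown here, and the pairwise values are illustrative only.

```python
import math

def ahp_weights(pairwise):
    """Approximate the AHP priority vector from a pairwise comparison
    matrix using the row geometric-mean method.

    pairwise[i][j] holds how strongly criterion i is preferred over j
    (reciprocal matrix: pairwise[j][i] == 1 / pairwise[i][j]).
    """
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    s = sum(gm)
    return [g / s for g in gm]                              # normalize to sum to 1
```

For example, if criterion A is judged three times as important as criterion B, the method yields weights of 0.75 and 0.25.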