Abstract: Recently, many researchers have been attracted to retrieving
from multimedia databases using impression words and their values.
Ikezoe's research is one representative approach and uses eight pairs of
opposite impression words. We modified its retrieval interface and
proposed '2D-RIB'. In 2D-RIB, after a user selects a
single piece of basic music, the system visually displays other pieces
around the basic one according to their relative positions. The user can
select the piece that best fits his/her intention as the retrieval result.
The purpose of this paper is to improve user satisfaction with the
retrieval result in 2D-RIB. One of our extensions is to define and
introduce the following two measures: 'melody goodness' and 'general
acceptance'. We implement them in five different combinations. According
to an evaluation experiment, both of these measures can contribute to
the improvement. Another extension is three types of customization.
We have implemented them and clarified which customization is
effective.
Abstract: The state-of-the-art Bag of Words model in Content-
Based Image Retrieval has been used for years, but relevance
feedback strategies for this model have not been fully investigated.
Being inspired by text retrieval, the Bag of Words model can draw on
the wealth of knowledge and practice available in text retrieval. We
study and experiment with the relevance feedback model from text
retrieval, adapting it to image retrieval. The experiments show that
techniques from text retrieval give good results for image retrieval
and that further improvements are possible.
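The abstract does not name the specific feedback model; the classic relevance feedback method in text retrieval is Rocchio's algorithm, which adapts directly to Bag-of-Words histograms of visual words. Below is a minimal sketch under that assumption; the weights and toy vectors are illustrative, not values from the paper.

```python
# Hypothetical Rocchio relevance feedback on Bag-of-Words vectors:
# move the query toward relevant examples, away from non-relevant ones.
# alpha/beta/gamma are conventional defaults, not the paper's values.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    dim = len(query)
    new_q = [alpha * q for q in query]
    for doc in relevant:
        for i in range(dim):
            new_q[i] += beta * doc[i] / len(relevant)
    for doc in nonrelevant:
        for i in range(dim):
            new_q[i] -= gamma * doc[i] / len(nonrelevant)
    # negative weights are usually clipped to zero
    return [max(0.0, w) for w in new_q]

q = [1.0, 0.0, 1.0]              # toy 3-word histogram
rel = [[1.0, 1.0, 0.0]]          # one image marked relevant
nonrel = [[0.0, 0.0, 1.0]]       # one image marked non-relevant
print(rocchio(q, rel, nonrel))   # -> [1.75, 0.75, 0.85]
```

The refined query is then re-submitted to the retrieval engine; iterating this loop is what the feedback strategy evaluates.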
Abstract: Currently, web usage generates a huge amount of data from
user activity. In general, a proxy server is a system that supports
users' web access, and its performance can be managed using hit rates.
This research tries to improve hit rates in a proxy system by applying
data mining techniques. The data sets were collected from proxy servers
in the university and examined for relationships among several features.
The resulting model is used to predict future website accesses. The
association rule technique is applied to find relations among Date,
Time, Main Group web, Sub Group web, and Domain name to create the
model. The results show that this technique can predict web content for
the next day; moreover, the prediction of future website accesses
improved from 38.15% to 85.57%.
This model can predict web page accesses, which tends to increase
the efficiency of proxy servers as a result. In addition, the
performance of internet access will be improved, helping to reduce
network traffic.
Abstract: Delay and Disruption Tolerant Networking is part of
the Interplanetary Internet, with its primary application being deep
space networks. Its terrestrial form has interesting research
applications, such as the Alagappa University Delay Tolerant Water
Monitoring Network, which doubles as a test bed for improving its
routing scheme. DTNs depend on node mobility to deliver packets
using a store-carry-and-forward paradigm. Throwboxes are small,
inexpensive stationary devices equipped with wireless interfaces and
storage. We propose the use of Throwboxes to enhance the contact
opportunities of the nodes and hence improve the throughput. The
enhancement is evaluated using Alunivdtnsim, a desktop simulator
written in C, and the results are presented graphically.
Abstract: This paper proposes a new approach to offering a private
cloud service in HPC clusters. In particular, our approach relies on
automatically scheduling users' customized environment requests as
normal jobs in the batch system. After the virtualization request jobs
finish, the guest operating systems are dismissed so that the compute
nodes are released again for computing. We present initial work on an
innovative integration of an HPC batch system and virtualization tools
that aims at their coexistence while meeting the minimal-interference
requirement of a traditional HPC cluster. Given the design of the
initial infrastructure, the proposed effort has the potential to
positively impact this synergy model. The experimental results show
that the goal of provisioning customized cluster environments can
indeed be fulfilled using virtual machines, and that efficiency can be
improved with proper setup and arrangement.
Abstract: Testing accounts for the major percentage of technical
contribution in the software development process. Typically, it
consumes more than 50 percent of the total cost of developing a
piece of software. The selection of software tests is a very important
activity within this process to ensure the software reliability
requirements are met. Generally, tests are run to achieve maximum
coverage of the software code, and very little attention is given to the
achieved reliability of the software. Using an existing methodology,
this paper describes how to use Bayesian Belief Networks (BBNs) to
select unit tests based on their contribution to the reliability of the
module under consideration. In particular the work examines how the
approach can enhance test-first development by assessing the quality
of test suites resulting from this development methodology and
providing insight into additional tests that can significantly improve
the achieved reliability. In this way the method can produce an
optimal selection of inputs and the order in which the tests are
executed to maximize the software reliability. To illustrate this
approach, a belief network is constructed for a modern software
system incorporating the expert opinion, expressed through
probabilities of the relative quality of the elements of the software,
and the potential effectiveness of the software tests. The steps
involved in constructing the Bayesian network are explained, as is a
method to account for the test suite resulting from test-driven
development.
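The belief updating behind such a BBN can be illustrated with the simplest possible network: one reliability node observed through passing unit tests, updated by Bayes' rule. The probabilities below are illustrative assumptions, not values from the paper.

```python
# Minimal Bayesian update of belief in a module's reliability after a
# unit test passes. All probabilities here are made-up illustrations.

def posterior_reliable(prior, p_pass_given_reliable, p_pass_given_faulty):
    """P(reliable | test passed) by Bayes' rule."""
    evidence = (prior * p_pass_given_reliable
                + (1 - prior) * p_pass_given_faulty)
    return prior * p_pass_given_reliable / evidence

belief = 0.5                       # prior belief that the module is reliable
for _ in range(3):                 # three independent passing tests
    belief = posterior_reliable(belief, 0.95, 0.40)
print(round(belief, 3))            # belief rises with each passing test
```

A full BBN chains many such nodes (code elements, expert quality judgments, test outcomes), but each message passed through the network is an update of this form.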
Abstract: The shortest path (SP) problem concerns finding the shortest path from a specified origin to a specified destination in a given network while minimizing the total cost associated with the path. This problem has widespread applications. Important applications of the SP problem include vehicle routing in transportation systems, particularly in the field of in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Evolutionary methods such as Genetic Algorithms (GA), Ant Colony Optimization and Particle Swarm Optimization (PSO) have been applied to complex optimization problems to overcome the shortcomings of existing shortest path analysis methods. Various researchers have reported that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Furthermore, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This research paper focuses on the application of PSO to solving the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the analysis results is carried out in GIS.
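The abstract does not describe how paths are encoded for PSO, so the sketch below shows only the core PSO velocity and position update, on a toy continuous objective. The swarm size, inertia weight and acceleration coefficients are common defaults, not values from this work.

```python
# The canonical PSO update rule, demonstrated by minimizing x^2 + y^2.
# Seeded so the run is reproducible; parameters are common defaults.
import random

random.seed(0)

def pso(fitness, dim=2, swarm=10, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=fitness)[:]          # swarm's best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda p: sum(x * x for x in p))
print(best)  # converges near the optimum (0, 0)
```

For shortest paths, the continuous positions are typically mapped to paths via a priority-based decoding, with `fitness` returning the decoded path's cost.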
Abstract: In recent years, much research has been done to mine the exploding Web world, especially User Generated Content (UGC) such as
weblogs, for knowledge about various phenomena and events in the physical world, and Web services
based on such Web-mined knowledge have begun to be developed for
the public. However, there are few detailed investigations of how accurately Web-mined data reflect physical-world data. It would be
problematic to blindly utilize Web-mined data in public Web services without sufficiently ensuring their accuracy. Therefore,
this paper introduces the simplest Web Sensor and a spatiotemporally-normalized
Web Sensor to extract spatiotemporal data about a target
phenomenon from weblogs retrieved by keyword(s) representing the
target phenomenon, and tries to validate the potential and reliability of the Web-sensed spatiotemporal data through four kinds of granularity
analyses of the correlation coefficients with daily, per-region temperature, rainfall, snowfall,
and earthquake statistics of the Japan Meteorological
Agency as physical-world data: spatial granularity (a region's population
density), temporal granularity (time period, e.g., per day vs. per week), representation granularity (e.g., “rain" vs. “heavy rain"), and
media granularity (weblogs vs. microblogs such as tweets).
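The granularity analyses rest on correlation coefficients between Web-sensed counts and physical measurements. A minimal sketch of that comparison follows; the daily series are illustrative toy numbers, not Japan Meteorological Agency data.

```python
# Pearson correlation between a Web-sensed series (daily counts of blog
# posts mentioning a keyword) and a physical series (measured rainfall).
# The data below are made up for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

blog_counts = [3, 8, 2, 14, 9, 1, 11]           # posts mentioning "rain"/day
rainfall_mm = [1.0, 6.5, 0.0, 12.0, 7.5, 0.5, 9.0]
print(round(pearson(blog_counts, rainfall_mm), 3))  # high positive correlation
```

Each granularity analysis in the paper amounts to recomputing such a coefficient after re-aggregating one of the two series (by region, by week, by keyword variant, or by medium).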
Abstract: In this paper, an artificial intelligence technique for
robust digital image watermarking in the multiwavelet domain is
proposed. The embedding technique is based on quantization
index modulation, and the watermark extraction process
does not require the original image. We have developed an
optimization technique using genetic algorithms to search for
optimal quantization steps to improve the quality of the watermarked
image and the robustness of the watermark. In addition, we construct a
prediction model based on image moments and a back-propagation
neural network to geometrically correct an attacked image before the
watermark extraction process begins. The experimental results show
that the proposed watermarking algorithm yields watermarked images
with good imperceptibility and a watermark that is very robust against
various image processing attacks.
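Quantization index modulation embeds a bit by snapping a coefficient onto one of two interleaved quantizer lattices; extraction picks the nearer lattice, so the original image is not needed. A minimal sketch follows; the step size is an illustrative fixed value, whereas the paper searches for optimal steps with a genetic algorithm (omitted here).

```python
# Sketch of quantization index modulation (QIM) on scalar coefficients.
# delta is an illustrative step size, not an optimized one.

def qim_embed(coeff, bit, delta=8.0):
    """Quantize coeff onto the lattice assigned to the given bit (0 or 1)."""
    offset = 0.0 if bit == 0 else delta / 2
    return delta * round((coeff - offset) / delta) + offset

def qim_extract(coeff, delta=8.0):
    """Recover the bit by checking which lattice is closer."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

coeffs = [13.7, -5.2, 40.1, 2.9]               # toy transform coefficients
bits = [1, 0, 1, 1]                            # watermark bits
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
attacked = [c + 1.5 for c in marked]           # mild noise, below delta/4
print([qim_extract(c) for c in attacked])      # -> [1, 0, 1, 1]: bits survive
```

A larger step size tolerates stronger attacks but distorts the image more; that trade-off is exactly what the genetic-algorithm search over quantization steps optimizes.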
Abstract: A number of routing algorithms based on the learning
automata technique have been proposed for communication
networks. However, there has been little work on the effects of
variation in graph sparsity on the performance of these algorithms. In
this paper, a comprehensive study is launched to investigate the
performance of LASPA, the first learning-automata-based solution to
dynamic shortest path routing, across different graph structures
with varying sparsities. The sensitivity of three main performance
parameters of the algorithm, namely the average number of processed
nodes, the number of scanned edges, and the average time per update, to
variation in graph sparsity is reported. Simulation results indicate
that the LASPA algorithm adapts well to sparsity variation in the graph
structure and gives much better outputs than the existing dynamic and
fixed algorithms in terms of these performance criteria.
Abstract: Major Depressive Disorder has become a burden on
medical expenses in Taiwan, as in the rest of the world.
Major Depressive Disorder can be divided into different categories
according to previous human activities. Using machine learning, we can
classify the emotion expressed in text in advance, which can help
medical diagnosis recognize variants of Major Depressive Disorder
automatically. Association language features capture the
characteristics and relationships of words discovered in sentences.
However, classification suffers from an overlapping-category problem.
In this paper, we aim to improve classification performance under the
principle of avoiding overlapping categories. We present an approach,
called Association Language Features by Category (ALFC), that
discovers words in sentences which co-occur with high frequency and do
not overlap across categories.
Experimental results show that ALFC distinguishes Major Depressive
Disorder categories well and achieves better performance. We also
compare the approach with a baseline and with mutual information using
single words alone or a correlation measure.
Abstract: Chess is an indoor game that improves human
confidence, concentration, planning skills and knowledge. The main
objective of this paper is to help chess players improve their
openings using data mining techniques. Budding chess players usually
practice by analyzing various existing openings, and analyzing and
correlating thousands of openings becomes tedious and complex for
them. The work in this paper analyzes the best lines of the Blackmar-
Diemer Gambit (BDG), which opens with White d4..., using data
mining analysis. It is carried out on a collection of winning games
by applying association rules. The first step of this analysis is
assigning variables to each distinct move sequence. In the second
step, sequence association rules are generated to calculate
support and confidence factors, which help us find the best
subsequent chess moves that may lead to a winning position.
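The support and confidence factors for move sequences can be computed as below. The games and moves are illustrative toy data, not lines from the paper's collection.

```python
# Support and confidence for sequence association rules over chess games.
# The four toy games are made up for illustration.

games = [
    ("d4", "d5", "e4", "dxe4", "Nc3"),
    ("d4", "d5", "e4", "dxe4", "Nc3"),
    ("d4", "d5", "e4", "e6", "e5"),
    ("d4", "d5", "e4", "dxe4", "f3"),
]

def support(prefix, games):
    """Fraction of games that begin with the given move sequence."""
    return sum(g[:len(prefix)] == prefix for g in games) / len(games)

def confidence(prefix, next_move, games):
    """P(next_move | prefix): support of the extended sequence / prefix."""
    return support(prefix + (next_move,), games) / support(prefix, games)

bdg = ("d4", "d5", "e4", "dxe4")
print(support(bdg, games))               # -> 0.75
print(confidence(bdg, "Nc3", games))     # Nc3 follows in 2 of those 3 games
```

Rules whose support and confidence exceed chosen thresholds identify the most promising continuations in the winning games.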
Abstract: A Web-services-based grid infrastructure is evolving to be readily available in the near future. In this approach, Web services are inherited (encapsulated) into the same existing Grid services class. In practice there is not much difference between the existing Web and grid infrastructures; grid services emerged as stateful web services. In this paper, we present the key components of a web-services-based grid and also how resource discovery is performed on such a grid, considering resource discovery a critical service to be provided by any type of grid.
Abstract: In this paper, we propose a method to extract road
signs. First, the captured image is converted into the HSV color space
to detect the road signs. Second, morphological operations are
used to reduce noise. Finally, the road sign is extracted using its
geometric properties. Feature extraction of the road sign is done using
color information. The proposed method has been tested in real
situations. The experimental results show that the proposed
method can extract road sign features effectively.
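A minimal sketch of the first two steps (HSV conversion, color thresholding, morphological noise removal) on a tiny synthetic image; the color thresholds and pixel values are illustrative assumptions, not the paper's.

```python
# HSV thresholding for a red-ish sign color, followed by 3x3 erosion to
# suppress isolated noise pixels. Thresholds and data are illustrative.
import colorsys

def red_mask(image):
    """1 where a pixel is strongly red (hue near 0, high saturation)."""
    mask = []
    for row in image:
        out = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            red = (h < 0.05 or h > 0.95) and s > 0.5 and v > 0.3
            out.append(1 if red else 0)
        mask.append(out)
    return mask

def erode(mask):
    """3x3 erosion: keep a pixel only if its whole neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

RED, GREY = (200, 20, 20), (120, 120, 120)
image = [[RED if 1 <= y <= 3 and 1 <= x <= 3 else GREY for x in range(5)]
         for y in range(5)]              # a 3x3 red "sign" on grey background
image[0][4] = RED                        # plus one lone noise pixel
print(erode(red_mask(image)))            # only the sign's interior survives
```

The surviving blob would then be filtered by geometric properties (shape, aspect ratio) to confirm it is a sign, as the abstract's final step describes.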
Abstract: Clustering is a very well known technique in data mining. One of the most widely used clustering techniques is the k-means algorithm. Solutions obtained with this technique depend on the initialization of the cluster centers, and the final solution converges to local minima. In order to overcome the shortcomings of the k-means algorithm, this paper proposes a hybrid evolutionary algorithm based on the combination of the PSO, SA and k-means algorithms, called PSO-SA-K, which can find better cluster partitions. Its performance is evaluated on several benchmark data sets. The simulation results show that the proposed algorithm outperforms previous approaches, such as PSO, SA and k-means, on the partitional clustering problem.
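The initialization sensitivity that motivates the hybrid is easy to see in plain Lloyd's k-means. The 1-D data and hand-picked initial centers below are illustrative: one initialization finds the three true clusters, the other converges to a worse local minimum.

```python
# Plain Lloyd's k-means on 1-D data, showing dependence on initial centers.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # move each center to its cluster's mean (empty clusters keep theirs)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [0.0, 1.0, 10.0, 11.0, 20.0, 21.0]       # three obvious pairs
print(kmeans(data, [1.0, 10.0, 20.0]))  # good init -> [0.5, 10.5, 20.5]
print(kmeans(data, [0.0, 0.5, 1.0]))    # poor init -> [0.0, 1.0, 15.5]
```

The second run splits one true cluster and merges two others, yet Lloyd's iterations cannot escape; PSO and SA are combined with k-means precisely to search past such local minima.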
Abstract: Clustering ensembles combine multiple partitions
generated by different clustering algorithms into a single clustering
solution. Clustering ensembles have emerged as a prominent method
for improving the robustness, stability and accuracy of unsupervised
classification solutions. So far, many contributions have been made
toward finding a consensus clustering. One of the major problems in
clustering ensembles is the choice of consensus function. In this paper,
we first introduce clustering ensembles, the representation of multiple
partitions, and their challenges, and present a taxonomy of combination
algorithms.
Secondly, we describe consensus functions in clustering ensembles,
including hypergraph partitioning, the voting approach, mutual
information, co-association-based functions and the finite mixture
model, and then explain their advantages, disadvantages and
computational complexity. Finally, we compare the characteristics of
clustering ensemble algorithms, such as computational complexity,
robustness, simplicity and accuracy, on different datasets used in
previous techniques.
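Of the consensus functions surveyed, the co-association approach is the easiest to sketch: count how often each pair of objects lands in the same cluster across the base partitions, then threshold the resulting similarity matrix. The base partitions below are toy data.

```python
# Co-association consensus over three toy base clusterings of 5 objects.
# Entry co[i][j] is the fraction of partitions placing i and j together.

partitions = [
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0],
]

n = len(partitions[0])
co = [[0.0] * n for _ in range(n)]
for labels in partitions:
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                co[i][j] += 1 / len(partitions)

# simplest consensus: link objects co-clustered in a majority of partitions
consensus = [[int(co[i][j] > 0.5) for j in range(n)] for i in range(n)]
print(consensus)   # two blocks emerge: {0, 1} and {2, 3, 4}
```

In practice the co-association matrix is fed to a similarity-based clusterer (e.g. single-linkage) rather than thresholded; the matrix itself is the consensus function's contribution.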
Abstract: A new algorithm called Character-Comparison to Character-Access (CCCA) is developed to test the effect of two factors on the performance of the checking operation in string searching: 1) converting character-comparisons and number-comparisons into character-accesses, and 2) the starting point of checking. An experiment is performed using both English text and DNA text of different sizes. The results are compared with five algorithms, namely Naive, BM, Inf_Suf_Pref, Raita, and Cycle. With the CCCA algorithm, the results suggest that the evaluation criterion of the average number of total comparisons is improved by up to 35%. Furthermore, the results suggest that the clock time required by the other algorithms is improved by 22.13% to 42.33% with the new CCCA algorithm.
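CCCA itself is not specified in the abstract, so it is not reproduced here; the sketch below only shows the metric being compared, counting character comparisons in the naive baseline search that CCCA and the other five algorithms improve upon.

```python
# Naive string search instrumented to count character comparisons,
# the evaluation criterion used in the experiments above.

def naive_search(text, pattern):
    """Return (match positions, number of character comparisons made)."""
    positions, comparisons = [], 0
    for i in range(len(text) - len(pattern) + 1):
        for j in range(len(pattern)):
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
        else:                       # inner loop ran to completion: a match
            positions.append(i)
    return positions, comparisons

pos, cmps = naive_search("GATTACAGATTA", "GATT")
print(pos, cmps)    # -> [0, 7] 15
```

Each candidate algorithm is instrumented the same way, and the averages over English and DNA corpora give the comparison counts reported in the abstract.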
Abstract: Image-Based Rendering (IBR) techniques have recently
reached broad fields, which leads to the critical challenge of building
an IBR-driven visualization platform that meets the requirements of
high performance, aggregation and concentration of large amounts of
distributed visualization resources, deployment by multiple operators,
and CSCW design. This paper presents a unique IBR-based
visualization dataflow model that reflects the specific characteristics
of IBR techniques, then discusses the prominent features of an IBR-driven
distributed collaborative visualization (DCV) system before finally
proposing a novel prototype. The prototype provides well-defined
three-level modules, namely a Central Visualization Server, a
Local Proxy Server and a Visualization Aid Environment, through which
data and control for collaboration move following the
aforementioned dataflow model. With the aid of this triple-hierarchy
architecture, the construction of IBR-oriented applications becomes
easy. The employed augmented collaboration strategy not only achieves
convenient synchronous control by multiple users and stable processing
management, but is also extendable and scalable.
Abstract: This paper describes a complex energy signal model
that is isomorphic with digital human fingerprint images. By using
signal models, the problem of fingerprint matching is transformed
into the signal processing problem of finding a correlation between
two complex signals that differ by phase-rotation and time-scaling. A
technique for minutiae matching that is independent of image
translation, rotation and linear-scaling, and is resistant to missing
minutiae is proposed. The method was tested using random data
points. The results show that for matching prints the scaling and
rotation angles are closely estimated and a stronger match will have a
higher correlation.
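The core idea, representing minutiae as complex numbers so that a rotation plus uniform scaling becomes multiplication by a single complex factor, can be sketched as follows. The minutiae coordinates are made-up values, not fingerprint data.

```python
# Minutiae as complex numbers: one print differs from the other by a
# rotation and a uniform scale, i.e. by one complex factor. The common
# pointwise ratio recovers both parameters. Point values are illustrative.
import cmath

template = [1 + 1j, 3 + 0j, -2 + 2j, 0 - 3j]    # minutiae of print A
factor = 1.2 * cmath.exp(1j * cmath.pi / 6)     # scale 1.2, rotate 30 degrees
query = [factor * z for z in template]          # same print, transformed

# estimate scale and rotation from the mean of pointwise ratios
est = sum(q / t for q, t in zip(query, template)) / len(template)
print(round(abs(est), 3))                           # estimated scale
print(round(cmath.phase(est) * 180 / cmath.pi, 1))  # estimated angle, degrees
```

With noisy or partially missing minutiae, the paper's correlation-based matching generalizes this: a strong peak in the correlation between the two complex signals marks the best-fitting factor, and its height scores the match.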
Abstract: The model-based approach to user interface design relies on developing separate models that capture various aspects of users, tasks, the application domain, and presentation and dialog representations. This paper presents a task modeling approach for user interface design and aims at exploring the mappings between task, domain and presentation models. The basic idea of our approach is to identify typical configurations in task and domain models and to investigate how they relate to each other. A special emphasis is put on application-specific functions and on mappings between domain objects and operational task structures. In this respect, we distinguish between three layers in the task decomposition: a functional layer, a planning layer, and an operational layer.