Abstract: Knowledge-based e-mail systems focus on incorporating knowledge management approaches in order to enhance traditional e-mail systems. In this paper, we present a knowledge-based e-mail system called KS-Mail, in which people not only send and receive e-mail conventionally but are also able to create a sense of knowledge flow. We introduce semantic processing of the e-mail contents by automatically assigning categories and providing links to semantically related e-mails. This is done to enrich the knowledge value of each e-mail as well as to ease the organization of the e-mails and their contents. At the application level, we have also built components such as the service manager, evaluation engine, and search engine to handle the e-mail processes efficiently by providing the means to share and reuse knowledge. For this purpose, we present the KS-Mail architecture and elaborate on the details of the e-mail server and the application server. We present the ontology mapping technique used to achieve the categorization of e-mail contents, as well as the protocols that we have developed to handle the transactions in the e-mail system. Finally, we discuss the implementation of the modules presented in the KS-Mail architecture.
Abstract: Proteins or genes that have similar sequences are likely to perform the same function. One of the most widely used techniques for sequence comparison is sequence alignment. Sequence alignment allows mismatches and insertions/deletions, which represent biological mutations. Sequence alignment is usually performed on only two sequences; multiple sequence alignment is a natural extension of two-sequence alignment. In multiple sequence alignment, the emphasis is on finding an optimal alignment for a group of sequences. Several applicable techniques were observed in this research, from traditional methods such as dynamic programming to widely used stochastic optimization methods such as Genetic Algorithms (GAs) and Simulated Annealing. A framework combining a Genetic Algorithm and Simulated Annealing is presented to solve the multiple sequence alignment problem. The Genetic Algorithm phase tries to find new regions of the solution space, while Simulated Annealing acts as an alignment improver for any near-optimal solution produced by the GA.
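The abstract describes the Simulated Annealing "alignment improver" phase only at a high level. As a hedged illustration, it might look like the following minimal Python sketch; the alignment representation (equal-width gapped strings), the sum-of-pairs score, and the gap-shifting move are all simplifying assumptions of ours, not details from the paper:

```python
import math
import random

def sop_score(alignment):
    """Sum-of-pairs score: +1 for each matching non-gap pair per column."""
    score = 0
    for col in zip(*alignment):
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                if col[i] != '-' and col[i] == col[j]:
                    score += 1
    return score

def shift_gap(row, rng):
    """Move one randomly chosen gap to a random new position in the row."""
    gaps = [i for i, c in enumerate(row) if c == '-']
    if not gaps:
        return row
    chars = list(row)
    chars.pop(rng.choice(gaps))
    chars.insert(rng.randrange(len(chars) + 1), '-')
    return ''.join(chars)

def anneal(alignment, steps=2000, t0=2.0, cooling=0.995, seed=0):
    """Simulated-annealing improver for a fixed-width multiple alignment."""
    rng = random.Random(seed)
    current, best = list(alignment), list(alignment)
    cur_s = best_s = sop_score(current)
    t = t0
    for _ in range(steps):
        cand = list(current)
        r = rng.randrange(len(cand))
        cand[r] = shift_gap(cand[r], rng)
        s = sop_score(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if s >= cur_s or rng.random() < math.exp((s - cur_s) / t):
            current, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
        t *= cooling
    return best, best_s
```

In the combined framework, a GA phase would supply the starting alignments; `anneal` then perturbs gap positions, accepting occasional worse alignments under a cooling temperature to escape local optima.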
Abstract: Camera calibration plays an important role in the analysis of sports video. In soccer video, in most cases, the cross-points that can be used for calibration at the center of the soccer field are not sufficient, so this paper introduces a new automatic camera calibration algorithm focused on solving this problem by using the properties of images of the center circle, the halfway line, and a touch line. After a theoretical analysis, a practicable automatic algorithm is proposed. Although very little information is used, results of experiments with both synthetic and real data show that the algorithm is applicable.
Abstract: In 2011, Debiao et al. pointed out that the S-3PAKE protocol proposed by Lu and Cao for password-authenticated key exchange in the three-party setting is vulnerable to an off-line dictionary attack. They then proposed some countermeasures to eliminate this security vulnerability of S-3PAKE. Nevertheless, this paper points out that, contrary to their claim, their enhanced S-3PAKE protocol is still vulnerable to undetectable on-line dictionary attacks.
Abstract: Writer identification is one of the areas in pattern recognition that attracts many researchers, particularly in forensic and biometric applications, where the writing style can be used as a biometric feature for authenticating an identity. The challenging task in writer identification is the extraction of unique features, in which the individuality of handwriting styles can be adopted into a bio-inspired generalized global shape for writer identification. In this paper, the feasibility of the generalized global shape concept of complementary binding in the Artificial Immune System (AIS) for writer identification is explored. An experiment based on the proposed framework has been conducted to prove the validity and feasibility of the proposed approach for off-line writer identification.
Abstract: This paper proposes a Web service and service-oriented architecture (SOA) for a computer-adaptive testing (CAT) process on e-learning systems. The proposed architecture is developed to solve an interoperability problem of the CAT process by using Web services. The proposed SOA and Web service define all services needed for the interactions between systems in order to deliver items and essential data from the Web service to the CAT Web-based application. These services are implemented in an XML-based architecture, providing platform independence and interoperability between the Web service and the CAT Web-based applications.
Abstract: Segmentation in ultrasound images is challenging due to interference from speckle noise and the fuzziness of boundaries. In this paper, a segmentation scheme using fuzzy c-means (FCM) clustering that incorporates both intensity and texture information of images is proposed to extract breast lesions in ultrasound images. Firstly, the nonlinear structure tensor, which helps refine the edges detected by intensity, is used to extract speckle texture. Then, a spatial FCM clustering is applied on the image feature space for segmentation. In experiments with simulated and clinical ultrasound images, the spatial FCM clustering with both intensity and texture information gives more accurate results than conventional FCM or spatial FCM without texture information.
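As an illustration of the clustering step only (not the authors' structure-tensor pipeline), plain fuzzy c-means on per-pixel feature vectors can be sketched as follows; the spatial regularization and texture extraction described in the abstract are omitted:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Fuzzy c-means on feature vectors X of shape (n_samples, n_features).

    Returns (centers, U), where U[i, k] is the membership of sample i in
    cluster k and m > 1 controls the fuzziness of the partition.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distance from every sample to every cluster center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        # Standard FCM membership update.
        inv = d2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

For image segmentation, each row of `X` would stack a pixel's intensity and texture features; thresholding the memberships in `U` then yields a lesion/background labeling.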
Abstract: In this paper a simple watermarking method for color images is proposed. The proposed method is based on watermark embedding in the histograms of the HSV planes using visual cryptography watermarking. The method has been shown to be robust against various image processing operations such as filtering, compression, and additive noise, and against various geometric attacks such as rotation, scaling, cropping, flipping, and shearing.
Abstract: Granular computing deals with the representation of information in the form of aggregates and with related methods for transformation and analysis in problem solving. A granulation scheme based on clustering and Rough Set Theory, with a focus on structured conceptualization of information, is presented in this paper. Experiments with the proposed method on four labeled data sets exhibit good results with reference to the classification problem. The proposed granulation technique is semi-supervised, incorporating both global and local information in the granulation. To represent the results of the attribute-oriented granulation, a tree structure is proposed in this paper.
Abstract: A new method of adaptation in a partially integrated learning environment that includes an electronic textbook (ET) and an integrated tutoring system (ITS) is described. The algorithm of adaptation is described in detail. It includes: establishment of interconnections between operations and concepts; an estimate of the concept mastering level (for all concepts); an estimate of the student's non-mastering level, at the current learning step, of the information on each page of the ET; and creation of a rank-ordered list of links to the e-manual pages containing information that requires repeated work.
Abstract: A large number of semantic web service composition approaches have been developed by the research community, and each is more efficient than the others depending on the particular situation of use. So a close look at the requirements of one's particular situation is necessary to find a suitable approach to use. In this paper, we present a Technique Recommendation System (TRS) which, using a classification of state-of-the-art semantic web service composition approaches, can provide the user of the system with recommendations regarding the use of a service composition approach based on some parameters of the situation of use. TRS has a modular architecture and uses production rules for knowledge representation.
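Since the abstract names production rules as the knowledge representation, a toy version of such a rule base might look like the following; the situation parameters and approach labels are invented for illustration and are not taken from the TRS classification:

```python
# Each production rule pairs a condition set with a recommended approach.
# Parameter names and approach names below are hypothetical examples.
RULES = [
    ({"qos_aware": True, "scale": "large"}, "QoS-driven composition"),
    ({"qos_aware": False, "automation": "full"}, "AI-planning-based composition"),
    ({"automation": "semi"}, "workflow-template composition"),
]

def recommend(situation):
    """Return every recommendation whose conditions all match the situation."""
    hits = []
    for conditions, approach in RULES:
        if all(situation.get(k) == v for k, v in conditions.items()):
            hits.append(approach)
    return hits
```

A rule fires only when every one of its condition parameters matches the user-supplied situation, which mirrors the forward-matching behavior of a simple production-rule system.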
Abstract: This paper presents a comparative analysis of a new unsupervised PCA-based technique for steel plate texture segmentation towards defect detection. The proposed scheme, called Variance-Based Component Analysis (VBCA), employs PCA for feature extraction, applies a feature reduction algorithm based on the variance of eigenpictures, and classifies the pixels as defective or normal. While classic PCA uses a clusterer such as K-means for pixel clustering, VBCA employs thresholding and some post-processing operations to label pixels as defective or normal. The experimental results show that the proposed VBCA algorithm is 12.46% more accurate and 78.85% faster than classic PCA.
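The VBCA pipeline (PCA feature extraction, variance-based component reduction, thresholding) can be imitated in a toy form as follows; the reconstruction-error threshold used here is our simplification, not the paper's exact labeling rule:

```python
import numpy as np

def vbca_like(patches, var_keep=0.95, thresh_sigma=2.0):
    """Toy variance-based component analysis for defect labeling.

    patches : (n, d) array of flattened image patches.
    Keeps the fewest principal components ("eigenpictures") explaining
    var_keep of the variance, then flags patches whose reconstruction
    error exceeds mean + thresh_sigma * std as defective (True).
    """
    mu = patches.mean(axis=0)
    Xc = patches - mu
    # Eigenpictures via SVD of the centered data.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, var_keep)) + 1
    proj = Xc @ Vt[:k].T                 # project onto top-k eigenpictures
    recon = proj @ Vt[:k] + mu
    err = np.linalg.norm(patches - recon, axis=1)
    return err > err.mean() + thresh_sigma * err.std()
```

The idea is that normal texture is well captured by the dominant eigenpictures, so defective regions stand out through a large reconstruction error; the paper's actual post-processing operations are not reproduced here.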
Abstract: Today many developers use Java components collected from the Internet as external libraries to design and develop their own software. However, unknown security bugs may exist in these components; for example, an SQL injection bug may come from a component that performs no specific check on user input strings. Checking for these bugs is very difficult without source code. So a novel method to check for bugs in Java bytecode based on points-to dataflow analysis is needed, one that differs from the common analysis techniques based on vulnerability pattern checking. It can be used as an assistant tool for the security analysis of Java bytecode from unknown software that will be used as external libraries.
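Points-to dataflow analysis on real Java bytecode is far more involved, but the core taint-propagation idea behind detecting such an SQL injection flow can be illustrated on a toy three-address IR; the instruction forms and method names below are our own, not actual bytecode:

```python
# Hypothetical source and sink method names for illustration only.
TAINT_SOURCES = {"getParameter"}        # user-controlled input
SINKS = {"executeQuery"}                # SQL execution

def find_taint_flows(instructions):
    """Each instruction is one of:
         ("call", dest, method, args)   dest = method(args)
         ("move", dest, src)            dest = src
         ("concat", dest, a, b)         dest = a + b
       Returns the indices of sink calls reached by tainted data."""
    tainted, flows = set(), []
    for idx, ins in enumerate(instructions):
        op = ins[0]
        if op == "call":
            _, dest, method, args = ins
            if method in TAINT_SOURCES:
                tainted.add(dest)                   # input enters here
            elif method in SINKS and any(a in tainted for a in args):
                flows.append(idx)                   # tainted data hits a sink
            elif any(a in tainted for a in args):
                tainted.add(dest)                   # conservative propagation
        elif op == "move":
            _, dest, src = ins
            if src in tainted:
                tainted.add(dest)
        elif op == "concat":
            _, dest, a, b = ins
            if a in tainted or b in tainted:
                tainted.add(dest)
    return flows
```

A real points-to analysis would additionally track heap objects and aliases across methods; this sketch only shows how taint flows through variable copies and string concatenation into a query sink.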
Abstract: Nowadays, we are facing network threats that cause enormous damage to the Internet community day by day. In this situation, more and more people try to protect their network security using traditional mechanisms such as firewalls, Intrusion Detection Systems, etc. Among them, the honeypot is a versatile tool for a security practitioner; honeypots are tools that are meant to be attacked or interacted with in order to gain more information about attackers, their motives, and their tools. In this paper, we describe the usefulness of low-interaction and high-interaction honeypots and compare them. We then propose a hybrid honeypot architecture that combines low- and high-interaction honeypots to mitigate their drawbacks. In this architecture, the low-interaction honeypot is used as a traffic filter. Activities like port scanning can be effectively detected by the low-interaction honeypot and stopped there. Traffic that cannot be handled by the low-interaction honeypot is handed over to the high-interaction honeypot. In this case, the low-interaction honeypot acts as a proxy, whereas the high-interaction honeypot offers the optimal level of realism. To protect the high-interaction honeypot from infections, a containment environment (VMware) is used.
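The filtering role of the low-interaction honeypot can be sketched as a toy decision function; the scan threshold and action names below are illustrative assumptions, not part of the proposed architecture:

```python
from collections import defaultdict

SCAN_PORT_THRESHOLD = 10   # distinct ports before a source counts as a scanner

class LowInteractionFilter:
    """Toy front-end filter: absorb scans locally, hand real sessions
    over to the high-interaction honeypot (all names are illustrative)."""

    def __init__(self):
        self.ports_seen = defaultdict(set)

    def handle(self, src_ip, dst_port, payload):
        self.ports_seen[src_ip].add(dst_port)
        if len(self.ports_seen[src_ip]) >= SCAN_PORT_THRESHOLD:
            return "log-and-drop"                 # port scan: stop it here
        if not payload:
            return "emulate-banner"               # bare connect: cheap emulation
        return "forward-to-high-interaction"      # real interaction: hand over
```

This mirrors the division of labor in the abstract: cheap, scripted responses for high-volume probes, with only payload-bearing sessions forwarded to the contained high-interaction system.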
Abstract: Web sites are rapidly becoming the preferred medium for our daily tasks such as information search, company presentation, shopping, and so on. At the same time, we live in a period where visual appearance plays an increasingly important role in our daily life. In spite of designers' efforts to develop web sites that are both user-friendly and attractive, it is difficult to ensure the outcome's aesthetic quality, since visual appearance is a matter of individual perception and opinion. In this study, we attempt to develop an automatic system for the aesthetic evaluation of web pages, which are the building blocks of web sites. Based on image processing techniques and artificial neural networks, the proposed method is able to categorize an input web page according to its visual appearance and aesthetic quality. The employed features are multiscale/multidirectional textural and perceptual color properties of the web pages, fed to a perceptron ANN that has been trained as the evaluator. The method is tested on university web sites, and the results suggest that it performs well in web page aesthetic evaluation tasks, with around 90% correct categorization.
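As a sketch of the evaluator component only, a classic perceptron learning rule over precomputed feature vectors might look like the following; the extraction of textural and color features from page screenshots is not shown, and the feature interpretation is our assumption:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Train a single-layer perceptron; X is (n, d) features (e.g. texture
    energies and color moments of a page), y holds 0/1 class labels."""
    w = np.zeros(X.shape[1] + 1)                  # last entry is the bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            if pred != yi:
                w += lr * (yi - pred) * xi        # perceptron update rule
                errors += 1
        if errors == 0:
            break                                 # converged on the training set
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w > 0).astype(int)
```

On linearly separable training data the perceptron convergence theorem guarantees this loop terminates with zero training errors; a multi-class aesthetic categorization would need one such unit per category or a multilayer network.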
Abstract: The fundamental motivation of this paper is how gaze estimation can be utilized effectively in an application to games. In games, precise point-of-gaze estimation is not always the most important requirement for aiming at targets; the ability to move a cursor to an aiming target accurately is also significant. Moreover, from a game-producing point of view, separating the expression of head movement from that of gaze movement is sometimes advantageous for conveying a sense of presence: panning a background image according to head movement while moving a cursor according to gaze movement is a representative example. On the other hand, the widely used technique of point-of-gaze (POG) estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, the calculation of the pupil center requires relatively complicated image processing, so calculation delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed. Furthermore, a method to control a cursor using gaze movement as well as head movement is proposed. The proposed methods are evaluated using game-like applications; as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
Abstract: Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process.
In this paper, we study the novelty of discovered rules as a subjective measure of interestingness. We propose a hybrid approach based on both objective and subjective measures to quantify the novelty of discovered rules in terms of their deviations from known rules (knowledge). We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with some public datasets. The experimental results are promising.
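The categorization of a discovered rule by its deviation from a known rule can be illustrated with a toy function; the rule representation (antecedent set, consequent set) and the category names are our assumptions, not the paper's taxonomy:

```python
def deviation(rule, known):
    """Categorize a discovered rule against a known rule.

    Both rules are (antecedent_set, consequent_set) pairs; the four
    category labels below are illustrative."""
    a, c = rule
    ka, kc = known
    if a == ka and c == kc:
        return "conforming"              # no deviation from known knowledge
    if a == ka:
        return "unexpected-consequent"   # same conditions, different outcome
    if c == kc:
        return "unexpected-antecedent"   # same outcome, different conditions
    return "novel"                       # deviates in both parts
```

A threshold-based version would replace the equality tests with a similarity measure over the item sets and compare it against the user-specified threshold.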
Abstract: This paper addresses a stock-cutting problem with rotation of items and without the guillotine cutting constraint. To solve the large-scale problem effectively and efficiently, we propose a simple but fast heuristic algorithm. It is shown that this heuristic outperforms the latest published algorithms on large-scale problem instances.
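The paper's heuristic is not specified in the abstract; as a generic illustration of a fast constructive heuristic that exploits item rotation, a shelf-packing sketch could look like the following (this is our stand-in, not the proposed algorithm):

```python
def shelf_pack(sheet_w, items):
    """Toy shelf heuristic: rotate each item so its longer side is
    horizontal (when that fits the sheet), sort by height descending,
    then fill shelves left to right, opening a new shelf when full.
    Returns placements [(x, y, w, h)] and the total height used."""
    rects = []
    for w, h in items:
        w, h = max(w, h), min(w, h)       # rotation: longer side horizontal
        if w > sheet_w:
            w, h = h, w                   # must stand upright to fit at all
        rects.append((w, h))
    rects.sort(key=lambda r: r[1], reverse=True)   # tallest first
    placements, x, shelf_y, shelf_h = [], 0, 0, 0
    for w, h in rects:
        if x + w > sheet_w:               # current shelf full: open a new one
            shelf_y += shelf_h
            x, shelf_h = 0, 0
        placements.append((x, shelf_y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, shelf_y + shelf_h
```

Shelf heuristics run in O(n log n) time, which is what makes this family of methods attractive for large-scale instances, at the cost of some wasted space above shorter items on each shelf.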
Abstract: Recent research on human wayfinding has focused mainly on mental representations rather than on the processes of wayfinding. The objective of this paper is to demonstrate the rationale behind applying the multi-agent simulation paradigm to the modeling of rescue-team wayfinding, in order to develop a computational theory of perceptual wayfinding in crisis situations using image schemata and affordances, which explains how people find a specific destination in an unfamiliar building such as a hospital. The hypothesis of this paper is that successful navigation is possible if the agents are able to make the correct decisions through well-defined cues in critical cases; accordingly, the design of the building signage is evaluated through multi-agent-based simulation. In addition, a special case of wayfinding in a building, finding one's way through three hospitals, is used to demonstrate the model. Thereby, the total rescue time for a rescue operation during a building fire is computed. This paper discusses the computed rescue times for various signage localizations and provides experimental results for the optimization of building signage design. The most appropriate signage design resulted in the shortest total rescue time across the various situations.
Abstract: Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data is first and foremost. If there is much irrelevant and redundant information present, or noisy and unreliable data, then knowledge discovery during the training phase is more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, etc. The product of data pre-processing is the final training set. It would be ideal if a single sequence of data pre-processing algorithms had the best performance for every data set, but this does not happen. Thus, we present the most well-known algorithms for each step of data pre-processing, so that one can achieve the best performance for a given data set.
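A minimal pre-processing chain matching the steps listed above (cleaning, normalization, feature selection) might be sketched as follows, with one illustrative algorithm per step; the specific choices here are examples, not the best sequence for every data set:

```python
import numpy as np

def preprocess(X, var_threshold=1e-3):
    """A minimal data pre-processing chain producing a training matrix.

    Returns (X_processed, keep_mask), where keep_mask marks which of the
    original feature columns survived selection."""
    # 1. Cleaning: drop rows containing missing values (NaN).
    X = X[~np.isnan(X).any(axis=1)]
    # 2. Normalization: min-max scale each feature to [0, 1].
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)  # guard constant columns
    X = (X - mins) / span
    # 3. Feature selection: keep features whose variance exceeds a threshold,
    #    discarding (near-)constant, uninformative columns.
    keep = X.var(axis=0) > var_threshold
    return X[:, keep], keep
```

Swapping any single step (e.g. z-score normalization instead of min-max, or mutual-information-based selection instead of a variance filter) changes the final training set, which is exactly why no one fixed sequence is best for every data set.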