Abstract: Web usage mining is an interesting application of data
mining which provides insight into customer behaviour on the Internet. An important technique for discovering user access and navigation trails is sequential pattern mining. One of the
key challenges in web access pattern mining is tackling the problem
of mining richly structured patterns. This paper proposes a novel
model called Web Access Patterns Graph (WAP-Graph) to represent all of the access patterns from web mining graphically. WAP-Graph
also motivates the search for new structural relation patterns, i.e. Concurrent Access Patterns (CAP), to identify and predict more
complex web page requests. Corresponding CAP mining and modelling methods are proposed and shown to be effective in the
search for and representation of concurrency between access patterns
on the web. From experiments conducted on large-scale synthetic
sequence data as well as real web access data, it is demonstrated that
CAP mining provides a powerful method for structural knowledge discovery, which can be visualised through the CAP-Graph model.
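The sequential pattern mining step that this abstract builds on can be illustrated with a minimal sketch. The following Python snippet is not the WAP-Graph or CAP algorithm from the paper; it merely counts ordered page subsequences across sessions with an apriori-style enumeration (practical only for short sessions), and the session data and support threshold are invented for illustration.

```python
from itertools import combinations
from collections import Counter

def frequent_access_patterns(sessions, min_support, max_len=3):
    """Count ordered page subsequences (not necessarily contiguous)
    across sessions and keep those meeting the support threshold."""
    counts = Counter()
    for session in sessions:
        seen = set()
        for length in range(1, max_len + 1):
            # every ordered subsequence of the session, counted once per session
            for idx in combinations(range(len(session)), length):
                seen.add(tuple(session[i] for i in idx))
        counts.update(seen)
    return {pattern: c for pattern, c in counts.items() if c >= min_support}

# Toy web log: each list is one user's sequence of page requests
sessions = [["home", "search", "item", "cart"],
            ["home", "item", "cart"],
            ["home", "search", "item"]]
print(frequent_access_patterns(sessions, min_support=2))
```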
Abstract: A new target detection technique is presented in this
paper for the identification of small boats in coastal surveillance. The
proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene to separate any
objects present in the scene from the background. The preprocessing
step results in an image having only the foreground objects, such as
boats, trees and other cluttered regions, and hence reduces the search
region for the correlation step significantly. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform
correlator (SPFJTC) technique, which produces a single, delta-like
correlation peak for each potential target present in the input scene. A
post-processing step uses the peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results are presented to show that the
proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.
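The APT and SPFJTC stages are specific to the paper, but the overall flow can be sketched: separate foreground objects by thresholding, then score a correlation plane with a peak-to-clutter ratio. The snippet below is only an illustrative stand-in; the iterative threshold loop, window size and tolerance are assumptions and do not reproduce the actual APT or SPFJTC formulations.

```python
import numpy as np

def separate_foreground(image, n_steps=8, tol=0.01):
    """Raise a global threshold progressively until the foreground
    fraction stabilises (illustrative stand-in for the APT step)."""
    prev_frac, mask = 1.0, image > image.min()
    for level in np.linspace(image.mean(), image.max(), n_steps):
        mask = image > level
        frac = mask.mean()
        if prev_frac - frac < tol:      # foreground fraction has stabilised
            break
        prev_frac = frac
    return image * mask                  # background suppressed

def peak_to_clutter_ratio(corr_plane, win=5):
    """PCR = correlation peak divided by the mean response outside a
    small window around the peak; a high PCR suggests a genuine target."""
    r, c = np.unravel_index(np.argmax(corr_plane), corr_plane.shape)
    clutter_mask = np.ones_like(corr_plane, dtype=bool)
    clutter_mask[max(0, r - win):r + win + 1, max(0, c - win):c + win + 1] = False
    return corr_plane[r, c] / (corr_plane[clutter_mask].mean() + 1e-12)
```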
Abstract: Measurement of the quality of image compression is important for image processing applications. In this paper, we propose an objective image quality assessment to measure the quality of gray-scale compressed images, one that correlates well with subjective quality measurement (MOS) and takes the least time. The new objective image quality measurement is developed from a few fundamental objective measurements for evaluating compressed image quality based on JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is established. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, the Structural Content Laplacian Mean Square Error (SCLMSE), are suitable measurements for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, rating compressed image quality from 1 to 5 (unacceptable to excellent quality).
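Of the measurements named above, the Maximum Difference (MD) is a standard metric (the largest absolute pixel error), so a short reference implementation can be given; the SCLMSE measure is specific to the paper and is not reproduced. The 1-to-5 rescaling below is a generic linear mapping with assumed bounds, not the calibration used in the paper.

```python
import numpy as np

def maximum_difference(reference, compressed):
    """Maximum Difference (MD): largest absolute pixel error between the
    reference and the compressed gray-scale image."""
    diff = np.abs(reference.astype(float) - compressed.astype(float))
    return diff.max()

def to_quality_scale(value, worst, best):
    """Linearly map an objective score onto a 1-5 scale
    (1 = unacceptable, 5 = excellent); worst/best bounds are assumptions."""
    value = np.clip(value, min(worst, best), max(worst, best))
    return 1.0 + 4.0 * (value - worst) / (best - worst)
```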
Abstract: During recent years, the traditional learning
approaches have undergone fundamental changes due to the
emergence of new technologies such as multimedia, hypermedia and
telecommunication. E-learning is a modern world phenomenon that
has come into existence in the information age and in a knowledge-based society. E-learning has developed significantly within a short period of time. It is therefore of great significance to secure information, allow trusted access and prevent unauthorized access. Making use of individuals' physiological or behavioral (biometric) properties is a reliable method of securing information. Among the biometrics, the fingerprint is the most widely accepted, and most countries use it as an efficient method of identification. This article provides a new method for fingerprint comparison based on pattern recognition and image processing techniques. To verify fingerprints, the shortest-distance method is used together with a multilayer perceptron neural network operating on minutiae. This method is highly accurate in the extraction of minutiae, accelerates comparison by eliminating false minutiae, and is more reliable than methods that merely use directional images.
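The shortest-distance matching of minutiae mentioned above can be sketched as follows. This is only an illustration of nearest-neighbour pairing of minutiae coordinates with an assumed distance tolerance; the paper's full pipeline, including false-minutiae elimination and the multilayer perceptron classifier, is not reproduced here.

```python
import numpy as np

def minutiae_match_score(minutiae_a, minutiae_b, max_dist=12.0):
    """Pair each minutia in A with its nearest unused minutia in B
    (shortest-distance matching) and return the fraction matched.
    minutiae_*: arrays of (x, y) positions; max_dist is an assumed tolerance."""
    a, b = np.asarray(minutiae_a, float), np.asarray(minutiae_b, float)
    if len(a) == 0 or len(b) == 0:
        return 0.0
    used = np.zeros(len(b), dtype=bool)
    matched = 0
    for point in a:
        dists = np.linalg.norm(b - point, axis=1)
        dists[used] = np.inf                 # each minutia in B is used at most once
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            used[j] = True
            matched += 1
    return matched / max(len(a), len(b))
```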
Abstract: Internet telephony is a type of Internet communication in which mutual communication is realized by establishing sessions. The Session Initiation Protocol (SIP) is used to establish sessions between end-users. Over unreliable transport (UDP), a SIP message must be retransmitted when it is lost. These retransmissions increase the load on the SIP signaling network and sometimes lead to performance degradation when the network is overloaded. This paper proposes an overload control scheme for a SIP signaling network to protect it from performance degradation. By introducing two thresholds in the queue of a SIP proxy server, the server detects congestion. Once congestion is detected, the SIP signaling network restricts the setup of new calls. The proposed overload control is evaluated using the network simulator (ns-2), and the simulation results show that it works well.
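The two-threshold congestion detection described above can be sketched as a simple hysteresis rule on the proxy queue. The threshold values and the decision to reject only new-call (INVITE) messages are assumptions for illustration; the paper's actual control and its ns-2 evaluation are not reproduced.

```python
class SipProxyQueue:
    """Two-threshold congestion detection for a SIP proxy queue: congestion
    is declared when the queue length exceeds the high threshold and cleared
    only when it falls below the low threshold; new calls (INVITE messages)
    are rejected while congested."""

    def __init__(self, high=80, low=40):
        self.high, self.low = high, low      # assumed threshold values
        self.queue = []
        self.congested = False

    def _update_state(self):
        if len(self.queue) >= self.high:
            self.congested = True
        elif len(self.queue) <= self.low:
            self.congested = False

    def offer(self, message):
        """Return True if the message is accepted into the queue."""
        self._update_state()
        if self.congested and message.get("method") == "INVITE":
            return False                     # restrict new call setup
        self.queue.append(message)
        return True

    def take(self):
        """Dequeue the next message for processing (None if empty)."""
        msg = self.queue.pop(0) if self.queue else None
        self._update_state()
        return msg
```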
Abstract: This paper describes a proposed support system which
enables application designers to effectively create VR applications
using multiple haptic APIs. When VR designers create
applications, it is often difficult to handle and understand many
parameters and functions that have to be set in the application program
using documentation manuals only. This complication may disrupt
creative imagination and result in inefficient coding. We therefore previously proposed
a support application that improved the efficiency of VR
application development and provided interactive components for
confirming operations with haptic feedback in advance.
In this paper, we describe improvements to our previously proposed
support application, which is applicable to multiple APIs and haptic
devices, and evaluate the new application by having participants
complete a VR program. Results from a preliminary experiment suggest
that our application facilitates creation of VR applications.
Abstract: In this work, the primary compressive strength
components of human femur trabecular bone are qualitatively
assessed using image processing and wavelet analysis. The Primary
Compressive (PC) component in planar radiographic femur trabecular
images (N=50) is delineated by a semi-automatic image processing
procedure. An automatic threshold binarization algorithm is employed to
recognize the presence of mineralization in the digitized images. The
qualitative parameters such as apparent mineralization and total area
associated with the PC region are derived for normal and abnormal
images. The two-dimensional discrete wavelet transform is utilized
to obtain appropriate features that quantify texture changes in medical
images. The normal and abnormal samples of the human femur are
comprehensively analyzed using the Haar wavelet. Six statistical
parameters, namely the mean, median, mode, standard deviation, mean
absolute deviation and median absolute deviation, are derived at level
4 of the decomposition for both the approximation and horizontal wavelet
coefficients. The correlation coefficients of the various wavelet-derived
parameters with the normal and abnormal groups are estimated for both the
approximation and horizontal coefficients. In almost all cases
the abnormal samples show a higher degree of correlation than the normal ones. Further,
the parameters derived from the approximation coefficients show more
correlation than those derived from the horizontal coefficients. The
mean and median computed at the output of the level 4 Haar
wavelet channel were found to be useful predictors for delineating the
normal and the abnormal groups.
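The wavelet feature extraction described above can be sketched with the PyWavelets library: a 4-level 2-D Haar decomposition followed by the six statistics, computed on the approximation and horizontal detail coefficients. The mode is estimated from a histogram here because the coefficients are real-valued; this choice, and the bin count, are assumptions rather than the paper's exact procedure.

```python
import numpy as np
import pywt  # PyWavelets

def level4_haar_features(image):
    """Decompose a grey-scale image with a 4-level 2-D Haar DWT and return
    the six statistics for the level-4 approximation and horizontal detail
    coefficients."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), "haar", level=4)
    approx = coeffs[0].ravel()          # level-4 approximation coefficients
    horizontal = coeffs[1][0].ravel()   # level-4 horizontal detail coefficients

    def stats(c):
        hist, edges = np.histogram(c, bins=32)
        mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
        return {
            "mean": c.mean(),
            "median": np.median(c),
            "mode": mode,                                  # histogram-based estimate
            "std": c.std(),
            "mean_abs_dev": np.mean(np.abs(c - c.mean())),
            "median_abs_dev": np.median(np.abs(c - np.median(c))),
        }

    return {"approximation": stats(approx), "horizontal": stats(horizontal)}
```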
Abstract: In blended learning environments, the Internet can be combined with other technologies. The aim of this research was to design, introduce and validate a model to support synchronous and asynchronous activities by managing content domains in an Adaptive Hypermedia System (AHS). The application is based on information recovery techniques, clustering algorithms and adaptation rules to adjust the user's model to contents and objects of study. This system was applied to blended learning in higher education. The research strategy used was the case study method. Empirical studies were carried out on courses at two universities to validate the model. The results of this research show that the model had a positive effect on the learning process. The students indicated that the synchronous and asynchronous scenario is a good option, as it involves a combination of work with the lecturer and the AHS. In addition, they gave positive ratings to the system and stated that the contents were adapted to each user profile.
Abstract: Knowledge bases are basic components of expert
systems or intelligent computational programs. Knowledge bases
provide the knowledge and events that serve deduction, computation and
control. Therefore, the research and development of
models for knowledge representation play an important role in
computer science, especially in Artificial Intelligence and
intelligent educational software. In this paper, an extensive
deduction computational model is proposed for designing knowledge
bases whose attributes may be real values or functional values.
The system can also solve problems based on these knowledge bases.
Moreover, the models and algorithms are applied to produce
educational software for solving alternating-current problems or
sets of equations automatically.
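As an illustration of solving a set of equations automatically, the kind of task the educational software addresses, a symbolic solver such as SymPy can be used; the circuit values below are invented and this is not the paper's deduction model.

```python
import sympy as sp

# Unknowns: phasor current I and impedance Z of a simple series AC circuit
I, Z = sp.symbols("I Z")
V, R, XL = 10, 3, 4                    # assumed source voltage, resistance, reactance

equations = [
    sp.Eq(Z, R + sp.I * XL),           # series impedance R + j*X_L
    sp.Eq(V, I * Z),                   # Ohm's law for the phasors
]
solution = sp.solve(equations, [I, Z], dict=True)[0]
print(solution[Z], sp.Abs(solution[I]))   # impedance and current magnitude (|I| = 2 A)
```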
Abstract: Many natural language expressions are ambiguous, and
need to draw on other sources of information to be interpreted.
Whether the Arabic word تعاون is interpreted as a noun or a verb
depends on the presence of contextual cues. To interpret words we
need to be able to discriminate between different usages. This paper
proposes a hybrid of rule-based and machine-learning methods for
tagging Arabic words. Because an Arabic word may be
composed of a stem plus affixes and clitics, a small number of rules
dominate performance (affixes include inflectional markers for
tense, gender and number; clitics include some prepositions,
conjunctions and others). Tagging is closely related to the notion of
word class used in syntax. The method is based firstly on rules (which
consider the position, the ending of a word, and patterns);
anomalies are then corrected by adopting a memory-based learning
(MBL) method. Memory-based learning is an efficient method for
integrating various sources of information and handling exceptional
data in natural language processing tasks. Secondly, the
exceptional cases of the rules are checked, and more information is made available to
the learner for treating those cases. To evaluate the
proposed method, and to assess the contribution of the various sources of
information to learning, a number of experiments have been run.
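The hybrid idea, rules first and a memory-based learner for the exceptions, can be sketched very roughly as below. The features, the single toy rule, the tiny lexicon and the use of a 1-nearest-neighbour classifier as the memory-based learner are all invented for illustration and greatly simplify the Arabic morphology handled in the paper.

```python
from sklearn.neighbors import KNeighborsClassifier

def word_features(word):
    """Crude character features standing in for affix/clitic cues."""
    return [len(word), ord(word[0]) % 31, ord(word[-1]) % 31,
            ord(word[-2]) % 31 if len(word) > 1 else 0]

def rule_tag(word):
    """Toy rule: the future-marker prefix marks a verb; None = no rule fired."""
    return "VERB" if word.startswith("سي") else None

# Memory-based learner: a nearest-neighbour classifier keeps the training
# instances in memory and handles the cases the rules miss
train_words = ["كتاب", "مدرسة", "كتب", "تعاون"]
train_tags  = ["NOUN", "NOUN",  "VERB", "NOUN"]
mbl = KNeighborsClassifier(n_neighbors=1).fit(
    [word_features(w) for w in train_words], train_tags)

def tag(word):
    return rule_tag(word) or mbl.predict([word_features(word)])[0]

print(tag("سيكتب"), tag("كتاب"))   # rule fires for the first word, MBL for the second
```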
Abstract: There are a number of different cars for transferring hundreds of close contacts of swine influenza patients to hospital, and we need to carefully assign the passengers to those cars in order to minimize the risk of influenza spreading during transportation. The paper presents an approach that straightforwardly obtains the optimal solution of the relaxed problems, and develops two iterative improvement algorithms to tackle the general problem effectively.
Abstract: Expression data analysis is based mostly on the
statistical approaches that are indispensable for the study of
biological systems. Large amounts of multidimensional data resulting
from the high-throughput technologies are not completely served by
biostatistical techniques and are usually complemented with visual,
knowledge discovery and other computational tools. In many cases,
in biological systems we only speculate on the processes that are
causing the changes, and it is the visual explorative analysis of data
during which a hypothesis is formed. We would like to show the
usability of multidimensional visualization tools and promote their
use in life sciences. We survey and demonstrate some of the
multidimensional visualization tools used in data
exploration, such as parallel coordinates and RadViz, and we extend
them by combining them with the self-organizing map algorithm. We
use a time course data set of transitional cell carcinoma of the bladder
in our examples. Analysis of data with these tools has the potential to
uncover additional relationships and non-trivial structures.
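Both projections mentioned above are available as one-line helpers in pandas, so a minimal, self-contained example on a toy expression matrix is given below; the data are synthetic and the combination with a self-organizing map described in the abstract is not shown.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates, radviz

# Toy "expression" matrix: 40 genes x 4 time points, with two artificial groups
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(40, 4)),
                    columns=["t0", "t6", "t12", "t24"])
data["group"] = np.where(data["t24"] > data["t0"], "up", "down")

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
parallel_coordinates(data, "group", ax=axes[0])  # each gene becomes a polyline
radviz(data, "group", ax=axes[1])                # dimensions as anchors on a circle
plt.tight_layout()
plt.show()
```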
Abstract: Biometric measures of one kind or another have been
used to identify people since ancient times, with handwritten
signatures, facial features, and fingerprints being the traditional
methods. Of late, systems have been built that automate the task of
recognition using these methods and newer ones, such as hand
geometry, voiceprints and iris patterns. These systems have different
strengths and weaknesses. This work is organized in two sections. In
the first section, we present an analytical and comparative study
of common biometric techniques; the performance of each
is reviewed and then tabulated. The second section
covers the actual implementation of the techniques under
consideration, carried out using the state-of-the-art tool
MATLAB, which helps to portray the corresponding
results and effects effectively.
Abstract: Starting from a biologically inspired framework, Gabor filters were built up from retinal filters via LMSE algorithms. A subset of retinal filter kernels was chosen to form a particular Gabor filter by using a weighted sum. One-dimensional optimization approaches were shown to be inappropriate for the problem. All model parameters were fixed by biological or image processing constraints. Detailed analysis of the optimization procedure led to the introduction of a minimization constraint. Finally, quantization of the weighting factors was investigated. This resulted in an optimized cascaded structure of a Gabor filter bank implementation with lower computational cost.
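The construction described, a Gabor filter formed as a weighted sum of simpler kernels with the weights fitted by a least-mean-square-error criterion, can be sketched as below. The Gaussian basis kernels, grid spacing and kernel sizes are assumptions standing in for the retinal filters; the paper's biological constraints, minimization constraint and quantization step are not reproduced.

```python
import numpy as np

def gabor_kernel(size=21, sigma=3.0, theta=0.0, wavelength=6.0):
    """Reference Gabor kernel (real part) to be approximated."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gaussian_kernel(size, sigma, cx, cy):
    """Shifted Gaussian standing in for a 'retinal' basis kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

size = 21
target = gabor_kernel(size).ravel()
# Basis: Gaussians centred on a coarse grid (assumed retinal sampling)
centres = [(cx, cy) for cx in range(-8, 9, 4) for cy in range(-8, 9, 4)]
basis = np.stack([gaussian_kernel(size, 2.0, cx, cy).ravel()
                  for cx, cy in centres], axis=1)

weights, *_ = np.linalg.lstsq(basis, target, rcond=None)   # LMSE fit of the weights
approx = basis @ weights
print("relative fit error:",
      np.linalg.norm(approx - target) / np.linalg.norm(target))
```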
Abstract: In-place sorting algorithms play an important role in many fields such as very large database systems, data warehouses, data mining, etc. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the unsorted input array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted. The first phase requires linear time, while, in the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. The algorithm performs, in the worst case, for an array of size n, O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and of the required number of moves is presented, along with the auxiliary storage requirements of the proposed algorithm.
Abstract: Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on an assembly of components available on a local area network or on the net. These components must be located and identified in terms of available services and communication protocols before any request. The first part of the article introduces the basic concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.
Abstract: In recent years image watermarking has become an
important research area in data security, confidentiality and image
integrity. Many watermarking techniques were proposed for medical
images. However, medical images, unlike most images, require
extreme care when embedding additional data within them, because
the additional information must not affect the image quality and
readability. Also, medical records, electronic or not, are bound by
medical secrecy; for that reason, the records must be confidential.
To fulfill those requirements, this paper presents a lossless
watermarking scheme for DICOM images. The proposed fragile
scheme combines two reversible techniques based on difference
expansion for hiding the patient's data and protecting the region of
interest (ROI), with tamper detection and recovery capability.
The patient's data are embedded into the ROI, while recovery data are
embedded into the region of non-interest (RONI). The experimental
results show that the original image can be exactly extracted from the
watermarked one when no tampering has occurred. In the case of a tampered ROI,
the tampered area can be localized and recovered with a high-quality
version of the original area.
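The difference-expansion primitive underlying the reversible techniques mentioned above can be illustrated on a single pixel pair (a Tian-style integer transform). Overflow/underflow handling, the ROI/RONI partitioning and the tamper detection and recovery logic are omitted; only the reversible embed/extract round trip is shown.

```python
def de_embed(x, y, bit):
    """Embed one bit into an integer pixel pair by difference expansion;
    overflow checks are omitted for brevity."""
    l = (x + y) // 2          # integer average, preserved by the transform
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference now carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(xw, yw):
    """Recover the embedded bit and the original pixel pair exactly."""
    l = (xw + yw) // 2
    h2 = xw - yw
    bit = h2 % 2
    h = h2 // 2               # undo the expansion
    return bit, (l + (h + 1) // 2, l - h // 2)

# Round trip on one pair
xw, yw = de_embed(130, 127, 1)
bit, (x0, y0) = de_extract(xw, yw)
assert (bit, x0, y0) == (1, 130, 127)
```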
Abstract: Text processing systems allow their users to
search for a string pattern in a given text. String matching is
fundamental to database and text processing applications. Every text
editor must contain a mechanism to search the current document for
arbitrary strings. Spelling checkers scan an input text for words in the
dictionary and reject any strings that do not match. We store our
information in databases so that it can later be retrieved,
and this retrieval can be done using various string matching
algorithms. This paper describes a new string matching algorithm
for various applications. The new algorithm has been designed with the
help of the Rabin-Karp matcher to improve the string matching process.
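Since the proposed algorithm is built with the help of the Rabin-Karp matcher, a minimal rolling-hash reference implementation is given below; the base and modulus are conventional choices, and the paper's own modifications are not reproduced.

```python
def rabin_karp(text, pattern, base=256, mod=1_000_003):
    """Return all start indices where pattern occurs in text, using a
    rolling polynomial hash with verification on every hash hit."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)          # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):                    # hashes of the pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if p_hash == t_hash and text[i:i + m] == pattern:   # rule out collisions
            hits.append(i)
        if i < n - m:                     # roll the window one character right
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))   # [0, 7]
```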
Abstract: A given polynomial, possibly with multiple roots, is
factored into several lower-degree distinct-root polynomials with
natural-order-integer powers. All the roots, including multiplicities,
of the original polynomial may be obtained by solving these lower-degree
distinct-root polynomials instead of the original high-degree
multiple-root polynomial directly.
The approach requires polynomial Greatest Common Divisor
(GCD) computation. A very simple and effective process, "monic
polynomial subtraction", cleverly converted from the "longhand
polynomial division" of the Euclidean algorithm, is employed. It
requires only simple elementary arithmetic operations without any
advanced mathematics.
Remarkably, the derived routine gives the expected results for
test polynomials of very high degree, such as p(x) = (x + 1)^1000.
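The underlying idea, splitting off distinct-root factors with GCDs of the polynomial and its derivative, can be checked quickly with SymPy; the snippet uses SymPy's built-in polynomial GCD and square-free decomposition rather than the paper's monic-polynomial-subtraction routine.

```python
import sympy as sp

x = sp.symbols("x")
p = sp.expand((x + 1)**3 * (x - 2)**2 * (x + 5))

# Distinct-root part: p / gcd(p, p') contains each root exactly once
g = sp.gcd(p, sp.diff(p, x))
distinct, _ = sp.div(p, g, x)
print(sp.factor(distinct))      # product of the distinct roots, each to the first power

# Full square-free decomposition: distinct-root factors with integer powers
print(sp.sqf_list(p))           # factors with multiplicities 1, 2 and 3
```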
Abstract: This paper gives an overview of the mapping
mechanism of SEAM, a methodology for the automatic generation of
knowledge models, and of its mapping onto Java code. It discusses the
rules that will be used to map the different components in the
knowledge model automatically onto Java classes, properties and
methods. The aim of developing this mechanism is to help in the
creation of a prototype which will be used to validate the knowledge
model which has been generated automatically. It will also help to
link the modeling phase with the implementation phase as existing
knowledge engineering methodologies do not provide proper
guidelines for the transition from the knowledge modeling phase to the
development phase. This will decrease the development overheads
associated with the development of Knowledge Based Systems.
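To give a flavour of what such a mapping rule produces, the snippet below renders a hypothetical knowledge-model concept as a Java class with private fields and getters. The concept structure and the rendering are invented for illustration and are not SEAM's actual mapping rules.

```python
def concept_to_java(concept):
    """Render a knowledge-model concept (name plus typed attributes) as a
    minimal Java class with private fields and getters; illustrative of the
    kind of mapping rule discussed, not SEAM's actual rules."""
    name = concept["name"]
    lines = [f"public class {name} {{"]
    for attr, jtype in concept["attributes"].items():
        lines.append(f"    private {jtype} {attr};")
    for attr, jtype in concept["attributes"].items():
        getter = "get" + attr[0].upper() + attr[1:]
        lines.append(f"    public {jtype} {getter}() {{ return {attr}; }}")
    lines.append("}")
    return "\n".join(lines)

# Hypothetical concept from a knowledge model
print(concept_to_java({"name": "Pump",
                       "attributes": {"pressure": "double", "status": "String"}}))
```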