Abstract: This paper presents an overview of the methodologies
and algorithms for statistical texture analysis of 2D images. Methods
for digital-image texture analysis are reviewed based on available
literature and research work either carried out or supervised by the
authors.
Abstract: In this paper, an extreme learning machine with an automatic segmentation algorithm is applied to the classification of heart disorders from heart sound signals. From continuous heart sound signals, the starting points of the first (S1) and second (S2) heart pulses are extracted and corrected by utilizing an inter-pulse histogram. From the corrected pulse positions, a single period of the heart sound signal is extracted and converted to a feature vector including the mel-scaled filter bank energy coefficients and the envelope coefficients of uniform-sized sub-segments. An extreme learning machine is used to classify the feature vector. In our cardiac disorder classification and detection experiments with 9 cardiac disorder categories, the proposed method shows significantly better performance than the multi-layer perceptron, support vector machine, and hidden Markov model; it achieves a classification accuracy of 81.6% and a detection accuracy of 96.9%.
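The envelope part of the feature vector described above can be sketched as follows. This is a minimal illustration of one plausible reading, not the paper's exact definition: the function name, the segment count, and the use of the peak absolute amplitude per sub-segment are our assumptions.

```python
def envelope_coefficients(signal, n_segments):
    """Split one heart-sound period into uniform sub-segments and take
    the peak absolute amplitude of each segment as its envelope value."""
    seg_len = len(signal) // n_segments
    coeffs = []
    for i in range(n_segments):
        seg = signal[i * seg_len:(i + 1) * seg_len]
        coeffs.append(max(abs(x) for x in seg))
    return coeffs
```

These coefficients would then be concatenated with the mel-scaled filter bank energies to form the classifier's input vector.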
Abstract: Ontologies play an important role in semantic web
applications; they are often developed by different groups and
continue to evolve over time. The knowledge in ontologies changes
so rapidly that applications become outdated if they continue to
use old versions, or unstable if they jump to new versions. Temporal
frames using frame versioning and slot versioning are used to
handle the dynamic nature of ontologies. The paper proposes new
tags and a restructured OWL format enabling applications to work
with either the old or the new version of an ontology. Gene Ontology,
a very dynamic ontology, has been used as a case study to explain
the OWL Ontology with Temporal Tags.
Abstract: Pseudorandom number generators based on linear
feedback shift registers (LFSRs) are very fast and easy to implement
in hardware and software, and are therefore popular and widely
used. However, LFSRs admit fairly easy cryptanalysis because of
their completely linear structure. In this paper, we propose a
stochastic generator, called the Random Feedback Shift Register
(RFSR), which uses a stochastic transformation (random block) with
one-way and non-linearity properties.
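The linearity that makes LFSRs cryptanalytically weak is visible in a minimal sketch of a Fibonacci LFSR: every output bit is an XOR (linear) function of the initial state bits. The bit width and tap positions below are illustrative, not taken from the paper.

```python
def lfsr(state, taps, nbits):
    """Fibonacci LFSR generator: output the low bit, XOR the tapped
    bits to form the feedback bit, and shift it into the top position.
    `state` must be a nonzero `nbits`-bit integer."""
    while True:
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out
```

With the primitive polynomial x^4 + x + 1 (taps at positions 0 and 1), a 4-bit register cycles through all 15 nonzero states, giving the maximal period 2^4 - 1.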
Abstract: It has been said that the “network is the system”.
This implies providing levels of service, reliability, predictability and
availability that are commensurate with, or better than, those that
individual computers provide today. Providing this requires
integrated network management for interconnected networks of
heterogeneous devices covering the local campus. In this paper
we present a framework to deal effectively with this issue. It
consists of the components, and the interactions between them,
that are required to perform service fault management. A real-world
scenario is used to derive the requirements, which have been applied
to the component identification. An analysis of existing frameworks
and approaches with respect to their applicability to the framework is
also carried out.
Abstract: In this paper we propose a novel Run Time Interface
(RTI) technique to provide an efficient environment for MPI jobs on
the heterogeneous architecture of PARAM Padma. It suggests an
innovative, unified framework for the job management interface
system in parallel and distributed computing. This approach employs
a proxy scheme. The implementation shows that the proposed RTI is
highly scalable and stable. Moreover, the RTI provides storage access
for MPI jobs on various operating system platforms and improves
data access performance through the high-performance C-DAC
Parallel File System (C-PFS). The performance of the RTI is
evaluated using standard HPC benchmark suites, and the
simulation results show that the proposed RTI performs well on
large-scale supercomputing systems.
Abstract: The amount of information on the Web is increasing
tremendously. A number of search engines have been developed for
searching Web information and retrieving relevant documents that
satisfy inquirers' needs. Search engines nevertheless return
irrelevant documents among the search results, since the search is
text-based rather than semantic-based. The information retrieval
research area has produced a number of approaches and
methodologies, such as profiling, feedback, query modification and
human-computer interaction, for improving search results.
Moreover, information retrieval has employed artificial intelligence
techniques and strategies, such as machine learning heuristics,
tuning mechanisms, user and system vocabularies and logical
theory, for capturing users' preferences and using them to guide the
search by semantic rather than syntactic analysis. Although valuable
improvements in search results have been recorded, surveys show
that search engine users are still not really satisfied with their results.
Using ontologies for semantic-based searching is likely the key
solution. Adopting a profiling approach and using ontology-base
characteristics, this work proposes a strategy for finding the exact
meaning of the query terms in order to retrieve information relevant
to user needs. The evaluation of the conducted experiments shows
the effectiveness of the suggested methodology, and a conclusion is
presented.
Abstract: In this paper, a method to detect multiple ellipses is presented. The technique is efficient and robust against incomplete ellipses due to partial occlusion, noise, missing edges and outliers. It is an iterative technique that finds and removes the best ellipse until no reasonable ellipse is found. At each run, the best ellipse is extracted from randomly selected edge patches, and its fitness is calculated and compared to a fitness threshold. The RANSAC algorithm is applied as the sampling process, together with Direct Least Squares fitting of ellipses (DLS) as the fitting algorithm. In our experiments, the method performs very well and is robust against noise and spurious edges on both synthetic and real-world image data.
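The iterative find-and-remove loop described above can be sketched generically. The sketch below uses a trivial two-point line fit as a stand-in for the paper's DLS ellipse fit, since only the loop structure is being illustrated; the function names, thresholds, and the line model are our assumptions.

```python
import random

def ransac_best_model(points, fit, error, sample_size, n_iter, tol):
    """Sample minimal subsets, fit a candidate model to each, and keep
    the candidate with the most inliers (points within `tol`)."""
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        model = fit(random.sample(points, sample_size))
        if model is None:
            continue
        inliers = [p for p in points if error(model, p) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

def extract_all(points, fit, error, sample_size, n_iter, tol, min_inliers):
    """Find-and-remove loop: extract the best-supported model, delete
    its inliers, and repeat until no candidate has enough support."""
    models, points = [], list(points)
    while len(points) >= sample_size:
        model, inliers = ransac_best_model(points, fit, error,
                                           sample_size, n_iter, tol)
        if model is None or len(inliers) < min_inliers:
            break
        models.append(model)
        points = [p for p in points if p not in inliers]
    return models

def fit_line(sample):
    """Stand-in for DLS ellipse fitting: a line through two points,
    returned as (nx, ny, c) with unit normal, i.e. nx*x + ny*y = c."""
    (x1, y1), (x2, y2) = sample
    nx, ny = y2 - y1, x1 - x2
    norm = (nx * nx + ny * ny) ** 0.5
    if norm == 0:
        return None
    nx, ny = nx / norm, ny / norm
    return nx, ny, nx * x1 + ny * y1

def line_error(model, p):
    nx, ny, c = model
    return abs(nx * p[0] + ny * p[1] - c)
```

Substituting a five-point DLS ellipse fit and an ellipse distance for `fit_line` and `line_error` recovers the structure the abstract describes.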
Abstract: Packet-switched data networks such as the Internet, which
have traditionally supported throughput-sensitive applications such
as email and file transfer, are increasingly supporting delay-sensitive
multimedia applications such as interactive video. These
delay-sensitive applications would often rather sacrifice some
throughput for better delay. Unfortunately, the current packet-switched
network does not offer choices, but instead provides a monolithic
best-effort service to all applications. This paper evaluates the Class
Based Queuing (CBQ), Coordinated Earliest Deadline First (CEDF),
Weighted Switch Deficit Round Robin (WSDRR) and RED-Boston
scheduling schemes, which are sensitive to the delay-bound
expectations of a variety of real-time applications, and proposes an
enhancement of WSDRR.
Abstract: Nowadays we face network threats that cause enormous
damage to the Internet community day by day. In this situation,
more and more people try to protect their network security using
traditional mechanisms such as firewalls and Intrusion Detection
Systems. Among these, the honeypot is a versatile tool for the
security practitioner: honeypots are tools that are meant to be
attacked or interacted with in order to gain more information about
attackers, their motives and their tools. In this paper, we describe
the usefulness of low-interaction and high-interaction honeypots
and compare them. We then propose a hybrid honeypot
architecture that combines low- and high-interaction honeypots to
mitigate the drawbacks of each. In this architecture, the
low-interaction honeypot is used as a traffic filter. Activities like
port scanning can be effectively detected by the low-interaction
honeypot and stopped there. Traffic that cannot be handled by the
low-interaction honeypot is handed over to the high-interaction
honeypot. In this case, the low-interaction honeypot acts as a proxy
whereas the high-interaction honeypot offers the optimal level of
realism. To protect the high-interaction honeypot from infection, a
containment environment (VMware) is used.
Abstract: Nowadays e-Learning is increasingly popular, especially
in Vietnam. In e-Learning, materials for studying are very important.
It is necessary to design knowledge base systems and expert
systems that support searching, querying and the solving of
problems. The ontology called the Computational Object
Knowledge Base Ontology (COKB-ONT) is a useful tool for
designing knowledge base systems in practice. In this paper, a
design method for knowledge base systems in education using
COKB-ONT is presented. We also present the design of a
knowledge base system that supports studying knowledge and
solving problems in higher mathematics.
Abstract: In this paper we present an offline system for the
recognition of handwritten numeric chains. Our work is divided
into two main parts. The first part is the realization of a recognition
system for isolated handwritten digits. Here the study is based
mainly on evaluating the performance of a neural network
trained with the gradient back-propagation algorithm. The
parameters used to form the input vector of the neural network are
extracted from the binary images of the digits by several methods:
the distribution sequence, the Barr features and the centred moments
of the different projections and profiles. The second part is the
extension of our system to the reading of handwritten numeric
chains consisting of a variable number of digits. The vertical
projection is used to segment the numeric chain into isolated digits,
and every digit (or segment) is presented separately to the input of
the system built in the first part (the recognition system for
isolated handwritten digits). The result of the recognition of the
numeric chain is displayed at the output of the global system.
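The vertical-projection segmentation step can be sketched as follows, assuming the chain is a binary image given as rows of 0/1 values; the representation and function name are our assumptions, not the paper's implementation.

```python
def segment_digits(image):
    """Split a binary image (list of rows of 0/1 pixels) into digit
    sub-images by cutting at columns whose vertical projection
    (count of ink pixels in the column) is zero."""
    width = len(image[0])
    proj = [sum(row[x] for row in image) for x in range(width)]
    segments, start = [], None
    for x, count in enumerate(proj):
        if count > 0 and start is None:
            start = x                       # a digit begins here
        elif count == 0 and start is not None:
            segments.append([row[start:x] for row in image])
            start = None                    # a digit just ended
    if start is not None:                   # digit touching right edge
        segments.append([row[start:] for row in image])
    return segments
```

Each returned sub-image would then be fed to the isolated-digit recognizer from the first part of the system.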
Abstract: Data mining is the extraction of knowledge from the
large sets of data generated by various data processing
activities. Frequent pattern mining is a very important task in
data mining. Previous approaches to generating frequent sets
generally adopt candidate generation and pruning techniques
to achieve the desired objective. This paper shows how the
different approaches achieve the objective of frequent mining,
along with the complexities required to perform the job. The
paper also examines a hardware cache-coherence approach to
improving the efficiency of this process. Data mining is helpful
in the generation of support systems in management,
bioinformatics, biotechnology, medical science, statistics,
mathematics, banking, networking and other computer-related
applications. This paper proposes the use of both the upward
and the downward closure property for the extraction of
frequent itemsets, which reduces the total number of scans
required for the generation of candidate sets.
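The downward closure property mentioned above (every subset of a frequent itemset must itself be frequent) underlies classical Apriori-style candidate generation and pruning. The sketch below shows that generic scheme, not the paper's combined upward/downward proposal.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classical Apriori: grow candidate itemsets level by level,
    pruning any k-itemset that has an infrequent (k-1)-subset
    (downward closure) before counting its support."""
    transactions = [set(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        # join step: combine frequent (k-1)-itemsets into k-candidates
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        # prune step: drop candidates with an infrequent (k-1)-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if support(c) >= min_support}
        result |= frequent
        k += 1
    return result
```

Each level costs one scan of the transactions; reducing the number of candidates, as the paper proposes, directly reduces the counting work per scan.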
Abstract: XML is an important standard for data exchange and
representation. Since relational databases are mature systems, using
them to support XML data can bring advantages. But storing XML in
a relational database introduces obvious redundancy that wastes disk
space, bandwidth and disk I/O when querying XML data. For
efficient storage and querying of XML, it is necessary to use
compressed XML data in the relational database. In this paper, a
compressed relational database technology supporting XML data is
presented. The original relational storage structure is well suited to
XPath query processing, and the compression method preserves this
feature. Besides traditional relational database techniques, additional
query processing technologies on compressed relations, and for
structures specific to XML, are presented. Technologies for XQuery
processing in the compressed relational database are also presented.
Abstract: Horizontal wells are proven to be better producers
because they can be extended for a long distance in the pay zone.
Engineers have the technical means to forecast the well productivity
for a given horizontal length. However, experience has shown that
the actual production rate is often significantly less than forecasted.
It is difficult, if not impossible, to identify the real reason why a
horizontal well is not producing what was forecasted. Often the
source of the problem lies in the drilling of the horizontal section,
such as permeability reduction in the pay zone due to mud invasion,
or snaky well patterns created during drilling. Although drillers aim
to drill a constant-inclination hole in the pay zone, the more frequent
outcome is a sinusoidal wellbore trajectory. The two factors that
play an important role in wellbore tortuosity are the inclination and
the side force at the bit. A constant-inclination horizontal well can
only be drilled if the bit face is maintained perpendicular to the
longitudinal axis of the bottom hole assembly (BHA) while keeping
the side force at the bit nil. This approach assumes that there is no
formation force at the bit. Hence, an appropriate BHA can be
designed if the bit side force and bit tilt are determined accurately.
The Artificial Neural Network (ANN) is superior to existing
analytical techniques for this task. In this study, neural networks
have been employed as a general approximation tool for estimating
the bit side forces. A number of samples are analyzed with the ANN
for the bit side force parameters, and the results are compared with
exact analysis. A Back-Propagation Neural network (BPN) is used to
approximate the bit side forces. The resulting low relative error on
the test set indicates the usability of the BPN in this area.
Abstract: We have applied a new accelerated algorithm for linear
discriminant analysis (LDA) to face recognition with a support
vector machine. The new algorithm has the advantage of optimal
selection of the step size. The gradient descent method and the new
algorithm have been implemented in software and evaluated on the
Yale Face Database B. The eigenfaces of these approaches have been
used to train a KNN classifier. The recognition rate of the new
algorithm is compared with that of gradient descent.
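For a quadratic objective, "optimal selection of the step size" has a closed form under steepest descent, which illustrates the idea the abstract refers to. This is a generic textbook scheme on f(x) = 0.5 x^T A x - b^T x, not the paper's accelerated LDA algorithm.

```python
def steepest_descent_quadratic(A, b, x, n_iter):
    """Steepest descent on f(x) = 0.5 x^T A x - b^T x with the exact
    (optimal) step size alpha = (r^T r) / (r^T A r), where
    r = b - A x is the residual (the negative gradient)."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v)))
                for i in range(len(M))]

    def dot(u, v):
        return sum(a * c for a, c in zip(u, v))

    for _ in range(n_iter):
        r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
        rAr = dot(r, matvec(A, r))
        if rAr == 0:          # exact minimizer reached
            break
        alpha = dot(r, r) / rAr   # exact line search along r
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x
```

Because alpha minimizes f exactly along the descent direction at each step, no step-size tuning is needed; fixed-step gradient descent, by contrast, can diverge or crawl depending on the chosen step.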
Abstract: Using maximal consistent blocks of the tolerance relation
on the universe of an incomplete decision table, the concepts of join
block and meet block are introduced and studied. Besides the
tolerance class, other blocks such as the tolerant kernel and the
compatible kernel of an object are also discussed. Upper and lower
approximations based on those blocks are defined as well. Default
definite decision rules acquired from an incomplete decision table are
proposed in the paper. An incremental algorithm to update default
definite decision rules is suggested for effective mining from
incomplete decision tables into which data is appended. Through an
example, we demonstrate how default definite decision rules based
on maximal consistent blocks, join blocks and meet blocks are
acquired, and how optimization is done with the support of the
discernibility matrix and discernibility function in the incomplete
decision table.
Abstract: Vehicular communications play a substantial role in providing safety in transportation by means of safety message exchange. Researchers have proposed several solutions for securing safety messages. Protocols based on a fixed key infrastructure are more efficient in implementation and maintain stronger security in comparison with dynamic structures. These protocols utilize zone partitioning to establish distinct key infrastructures under Certificate Authority (CA) supervision in different regions. Secure anonymous broadcasting (SAB) is one of these protocols; it preserves most security aspects but has some deficiencies in practice. A very important issue is the region change of a vehicle due to its mobility. Changing regions leads to a change of CA and the necessity of obtaining a new key set to resume communication. In this paper, we propose solutions for informing vehicles about a region change so that they obtain the new key set before entering the next region. This hinders attackers' intrusion and packet loss, and lessens time delay. We also secure key request messages by confirming the old CA's public key on the message; hence stronger security for safety message broadcasting is attained.
Abstract: The functioning of a biometric system depends in large
part on the performance of the similarity measure function.
Frequently a generalized similarity distance measure, such as the
Euclidean distance or the Mahalanobis distance, is applied to the
task of matching biometric feature vectors. However, the accuracy of
a biometric system can often be greatly improved by designing a
customized matching algorithm optimized for a particular biometric
application. In this paper we propose a tailored similarity measure
function for behavioral biometric systems based on expert
knowledge of the feature-level data in the domain. We compare the
performance of the proposed matching algorithm to that of other
well-known similarity distance functions and demonstrate its
superiority in the chosen domain.
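For reference, the two generic baselines named above can be written as follows. The diagonal-covariance Mahalanobis variant hints at how a distance can be tailored to a domain by weighting features; this is our illustration, not the paper's proposed measure.

```python
def euclidean(u, v):
    """Plain Euclidean distance: every feature weighted equally."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def diagonal_mahalanobis(u, v, variances):
    """Mahalanobis distance with a diagonal covariance: each feature's
    squared difference is divided by that feature's variance, so noisy,
    high-variance features contribute less to the match score."""
    return sum((a - b) ** 2 / s for a, b, s in zip(u, v, variances)) ** 0.5
```

A fully tailored measure, as the paper proposes, goes further by encoding expert knowledge of the feature-level data rather than only second-order statistics.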
Abstract: In this paper, optimization of routing in ad-hoc
networks is surveyed and a new method for reducing the complexity
of routing algorithms is suggested. Keeping a binary matrix at each
node in the network, and updating it once routing is done, lets
nodes avoid repeating the routing protocol for each data transfer.
The suggested algorithm can reduce the complexity of routing to the
least amount possible.
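One way to read the binary-matrix idea is as a per-node route cache: a discovered route is recorded once and reused instead of re-running route discovery for every transfer. The sketch below is entirely our illustration of such a cache, not the paper's exact scheme.

```python
class RouteCache:
    """Hypothetical per-node route cache: `known[src][dst]` is a binary
    flag marking that a route was already discovered, and `routes`
    stores the corresponding path for reuse."""

    def __init__(self, n_nodes):
        self.known = [[0] * n_nodes for _ in range(n_nodes)]
        self.routes = {}

    def lookup(self, src, dst):
        """Return the cached path, or None if discovery is still needed."""
        return self.routes.get((src, dst)) if self.known[src][dst] else None

    def store(self, src, dst, path):
        """Record a route once the routing protocol has found it."""
        self.known[src][dst] = 1
        self.routes[(src, dst)] = path
```

Under this reading, the matrix lookup replaces a full route-discovery flood whenever the flag is already set, which is where the claimed complexity reduction would come from.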