Abstract: The authors have been developing several models
based on artificial neural networks, linear regression, Box-Jenkins
methodology and ARIMA models to predict a tourism time series.
The time series consists of the “Monthly Number of Guest Nights
in the Hotels" of one region. Several comparisons between the
different model types have been carried out, as well as between the
features used as inputs to the models. The Artificial Neural Network
(ANN) models have consistently ranked among the best performers.
The feed-forward architecture is usually chosen owing to its
widespread application and good results. In this paper the authors
compare different ANN architectures using exactly the same inputs.
The traditional feed-forward architecture, a cascade-forward
architecture, a recurrent Elman architecture and a radial basis
function architecture are discussed and compared on the task of
predicting the mentioned time series.
Abstract: Distributed denial-of-service (DDoS) attacks pose a
serious threat to network security. There have been a lot of
methodologies and tools devised to detect DDoS attacks and reduce
the damage they cause. Still, most of the methods cannot
simultaneously achieve (1) efficient detection with a small number of
false alarms and (2) real-time transfer of packets. Here, we introduce
a method for proactive detection of DDoS attacks, by classifying the
network status, to be utilized in the detection stage of the proposed
anti-DDoS framework. Initially, we analyse the DDoS architecture
and obtain details of its phases. Then, we investigate the procedures
of DDoS attacks and select variables based on these features. Finally,
we apply the k-nearest neighbour (k-NN) method to classify the
network status into each phase of a DDoS attack. The simulation
results show that each phase of the attack scenario is classified well
and that DDoS attacks can be detected at an early stage.
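The k-NN classification of network status can be sketched as follows; the traffic features, phase labels and training values here are hypothetical placeholders, not the paper's data:

```python
import math
from collections import Counter

# Hypothetical training data: each sample is a vector of traffic
# features (e.g. packet rate, source-IP entropy) with a DDoS-phase label.
TRAIN = [
    ((10.0, 0.2), "normal"),
    ((12.0, 0.3), "normal"),
    ((40.0, 0.9), "flooding"),
    ((45.0, 0.8), "flooding"),
    ((25.0, 0.6), "recruitment"),
    ((28.0, 0.7), "recruitment"),
]

def classify(sample, k=3):
    """Label a network-status sample by majority vote among its k
    nearest training samples (Euclidean distance)."""
    nearest = sorted(TRAIN, key=lambda tl: math.dist(sample, tl[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

With such a classifier, each incoming traffic snapshot is assigned to one phase, and an alarm can be raised as soon as an early attack phase (rather than the final flood) is observed.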
Abstract: The identification and classification of weeds are of
major technical and economical importance in the agricultural
industry. Automating these activities using plant features such as
shape, color and texture makes a weed control system feasible. The
goal of this paper is to
build a real-time, machine vision weed control system that can detect
weed locations. In order to accomplish this objective, a real-time
robotic system is developed to identify and locate outdoor plants
using machine vision technology and pattern recognition. The
algorithm is developed to classify images into broad and narrow
classes for real-time selective herbicide application. The developed
algorithm has been tested on weeds at various locations, and the tests
show that the algorithm is very effective in weed identification.
Further, the results show very reliable performance
on weeds under varying field conditions. The analysis of the results
shows over 90 percent classification accuracy over 140 sample
images (broad and narrow) with 70 samples from each category of
weeds.
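A broad-versus-narrow shape test can be sketched as follows; the fill-ratio feature and its threshold are our illustration, not the paper's algorithm:

```python
def classify_weed(mask):
    """Toy broad-vs-narrow classifier on a binary plant mask: narrow
    (grass-like) leaves occupy little of their bounding box, broad
    leaves occupy more.  The 0.5 threshold is illustrative only."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    box = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    fill = len(pts) / box               # fraction of bounding box filled
    return "broad" if fill > 0.5 else "narrow"
```

In a real field system this decision would be taken per segmented plant region and would drive the selective herbicide nozzle.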
Abstract: In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method. It is composed of an offline and an online phase, and our contribution concerns both. In the offline phase, after gathering the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoids overlap. For the online phase, our idea considerably improves the performance of similarity search; for this second phase we have also developed an adapted search algorithm. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as its clustering algorithm. The principle of PDDP is to divide data recursively into two sub-clusters; the division is made by the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster being divided. The data of each of the two resulting sub-clusters are enclosed in a minimum bounding rectangle (MBR). The two MBRs are oriented along the principal direction; consequently, the two forms are guaranteed not to overlap. Experiments use databases containing image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in k-nearest-neighbour processing.
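A single PDDP split as described above (divide at the hyperplane through the centroid, orthogonal to the principal direction of the covariance matrix) can be sketched as follows; this is our own illustration, not the NOHIS code:

```python
def pddp_split(points, iters=100):
    """Split a cluster of d-dimensional points in two, PDDP-style:
    project each centered point onto the principal direction of the
    covariance matrix (found by power iteration) and divide by the
    sign of the projection, i.e. by the hyperplane through the
    centroid orthogonal to that direction."""
    n, d = len(points), len(points[0])
    centroid = [sum(p[i] for p in points) / n for i in range(d)]
    centered = [[p[i] - centroid[i] for i in range(d)] for p in points]
    # covariance matrix (d x d)
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    # power iteration for the principal direction
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    left = [p for p, c in zip(points, centered)
            if sum(ci * vi for ci, vi in zip(c, v)) < 0]
    right = [p for p, c in zip(points, centered)
             if sum(ci * vi for ci, vi in zip(c, v)) >= 0]
    return left, right
```

Applying this split recursively yields the cluster hierarchy; NOHIS then encloses each pair of sub-clusters in MBRs oriented along the same principal direction so that they cannot overlap.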
Abstract: Location-aware computing is a type of pervasive
computing that utilizes the user's location as a dominant factor for
providing urban services and application-related usages. One of the
important urban services is navigation instruction for wayfinders in a
city, especially when the user is a tourist. The services presented to
tourists should provide adapted, location-aware instructions. In order
to achieve this goal, the main challenge is to
find spatial relevant objects and location-dependent information. The
aim of this paper is the development of a reusable location-aware
model to handle spatial relevancy parameters in urban location-aware
systems. To this end, we utilize an ontology to manage spatial
relevancy by defining a generic model. Our
contribution is the introduction of an ontological model based on the
directed interval algebra principles. Indeed, it is assumed that the
basic elements of our ontology are the spatial intervals for the user
and his/her related contexts. The relationships between them would
model the spatial relevancy parameters. The implementation language
for the model is OWL, a web ontology language. The achieved
results show that our proposed location-aware model and the
application adaptation strategies provide appropriate services for the
user.
Abstract: As the performance of the filtering system depends
upon the accuracy of the noise detection scheme, in this paper, we
present a new scheme for impulse noise detection based on two
levels of decision. In the first stage we coarsely identify candidate
corrupted pixels, and in the second stage we make the final decision
on whether each candidate pixel is genuinely corrupted.
The efficacy of the proposed filter has been confirmed by extensive
simulations.
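A two-level detector of this kind might be sketched as follows; the coarse salt/pepper test and the median-deviation threshold are our assumptions, not the paper's exact decision rules:

```python
import statistics

def detect_impulses(img, thresh=60):
    """Two-level impulse-noise detection sketch.  Stage 1 coarsely
    flags pixels sitting at the salt/pepper extremes (0 or 255);
    stage 2 confirms a flag only if the pixel also deviates strongly
    from the median of its 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    noisy = set()
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            if p not in (0, 255):           # stage 1: coarse test
                continue
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))
                     if (i, j) != (x, y)]
            if abs(p - statistics.median(neigh)) > thresh:  # stage 2
                noisy.add((x, y))
    return noisy
```

The second stage is what keeps genuinely dark or bright regions from being misclassified: a black pixel inside a black area agrees with its neighbourhood median and is left alone.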
Abstract: The World Wide Web, coupled with the ever-increasing
sophistication of online technologies and software applications, puts
greater emphasis on the need for even more sophisticated and
consistent quality-requirements modeling than traditional software
applications demand. Web sites and Web applications (WebApps) are
becoming more information-driven and content-oriented, raising
concern about their information quality (InQ). The consistent and
consolidated modeling of InQ requirements for WebApps at different
stages of the life cycle still poses a challenge. This paper proposes an
approach to specify InQ requirements for WebApps by reusing and
extending the ISO 25012:2008(E) data quality model. We also
discuss the learnability aspect of information quality for WebApps.
The proposed ISO 25012 based InQ framework is a step towards a
standardized approach to evaluate WebApps InQ.
Abstract: Many attempts have been made to strengthen Feistel-based block ciphers. Among the successful proposals is the key-dependent S-box, which was implemented in some high-profile ciphers. In this paper a key-dependent permutation box is proposed and implemented on DES as a case study. The new modified DES, MDES, was tested with the Diehard tests, an avalanche test, and a performance test. The results showed that in general MDES is more resistant to attacks than DES, with negligible overhead. It is therefore believed that the proposed key-dependent permutation should be considered a valuable primitive that can help strengthen the security of the Substitution-Permutation Network, which is a core design in many Feistel-based block ciphers.
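The idea of deriving a permutation box from the key can be sketched as follows; this is a toy illustration (a real cipher would derive the permutation with a keyed cryptographic PRF rather than Python's random module, and the block size here is arbitrary):

```python
import random

def key_dependent_pbox(key: bytes, size: int = 32):
    """Build a key-dependent permutation box and its inverse via a
    Fisher-Yates shuffle driven by a key-seeded PRNG.  (Sketch only:
    a real cipher would use a cryptographic PRF, not random.Random.)"""
    rng = random.Random(key)
    perm = list(range(size))
    rng.shuffle(perm)
    inverse = [0] * size
    for dst, src in enumerate(perm):
        inverse[src] = dst              # invert for decryption
    return perm, inverse

def permute(bits, perm):
    """Apply a permutation box to a block of bits."""
    return [bits[p] for p in perm]
```

Because the permutation depends on the key, an attacker cannot precompute its effect, which is the property the proposed P-box contributes to the cipher.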
Abstract: Reverse engineering is a very important process in
software engineering. It works backwards through the system
development life cycle (SDLC) in order to recover the source data
or representations of a system through analysis of its structure,
function and operation. We use reverse engineering to introduce an
automatic tool that generates system requirements from program
source code. The tool accepts Cµ source code, scans it line by line
and passes it to a parser. The engine of the tool then generates the
system requirements for that specific program, to facilitate its reuse
and enhancement. The purpose of the tool is to help recover the
system requirements of any system whose system requirements
document (SRD) does not exist because the system was never
documented.
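As a rough illustration of line-by-line scanning for requirements recovery (the regex, the C-style function syntax and the output phrasing are our assumptions; the actual tool targets Cµ and performs much deeper parsing):

```python
import re

# Matches C-style definitions like "int add(int a, int b) {".
FUNC = re.compile(r"^\s*(?:\w+\s+)+(\w+)\s*\([^)]*\)\s*\{?")

def extract_requirements(source: str):
    """Naive sketch: scan the source line by line and treat each
    function definition found as evidence of one system requirement."""
    reqs = []
    for line in source.splitlines():
        m = FUNC.match(line)
        # Skip control keywords defensively.
        if m and m.group(1) not in ("if", "while", "for", "return"):
            reqs.append(f"The system shall provide a '{m.group(1)}' operation.")
    return reqs
```

A real tool would of course parse full statements, data structures and control flow, but the pipeline shape (scan, parse, emit requirement statements) is the same.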
Abstract: In this paper, a framework is presented that aims to build
the most secure web system possible from the available generic and
web-specific security technologies; it can be used as a guideline for
organizations building their web sites. The framework is designed to
provide the necessary security services, to address the known security
threats, and to provide some cover for other security problems,
especially unknown threats. The design requirements that guided us
to the secure web system are discussed. The designed security
framework is then simulated and various quality of service (QoS)
metrics are calculated to measure the performance of the system.
Abstract: In this study, a classification-based video
super-resolution method using artificial neural network (ANN) is
proposed to enhance low-resolution (LR) to high-resolution (HR)
frames. The proposed method consists of four main steps:
classification, motion-trace volume collection, temporal adjustment,
and ANN prediction. A classifier is designed based on the edge
properties of a pixel in the LR frame to identify the spatial information.
To exploit the spatio-temporal information, a motion-trace volume is
collected using motion estimation, which can eliminate unfathomable
object motion in the LR frames. In addition, a temporal lateral
process is employed for volume adjustment to reduce unnecessary
temporal features. Finally, an ANN is applied to each class to learn the complicated
spatio-temporal relationship between LR and HR frames. Simulation
results show that the proposed method successfully improves both
peak signal-to-noise ratio and perceptual quality.
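The edge-property classifier in the first step might look like the following toy version; the class labels and the gradient threshold are ours, and the paper's classifier is certainly more elaborate:

```python
def classify_pixel(img, x, y, t=30):
    """Classify an LR pixel by its local edge properties using simple
    central differences (clamped at the frame border).  The four class
    labels here are illustrative only."""
    gx = img[y][min(x + 1, len(img[0]) - 1)] - img[y][max(x - 1, 0)]
    gy = img[min(y + 1, len(img) - 1)][x] - img[max(y - 1, 0)][x]
    if abs(gx) < t and abs(gy) < t:
        return "flat"
    if abs(gx) >= t and abs(gy) < t:
        return "vertical_edge"      # intensity changes along x
    if abs(gy) >= t and abs(gx) < t:
        return "horizontal_edge"    # intensity changes along y
    return "diagonal"
```

Each class then gets its own ANN, so the network only has to learn the LR-to-HR mapping for pixels with similar spatial structure.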
Abstract: In this work, we present a novel active learning approach
for learning a visual object detection system. Our system
is composed of an active learning mechanism acting as a wrapper
around a sub-algorithm that implements an online boosting-based
object detector. At its core is a combination of a bootstrap procedure
and a semi-automatic learning process based on online boosting.
The idea is to exploit the availability of the classifier during
learning to automatically label training samples and incrementally
improve the classifier, which reduces labeling effort while obtaining
better performance. In addition, we propose a verification process
for further improvement of the classifier: re-updating on previously
seen data during learning to stabilize the detector. The main
contribution of this empirical study is a demonstration that active
learning based on online boosting, trained in this manner, can
achieve results comparable to, or even better than, a framework
trained in the conventional manner with much more labeling effort.
Empirical experiments on challenging data sets for specific object
detection problems show the effectiveness of our approach.
Abstract: Enabling computers to process data and perform
reasoning tasks is an important aim in computer science, and the
Semantic Web is one step towards it. Using ontologies to enrich
information semantically is the current trend. Huge amounts of
domain-specific, unstructured on-line data need to be expressed in
a machine-understandable and semantically searchable format.
Currently, users are often forced to search manually through the
results returned by keyword-based search services. They also want
to use their native languages to express what they are searching for.
In this paper, an ontology-based automated question answering
system for the software test documents domain is presented. The
system allows users to enter a question about the domain in natural
language and returns the exact answer to the question. Converting
the natural language question into an ontology-based query is the
challenging part of the system. To achieve this, a new algorithm for
converting free text into an ontology-based search engine query is
proposed. The algorithm is based on identifying the question type
and parsing the words of the question sentence.
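The question-type-driven conversion can be sketched as follows; the query templates, property names and stop-word list are invented for illustration, not taken from the paper:

```python
# Hypothetical mapping from question type to an ontology query
# template; the predicates (:responsibleFor, :date) are invented.
TEMPLATES = {
    "what": 'SELECT ?x WHERE {{ ?x rdfs:label "{kw}" }}',
    "who":  'SELECT ?p WHERE {{ ?p :responsibleFor "{kw}" }}',
    "when": 'SELECT ?d WHERE {{ ?e rdfs:label "{kw}" . ?e :date ?d }}',
}

def question_to_query(question: str):
    """Deliberately naive converter: pick a template from the leading
    question word and treat the remaining content words as the keyword."""
    words = question.lower().rstrip("?").split()
    qtype = words[0]
    stop = {"is", "are", "the", "a", "an", "of", "for"}
    keyword = " ".join(w for w in words[1:] if w not in stop)
    return TEMPLATES[qtype].format(kw=keyword)
```

The real algorithm additionally parses the sentence to locate the ontology concepts the keyword refers to, but the template selection by question type is the same idea.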
Abstract: Setting the numerous parameters involved in
designing a competent artificial neural network is a complicated task.
The existence of several options for selecting an appropriate
network architecture adds to this complexity, especially
when applications of heterogeneous natures are concerned.
Two completely different applications, one in engineering and one in
medical science, were selected for the present study: prediction of
workpiece surface roughness in ultrasonic-vibration-assisted turning,
and prediction of the oncogenicity of papilloma viruses. Several
neural network architectures with different parameters were
developed for each application and the results were compared. The
paper illustrates that some applications, such as the first one above,
can be modeled by a single network with sufficient accuracy,
whereas others, such as the second application, are best modeled
by different expert networks for different ranges of the output.
Developing knowledge about the essentials of neural networks
for different applications is regarded as the cornerstone of
multidisciplinary network design programs to be developed as a
means of reducing inconsistencies and the burden of user
intervention.
Abstract: Gröbner basis calculation forms a key part of computational
commutative algebra and many other areas. One important
ramification of the theory of Gröbner bases is that it provides a means
to solve systems of non-linear equations. This is why it has become
very important in areas where the solution of non-linear equations is
needed, for instance in algebraic cryptanalysis and coding theory. This
paper explores a parallel-distributed implementation of Gröbner
basis calculation over GF(2), using Buchberger's algorithm.
OpenMP and MPI-C language constructs have been used to
implement the scheme. Results are furnished comparing the
performance of the standalone and hybrid (parallel-distributed)
implementations.
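The underlying computation is particularly simple over GF(2), since every coefficient is 0 or 1 and polynomial addition is just the symmetric difference of monomial sets. The following is our own sequential toy sketch of Buchberger's algorithm (monomials as exponent tuples under lex order), not the paper's OpenMP/MPI code:

```python
# Toy Buchberger over GF(2): a polynomial is a frozenset of monomials
# (exponent tuples); lex order on tuples serves as the monomial order.

def lead(p):
    return max(p)                       # leading monomial (lex)

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

def shift(p, m):                        # multiply polynomial by monomial m
    return frozenset(tuple(x + y for x, y in zip(t, m)) for t in p)

def add(p, q):                          # GF(2) addition = symmetric difference
    return frozenset(p) ^ frozenset(q)

def reduce_poly(p, G):
    """Top-reduce p by the basis G until no leading term is divisible."""
    changed = True
    while p and changed:
        changed = False
        lt = lead(p)
        for g in G:
            if divides(lead(g), lt):
                quot = tuple(x - y for x, y in zip(lt, lead(g)))
                p = add(p, shift(g, quot))
                changed = True
                break
    return p

def spoly(f, g):
    """S-polynomial: cancel the leading terms via their lcm."""
    lf, lg = lead(f), lead(g)
    lcm = tuple(max(a, b) for a, b in zip(lf, lg))
    return add(shift(f, tuple(a - b for a, b in zip(lcm, lf))),
               shift(g, tuple(a - b for a, b in zip(lcm, lg))))

def buchberger(F):
    G = [frozenset(f) for f in F]
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        r = reduce_poly(spoly(G[i], G[j]), G)
        if r:                           # non-zero remainder: enlarge basis
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G
```

For example, starting from {x² + y, xy + x} in GF(2)[x, y], the algorithm adds y² + y to the basis. The independent S-polynomial reductions in the main loop are what the paper distributes across OpenMP threads and MPI processes.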
Abstract: In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from the RGB color space to the YST color space, designed exclusively for watermarking. After the color space transformation, each channel is divided into 4×4 non-overlapping blocks and the central 2×2 sub-block of each is selected. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication and recovery information. The size and position of the sub-block are important for correct localization, enhanced security and fast computation. As YS ⊥ T, the T channel is suitable for embedding the recovery information apart from the ownership and authentication information; each 4×4 block of the T channel, together with the ownership information, is therefore passed to SHA160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may give rise to the condition H(m) = H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded in up to eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability close to one.
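The key-based block mapping can be sketched with the classic 2D torus automorphism matrix [[1, 1], [k, k+1]], whose determinant is 1, so the mapping is a bijection on the n×n block grid for any key k (the paper's exact matrix may differ):

```python
def torus_map(x, y, k, n):
    """Map block (x, y) by the 2D torus automorphism
    [[1, 1], [k, k+1]] modulo n.  Determinant = (k+1) - k = 1,
    so the map is a permutation of the block grid for every key k."""
    return (x + y) % n, (k * x + (k + 1) * y) % n

def map_blocks(k, n):
    """Key-dependent scattering of all n*n block coordinates."""
    return {(x, y): torus_map(x, y, k, n)
            for x in range(n) for y in range(n)}
```

Because the map is a key-dependent permutation, the recovery information for each block lands at a location an attacker cannot predict, yet the legitimate decoder can invert the mapping exactly.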
Abstract: Nowadays, organizations and businesses have several motivating factors to protect an individual's privacy. Confidentiality concerns the way information is shared with third parties. It always refers to private information, especially personal information that usually needs to be kept private. Because of the importance of privacy concerns today, we need to design database systems that support privacy. Agrawal et al. introduced the Hippocratic Database (HD), which we refer to here as a privacy-aware database. This paper explains how the HD can be a future trend for web-based applications, enhancing their level of trustworthiness regarding privacy among internet users.
Abstract: This article describes an automatic Web page filtering system. It is an open and dynamic system based on a multi-agent architecture. The system is built from a set of agents, each with a quite precise filtering task to carry out (the filtering process is broken up into several elementary treatments, each producing a partial solution). New criteria can be added to the system without stopping its execution or modifying its environment. We want to show the applicability and adaptability of the multi-agent approach to the automatic filtering of network information. In practice, most existing filtering systems are based on modular design approaches, which are limited to centralized applications whose role is to solve static data-flow problems, whereas Web page filtering systems are characterized by a data flow that varies dynamically.
Abstract: Face authentication for access control is a face
membership authentication that admits an incoming person if face
recognition shows that he or she is one of the enrolled persons, and
rejects the person otherwise. Face membership authentication is a
two-class classification problem to which the SVM (Support Vector
Machine) has been successfully applied, showing better performance
than conventional threshold-based classification. However, most
previous SVMs have been trained using image feature vectors
extracted from face images of each class member (enrolled
class/unenrolled class), so they are not robust to variations in
illumination, pose and facial expression, and are strongly affected by
changes in the member configuration of the enrolled class.
In this paper, we propose an effective face membership
authentication method based on an SVM using class discriminating
features, which represent an incoming face image's associability with
each class distinctively. These class discriminating features are only
weakly related to image features, so they are less affected by
variations in illumination, pose and facial expression.
Experiments show that the proposed face membership
authentication method performs better than the threshold-rule-based
or conventional SVM-based authentication methods and is relatively
less affected by changes in member size and membership.
Abstract: Recent scientific investigations indicate that
multimodal biometrics overcome the technical limitations of
unimodal biometrics, making them ideally suited for everyday life
applications that require a reliable authentication system. However,
for a successful adoption of multimodal biometrics, such systems
would require large heterogeneous datasets with complex multimodal
fusion and privacy schemes spanning various distributed
environments. From experimental investigations of current
multimodal systems, this paper reports the various issues related to
speed, error-recovery and privacy that impede the diffusion of such
systems in real life. This calls for a robust mechanism that caters to
the desired real-time performance, robust fusion schemes,
interoperability and adaptable privacy policies.
The main objective of this paper is to present a framework that
addresses the above-mentioned issues by leveraging the
heterogeneous resource sharing capacities of Grid services and the
efficient machine learning capabilities of artificial neural networks
(ANN). Hence, this paper proposes a Grid-based neural network
framework for adopting multimodal biometrics with the view of
overcoming the barriers of performance, privacy and risk issues that
are associated with shared heterogeneous multimodal data centres.
The framework combines the concept of Grid services for reliable
brokering and privacy policy management of shared biometric
resources along with a momentum back propagation ANN (MBPANN)
model of machine learning for efficient multimodal fusion and
authentication schemes. Real-life applications would be able to adopt
the proposed framework to cater to the varying business requirements
and user privacies for a successful diffusion of multimodal
biometrics in various day-to-day transactions.
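The momentum back-propagation update at the heart of the MBPANN model can be sketched on a one-dimensional toy loss; the constants and the quadratic stand-in loss are ours, not the paper's:

```python
def momentum_step(w, v, grad, lr=0.2, mu=0.5):
    """One momentum back-propagation update: the velocity v
    accumulates past gradients, which smooths and accelerates the
    descent compared with plain gradient steps."""
    v = mu * v - lr * grad
    return w + v, v

# Minimise f(w) = (w - 3)^2 as a one-parameter stand-in for a
# network loss; in MBPANN the same rule updates every weight.
w, v = 0.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, v, grad=2 * (w - 3))   # df/dw = 2(w - 3)
```

In the full framework this learning rule runs inside each ANN node while the Grid layer handles brokering and privacy policies for the shared biometric data.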