Abstract: Our program compares French and Italian translations of Homer’s Odyssey, from the 16th to the 20th century. We focus on the third point, showing how distributional semantics systems can be used both to improve alignment between different French translations and between the Greek text and a French translation. Although we focus on French examples, the techniques we display are completely language-independent.
Abstract: This paper presents the results of a study testing whether a preprocessing model widely applied to Javanese character manuscript images can also be used to segment Batak character manuscripts. Processing begins by converting the input image into a binary image. After the binary image is cleaned of noise, line segmentation using the projection profile is performed. Where the projection histogram is unclear, a smoothing step is applied before the line segment indexes are produced. For each resulting line image, character segmentation within the line is then applied, taking into account the connectivity between the pixels making up the letters so that no character is truncated. Testing a prototype of the manuscript preprocessing system on pieces of the Pustaka Batak Podani Ma AjiMamisinon manuscript yielded accuracy values ranging from 65% to 87.68% at a 95% confidence level. These values indicate that the preprocessing model for Javanese character manuscript images can also be applied to images of Batak character manuscripts.
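The line segmentation step described above (projection profile plus optional smoothing) can be sketched as follows; the binary-image representation and all function names are our own illustration, not the paper's code.

```python
# Projection-profile line segmentation, a minimal sketch. The page is
# assumed to be a binary image (1 = ink, 0 = background) as a list of rows.

def projection_profile(binary_image):
    """Count ink pixels in each row of the binary image."""
    return [sum(row) for row in binary_image]

def smooth(profile, window=3):
    """Moving-average smoothing, used when the raw histogram is unclear."""
    half = window // 2
    return [sum(profile[max(0, i - half):i + half + 1]) /
            len(profile[max(0, i - half):i + half + 1])
            for i in range(len(profile))]

def segment_lines(binary_image, threshold=0):
    """Return (start_row, end_row) pairs for each text line, i.e. maximal
    runs of rows whose projection profile exceeds the threshold."""
    profile = projection_profile(binary_image)
    lines, start = [], None
    for i, value in enumerate(profile):
        if value > threshold and start is None:
            start = i                       # line begins
        elif value <= threshold and start is not None:
            lines.append((start, i - 1))    # line ends at previous row
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines
```

For example, a page whose ink rows are 1-2 and 4 yields two line segments; character segmentation within each line would then proceed column-wise on the cropped strip.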
Abstract: In rough set models, the tolerance relation, similarity relation, and limited tolerance relation address different situations in incomplete information systems, where missing values occur. If two objects share the same few known attributes and have many unknown attributes, these relations cannot distinguish them well. To solve this problem, we previously presented two improved limited and variable precision rough set models, one symmetric and the other non-symmetric. Both use a more stringent condition to separate two objects that are equivalent only with small probability into different classes. These two models warranted further detailed study. In the present paper, we form object classes from a different perspective than in the first proposed model, and we overcome the non-symmetry of the second proposed model. We discuss the relationships among several models and also derive rule generation. The results obtained by applying the second model are more accurate and reasonable.
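For readers unfamiliar with the baseline relations this abstract builds on, here is a minimal sketch of the standard tolerance and limited tolerance relations for incomplete information systems ('*' marks a missing value); the improved models in the paper refine these, and the object encoding is our illustration.

```python
# Standard tolerance and limited tolerance relations on incomplete
# information systems. Objects are tuples of attribute values; '*' is a
# missing value. These are the textbook definitions, not the paper's
# improved models.

MISSING = '*'

def tolerance(x, y):
    """x T y: on every attribute, the values agree or at least one is missing."""
    return all(a == MISSING or b == MISSING or a == b for a, b in zip(x, y))

def limited_tolerance(x, y):
    """x L y: either both objects are entirely unknown, or they share at
    least one attribute known in both and agree wherever both are known."""
    if all(a == MISSING for a in x) and all(b == MISSING for b in y):
        return True
    both_known = [(a, b) for a, b in zip(x, y)
                  if a != MISSING and b != MISSING]
    return bool(both_known) and all(a == b for a, b in both_known)
```

Note how two objects with no commonly known attribute are tolerance-related (no conflict exists) yet not limited-tolerance-related — exactly the weak-evidence situation the improved models tighten further.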
Abstract: In this era of online communication, which transacts data in 0s and 1s, confidentiality is a prized commodity. Ensuring safe transmission of encrypted data and their uncorrupted recovery is a matter of prime concern. Among the several techniques for secure sharing of images, this paper proposes a k out of n region incrementing image sharing scheme for color images. The highlight of this scheme is the use of simple Boolean and arithmetic operations for generating shares and of the Lagrange interpolation polynomial for authenticating shares. Additionally, the scheme addresses problems faced by existing algorithms, such as color reversal and pixel expansion. This paper regenerates the original secret image, whereas existing systems regenerate only a half-toned secret image.
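The (k, n) threshold and Lagrange-interpolation machinery named above can be illustrated with a minimal Shamir-style sketch over a small prime field; the field size, byte-valued secret, and function names are our illustrative assumptions, not the paper's exact scheme (which also uses Boolean operations on image pixels).

```python
# Minimal (k, n) threshold sharing sketch: shares are points on a random
# degree-(k-1) polynomial over GF(PRIME); any k shares recover the secret
# by Lagrange interpolation at x = 0.
import random

PRIME = 257  # small field holding one byte of secret data (illustrative)

def make_shares(secret, k, n, rng=random):
    """Generate n shares, any k of which reconstruct the secret."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # den^(PRIME-2) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

In an image setting, each pixel value would be shared independently; any k of the n shares reproduce the exact pixel, avoiding half-toning.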
Abstract: Web application architecture is important for achieving the desired application performance. Performance analysis studies are conducted to evaluate existing or planned systems. Web applications are used by hundreds of thousands of users simultaneously, which sometimes increases the risk of server failure in real-time operations. We use Coloured Petri Nets (CPNs), a very powerful tool for modelling the dynamic behaviour of a web application system. CPNs extend the vocabulary of ordinary Petri nets and add features that make them suitable for modelling large systems. The major focus of this work is the server side of web applications. The presented work models restructuring aspects, with a major focus on concurrency and architecture, using CPNs. It also aims to bring out the appropriate architecture for web and database servers given the number of concurrent users.
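The token-game semantics that CPN models build on can be illustrated with a toy place/transition interpreter (colour sets omitted for brevity; the request/server example and all names are ours, not the paper's model).

```python
# Toy Petri net interpreter: a marking maps place names to token counts;
# a transition consumes tokens from input places and produces tokens on
# output places when enabled.

def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in transition['in'].items())

def fire(marking, transition):
    """Consume input tokens, produce output tokens; return the new marking."""
    if not enabled(marking, transition):
        raise ValueError('transition not enabled')
    new = dict(marking)
    for p, w in transition['in'].items():
        new[p] -= w
    for p, w in transition['out'].items():
        new[p] = new.get(p, 0) + w
    return new
```

A server-side model would, for instance, move a token from a request queue into service only while a server token is free — the concurrency bottleneck the abstract studies.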
Abstract: Accurate software reliability prediction not only enables developers to improve the quality of software but also provides useful information to help them in planning valuable resources. This paper examines the performance of three well-known data mining techniques (CART, TreeNet and Random Forest) for predicting software reliability. We evaluate and compare the performance of the proposed models with the Cascade Correlation Neural Network (CCNN) using sixteen empirical databases from the Data and Analysis Center for Software. The goal of our study is to help project managers concentrate their testing efforts so as to minimize software failures and improve the reliability of software systems. Two performance measures, the Normalized Root Mean Squared Error (NRMSE) and the Mean Absolute Error (MAE), show that the CART model is more accurate than the Random Forest, TreeNet and CCNN models on all datasets used in our study. Finally, we conclude that such methods can help in reliability prediction using real-life failure datasets.
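The two performance measures can be sketched directly from their standard definitions; normalizing the RMSE by the range of the observed values is one common convention and may differ from the paper's.

```python
# Mean Absolute Error and (range-)Normalized Root Mean Squared Error.
import math

def mae(actual, predicted):
    """Mean absolute error between observed and predicted failure counts."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def nrmse(actual, predicted):
    """RMSE normalized by the range of the observed values (one convention)."""
    rmse = math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
    return rmse / (max(actual) - min(actual))
```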
Abstract: Information technology has been gaining ever more space in industry, commerce, and personal use, but its misuse harms the environment and human health. Contributing to the sustainability of the planet means compensating the environment, in whole or in part, for what is withdrawn from it. Green computing proposes practices for using IT in an environmentally sound way in support of strategic management and communication. This work shows how a mobile application can help businesses reduce costs and the environmental impact of their processes, through a case study of a public company in Brazil.
Abstract: Finding the optimal 3D path of an aerial vehicle under flight mechanics constraints is a major challenge, especially when the algorithm has to produce real-time results in flight. Kinematic models and Pythagorean Hodograph curves have been widely used in mobile robotics to solve this problem. The level of difficulty is mainly driven by the number of constraints to be saturated at the same time while minimizing the total length of the path. In this paper, we suggest a pragmatic algorithm capable of simultaneously saturating most of the constraints that dimension helicopter 3D trajectories: curvature, curvature derivative, torsion, torsion derivative, climb angle, climb angle derivative, and positions. The trajectory generation algorithm can generate versatile, complex 3D motion primitives feasible by a helicopter, parameterized by curvature and climb angle. On top of it, a "motion primitives concatenation" algorithm is presented, based on a new way of designing three-dimensional trajectories that we call the "Dubins gliding symmetry conjecture". This high-performing algorithm will soon be integrated into a real-time decision system dealing with in-flight safety issues.
Abstract: In this paper, we propose a variational EM inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. The algorithm simultaneously derives both a posterior distribution of the latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. It is based on the Laplace approximation (LA) technique and the variational EM framework, and proceeds in two steps, called the expectation and maximization steps. First, in the expectation step, using the Bayes formula and the LA technique, we derive an approximate posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator for the hyper-parameters of the covariance matrix needed to define the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely the KTH human action data set. Experimental results reveal that the proposed algorithm performs well on this data set.
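The core of the expectation step — finding the posterior mode of the latent function by Newton's method and approximating the posterior with a Gaussian whose variance comes from the curvature at the mode — can be sketched in one dimension. The logistic likelihood and zero-mean Gaussian prior here are our illustrative choices, not the paper's full multi-class model.

```python
# 1-D Laplace approximation: p(f | y) ∝ sigmoid(y*f) * N(f; 0, prior_var),
# y in {-1, +1}. Newton's method finds the posterior mode; the negative
# inverse Hessian at the mode gives the approximate posterior variance.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def laplace_posterior(y, prior_var, iters=50):
    """Return (mode, variance) of the Laplace-approximated posterior."""
    f = 0.0
    for _ in range(iters):
        s = sigmoid(y * f)
        grad = y * (1.0 - s) - f / prior_var      # d/df  log posterior
        hess = -s * (1.0 - s) - 1.0 / prior_var   # d²/df² log posterior
        f -= grad / hess                          # Newton update
    return f, -1.0 / hess
```

The log posterior is concave, so Newton's iteration converges quickly; in the full model this step runs jointly over all latent values with a covariance matrix in place of `prior_var`.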
Abstract: In this paper, the unstable angle of attack of a FOXTROT aircraft is controlled using a Genetic Algorithm based flight controller, and the result is compared with conventional PID tuning techniques such as Tyreus-Luyben (TL), Ziegler-Nichols (ZN), and the Interpolation Rule (IR). In addition, performance indices such as the Mean Square Error (MSE), Integral Square Error (ISE), and Integral Absolute Time Error (IATE) are improved by using the Genetic Algorithm. It is established that the error obtained with the GA is much smaller than with the conventional techniques, thereby improving the performance indices of the dynamic system.
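The GA-based tuning loop can be sketched as follows, minimizing the ISE of a PID loop on a simulated first-order plant; the plant, gain ranges, and GA settings are illustrative assumptions on our part, not the FOXTROT aircraft model.

```python
# Genetic algorithm tuning PID gains by minimizing integral square error.
import random

def ise(gains, setpoint=1.0, dt=0.01, steps=500, tau=0.5):
    """ISE of a PID loop on the first-order plant tau*dx/dt = -x + u."""
    kp, ki, kd = gains
    x = integral = cost = 0.0
    prev_err = setpoint                 # avoids a derivative kick at t = 0
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
        x += dt * (-x + u) / tau        # Euler integration of the plant
        if abs(x) > 1e6:                # unstable candidate: prohibitive cost
            return float('inf')
        cost += err * err * dt
    return cost

def ga_tune(pop_size=20, generations=30, seed=0):
    """Elitist GA over (kp, ki, kd); bounds and operators are our choices."""
    rng = random.Random(seed)
    ranges = (5.0, 5.0, 0.3)            # search bounds for kp, ki, kd
    pop = [[rng.uniform(0, hi) for hi in ranges] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=ise)
        elite = pop[: pop_size // 2]    # keep the fitter half (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([max(0.0, (ga + gb) / 2 + rng.gauss(0, 0.1))
                             for ga, gb in zip(a, b)])  # crossover + mutation
        pop = elite + children
    return min(pop, key=ise)
```

Elitism guarantees the best candidate found is never lost, so the evolved gains are at least as good as the best of the initial random population.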
Abstract: Given the increase in the number of e-commerce sites, the number of competitors has become very large. This means
that companies have to take appropriate decisions in order to meet the
expectations of their customers and satisfy their needs. In this paper,
we present a case study of applying LRFM (length, recency,
frequency and monetary) model and clustering techniques in the
sector of electronic commerce with a view to evaluating customers’
values of the Moroccan e-commerce websites and then developing
effective marketing strategies. To achieve these objectives, we adopt
the LRFM model by applying a two-stage clustering method. In the first stage, the self-organizing maps method is used to determine the best number of clusters and the initial centroids. In the second stage, the k-means method is applied to segment 730 customers into nine clusters according to their L, R, F and M values. The results show that cluster 6 is the most important cluster, because its average values of L, R, F and M are higher than the overall averages. In addition, this study considers another variable describing the payment method used by customers, to improve and strengthen the cluster analysis. The cluster analysis demonstrates that the payment method is one of the key indicators of a new index for assessing the level of customers’ confidence in the company's website.
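The second-stage step can be sketched with a plain k-means implementation over (L, R, F, M) vectors; here the SOM first stage is replaced by fixed illustrative initial centroids, and the customer data are made up.

```python
# Plain k-means over LRFM vectors. Each point is a (L, R, F, M) tuple;
# initial centroids stand in for the SOM output of the first stage.

def kmeans(points, centroids, iters=100):
    """Iterate assign/update until the centroids stop moving."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # assign each point to the nearest centroid (squared Euclidean)
            best = min(range(len(centroids)),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(p, centroids[i])))
            clusters[best].append(p)
        # recompute each centroid as the mean of its assigned points
        new = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters
```

Cluster importance is then judged by comparing each cluster's mean L, R, F and M against the overall averages, as the study does for cluster 6.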
Abstract: In this paper, genetic-algorithm-based test data compression is targeted at improving the compression ratio and reducing the computation time. The genetic algorithm is based on extended pattern run-length coding. The test set contains a large number of X values that can be effectively exploited to improve test data compression. In this coding method, a reference pattern is set and its compatibility is checked. For this process, a genetic algorithm is proposed to reduce the computation time of the encoding algorithm. The coding technique encodes 2n compatible or inversely compatible patterns into a single test data segment or multiple test data segments. The experimental results show that the compression ratio is improved and the computation time is reduced.
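The compatibility notions at the heart of this coding method can be sketched as follows; the pattern-string representation, with 'X' marking a don't-care bit, is our illustration.

```python
# Pattern compatibility for test data with don't-care bits ('X'). Two
# patterns are compatible when they agree wherever both are specified,
# and inversely compatible when they disagree wherever both are specified.

def compatible(p, q):
    return all(a == 'X' or b == 'X' or a == b for a, b in zip(p, q))

def inverse_compatible(p, q):
    return all(a == 'X' or b == 'X' or a != b for a, b in zip(p, q))

def merge(p, q):
    """Merge two compatible patterns, resolving X bits where possible."""
    return ''.join(b if a == 'X' else a for a, b in zip(p, q))
```

Runs of patterns compatible (or inversely compatible) with the reference pattern can then be encoded as a single segment, which is where the compression comes from.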
Abstract: In this paper, we describe an application for face recognition. Many studies have used local descriptors to characterize a face, but the performance of these local descriptors remains lower than that of global descriptors (which work on the entire image). Applying local descriptors (cutting the image into blocks) in the Discrete Cosine Transform (DCT) domain makes it possible to retain the advantages of both global and local methods. The system uses neural network techniques. The latter method provides a good compromise between the two approaches in terms of simplicity of calculation and classification performance. Finally, we compare our results with those obtained from other conventional local and global approaches.
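The block-wise DCT feature extraction underlying this kind of approach can be sketched with a naive 2-D DCT-II; real systems use a fast transform and keep only a few low-frequency coefficients per block as the feature vector.

```python
# Naive 2-D DCT-II of a square block (O(N^4); fine for an 8x8 sketch).
# The DC coefficient out[0][0] captures the block's average intensity;
# low-frequency AC coefficients make up the usual face features.
import math

def dct2(block):
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = []
    for u in range(n):
        row = []
        for v in range(n):
            s = sum(block[x][y] *
                    math.cos((2 * x + 1) * u * math.pi / (2 * n)) *
                    math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            row.append(alpha(u) * alpha(v) * s)
        out.append(row)
    return out
```

For a constant block all energy lands in the DC coefficient, which is why truncating to low frequencies compresses smooth facial regions well.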
Abstract: In the context of handwriting recognition, we propose an off-line system for the recognition of Arabic handwritten words naming the Algerian departments. The study is based mainly on evaluating the performance of a neural network trained with the gradient back-propagation algorithm. The parameters forming the input vector of the neural network are extracted from binary images of the handwritten word by several methods: distribution parameters, the centered moments of the different projections of the different segments, the centered moments of the word image coded according to the Freeman directions, and the Barr features applied to the binary image of the word and to its different segments. Classification is achieved by a multi-layer perceptron. A detailed experiment is carried out and satisfactory recognition results are reported.
Abstract: Digital reference service is a traditional library reference service provided electronically. In most cases users do not get full satisfaction from digital reference services, for a variety of reasons. This paper discusses the formal specification of web services applications for digital reference services (WSDRS). WSDRS is an informal model that aims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient digital way of satisfying users’ needs in the reference section of libraries. An informal model is expressed in natural language, which can be inconsistent and ambiguous and may cause difficulties for the developers of the system. To solve this problem, we convert the informal specifications into formal specifications, which is expected to reduce the overall development time and cost. We use the Z language to develop the formal model and verify it with the Z/EVES theorem prover.
Abstract: The elliptic curve discrete logarithm problem (ECDLP) is one of the problems on which the security of pairing-based cryptography rests. This paper considers Pollard’s rho method for evaluating the security of the ECDLP on the Barreto-Naehrig (BN) curve, an efficient pairing-friendly curve. Several techniques are applied to make the rho method efficient, notably the group structure on the BN curve, the distinguished point method, and the Montgomery trick; this paper applies these well-known techniques and shows how to optimize them. In experiments on a large-scale parallel system backed by MySQL, a 94-bit ECDLP was solved in about 28 hours using 71 computers in parallel.
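The rho method itself can be sketched on a toy group; this version uses Floyd cycle detection rather than the distinguished-point parallelization the paper applies, and a small multiplicative subgroup stands in for the BN curve group.

```python
# Pollard's rho for discrete logarithms: walk x = g^a * h^b with a
# pseudo-random step function until a cycle collision, then solve the
# resulting linear relation modulo the (prime) group order n.
import random

def pollard_rho_dlog(g, h, p, n, rng=None):
    """Solve g^k = h (mod p), where g has prime order n. Retries with a
    fresh random start when the collision is degenerate."""
    rng = rng or random.Random(1)

    def step(x, a, b):
        if x % 3 == 0:
            return x * x % p, 2 * a % n, 2 * b % n   # squaring branch
        if x % 3 == 1:
            return x * g % p, (a + 1) % n, b          # multiply by g
        return x * h % p, a, (b + 1) % n              # multiply by h

    while True:
        a, b = rng.randrange(n), rng.randrange(n)
        x = pow(g, a, p) * pow(h, b, p) % p
        X, A, B = x, a, b
        while True:
            x, a, b = step(x, a, b)             # tortoise: one step
            X, A, B = step(*step(X, A, B))      # hare: two steps
            if x == X:
                break
        if (B - b) % n != 0:
            # g^a h^b = g^A h^B  =>  k = (a - A) / (B - b)  (mod n)
            return (a - A) * pow(B - b, -1, n) % n
```

Here `pow(B - b, -1, n)` is Python 3.8+'s modular inverse; the distinguished point method replaces Floyd's detection with a shared table of "special" walk points, which is what makes the parallel, database-backed attack in the paper scale.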
Abstract: In applications such as image recognition or compression, segmentation refers to the process of partitioning a digital image into multiple segments. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images: an image is classified or clustered into several parts (regions) according to image features such as the pixel value or the frequency response. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Many image segmentation algorithms have been proposed to segment an image before recognition or compression, and they are extensively applied in science and daily life. According to their segmentation method, they can be roughly categorized into region-based segmentation, data clustering, and edge-based segmentation. In this paper, we give a study of several popular image segmentation algorithms.
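Region-based segmentation, the first of the three categories, can be illustrated with an iterative flood fill that labels connected regions of equal pixel value; real systems use richer homogeneity criteria than exact equality.

```python
# Label 4-connected regions of equal pixel value: every pixel receives the
# label of its region, which directly matches the definition above of
# assigning a label to every pixel so that same-label pixels share a feature.

def segment_regions(image):
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue                      # already part of some region
            next_label += 1
            labels[sy][sx] = next_label
            stack = [(sy, sx)]                # iterative flood fill
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and image[ny][nx] == image[sy][sx]):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
    return labels
```

The resulting segments cover the entire image and are pairwise disjoint, matching the description of segmentation output given above.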
Abstract: The web services applications for digital reference service (WSDRS) model in LIS is an informal model that aims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users’ needs in the reference section of libraries. The formal WSDRS model consists of Z specifications of all the informal specifications of the model. This paper discusses the formal validation of the Z specifications of the WSDRS model. The authors formally verify, and thus validate, the properties of the model using the Z/EVES theorem prover.
Abstract: A Mobile Ad Hoc Network (MANET) is a collection of mobile devices forming a communication network without infrastructure. MANETs are vulnerable to security threats due to limited network security, dynamic topology, scalability issues, and the lack of central management. Quality of Service (QoS) routing in such networks is limited by network breakage caused by node mobility or node energy depletion. This paper considers the impact of node mobility on trust establishment and investigates its use in propagating trust through a network. This work proposes an enhanced Associativity Based Routing (ABR) protocol with fuzzy-based trust (Fuzzy-ABR) for MANETs to improve QoS and to mitigate network attacks.
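The fuzzy trust idea can be sketched with triangular membership functions over a node metric; the packet forwarding ratio, the membership shapes, and the representative trust values below are our illustrative assumptions, not the paper's rule base.

```python
# Fuzzy trust evaluation sketch: fuzzify an observed metric into
# low/medium/high memberships, then defuzzify to a single trust score by a
# weighted average of representative values.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def trust_score(forwarding_ratio):
    """Map a node's packet forwarding ratio in [0, 1] to a trust score."""
    low = triangular(forwarding_ratio, -0.5, 0.0, 0.5)
    medium = triangular(forwarding_ratio, 0.0, 0.5, 1.0)
    high = triangular(forwarding_ratio, 0.5, 1.0, 1.5)
    weights = {0.1: low, 0.5: medium, 0.9: high}  # representative trust values
    total = sum(weights.values())
    return sum(v * m for v, m in weights.items()) / total if total else 0.0
```

A routing protocol would combine such scores with associativity measurements when ranking candidate next hops.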
Abstract: Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators with the aim of improving safety, measuring air traffic controller workload, and analyzing large quantities of controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers interested in bringing automatic speech recognition into ATC, this paper gives an overview of the possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as of specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed.