Abstract: Digital libraries are becoming increasingly necessary to provide users with powerful, easy-to-use tools for searching, browsing and retrieving media information. The starting point for these tasks is the segmentation of video content into shots. To segment MPEG video streams into shots, this study develops a fully automatic procedure that detects both abrupt and gradual transitions (dissolves and fade groups) in real time with minimal decoding. The procedure operates in two phases: macroblock-type analysis in B-frames, and on-demand intensity-information analysis. The experimental results show remarkable performance in detecting gradual transitions for some kinds of input data and comparable results for the remaining examined video streams. Almost all abrupt transitions could be detected with very few false alarms.
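As a minimal sketch of the first phase's underlying idea (not the paper's full procedure): an abrupt cut tends to produce a B-frame whose macroblocks are mostly intra-coded, because neither reference frame predicts it well. The macroblock-type lists and the 0.6 threshold below are hypothetical illustration values.

```python
def intra_ratio(mb_types):
    """Fraction of macroblocks coded as intra ('I') in one B-frame."""
    return mb_types.count("I") / len(mb_types)

def detect_cuts(frames, threshold=0.6):
    """Indices of B-frames whose intra-coded ratio exceeds the threshold."""
    return [i for i, mbs in enumerate(frames) if intra_ratio(mbs) > threshold]

# Hypothetical macroblock-type lists for four consecutive B-frames
# ('F' forward-predicted, 'B' bidirectional, 'I' intra-coded):
frames = [
    ["F"] * 90 + ["I"] * 10,   # mostly forward-predicted: no cut
    ["B"] * 80 + ["I"] * 20,   # bidirectional prediction works: no cut
    ["I"] * 70 + ["F"] * 30,   # 70% intra-coded: likely abrupt cut
    ["F"] * 95 + ["I"] * 5,
]
print(detect_cuts(frames))  # -> [2]
```

In the actual two-phase scheme, candidates flagged this way would then be confirmed by the on-demand intensity analysis.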
Abstract: OLAP uses multidimensional structures to provide access to data for analysis. Traditionally, OLAP operations focus on retrieving data from a single data mart. An exception is the drill-across operator, which, however, is restricted to retrieving facts on the common dimensions of multiple data marts. Our concern is to define further operations for retrieving data from multiple data marts. Towards this, we have defined six operations that coalesce data marts; in doing so, we consider the common as well as the non-common dimensions of the data marts.
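A hedged sketch of the coalescing idea (not one of the authors' six operators): facts from two marts are joined on their common dimensions, while the non-common dimensions from both marts survive in the combined rows. The mart contents are hypothetical.

```python
# Two hypothetical data marts sharing the 'region' dimension:
sales = [  # dimensions: region, product
    {"region": "EU", "product": "A", "revenue": 100},
    {"region": "US", "product": "A", "revenue": 150},
]
returns = [  # dimensions: region, channel
    {"region": "EU", "channel": "web", "refund": 20},
    {"region": "US", "channel": "store", "refund": 30},
]

def coalesce(mart1, mart2, common):
    """Join facts that agree on the common dimensions; non-common
    dimensions of both marts are kept in the combined rows."""
    out = []
    for r1 in mart1:
        for r2 in mart2:
            if all(r1[d] == r2[d] for d in common):
                merged = dict(r1)
                merged.update(r2)
                out.append(merged)
    return out

combined = coalesce(sales, returns, common=["region"])
print(combined[0])
# -> {'region': 'EU', 'product': 'A', 'revenue': 100, 'channel': 'web', 'refund': 20}
```

Unlike plain drill-across, the non-common dimensions 'product' and 'channel' both remain available for analysis after coalescing.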
Abstract: For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. To optimize this cost, intensive research on designing robust index structures has been conducted in the past decade. Beyond these major considerations, other design issues also deserve attention due to their direct impact on I/O cost; in particular, an efficient buffer-management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy, named MONPAR, for a spatiotemporal database index structure, specifically one indexing objects moving over a road network. The proposed strategy is based on the data type (i.e., spatiotemporal data) and the organization of the index. For experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on a comparison of our strategy with well-known page-replacement techniques, such as LRU-based and priority-based buffers, we conclude that MONPAR outperforms its competitors for small and medium-sized buffers under all tested query distributions.
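MONPAR itself is not specified here; as a hedged sketch of the experimental setup, the LRU baseline below counts disk accesses (buffer misses) for a stream of index-page requests. The request stream and buffer size are hypothetical.

```python
from collections import OrderedDict

class LRUBuffer:
    """LRU page buffer that counts disk accesses, as in the comparison baseline."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # insertion order tracks recency
        self.disk_accesses = 0

    def request(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # hit: refresh recency
        else:
            self.disk_accesses += 1              # miss: fetch page from disk
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict least recently used
            self.pages[page_id] = True

buf = LRUBuffer(capacity=3)
for page in [1, 2, 3, 1, 4, 2, 1]:   # hypothetical range-query page accesses
    buf.request(page)
print(buf.disk_accesses)  # -> 5 (two requests were buffer hits)
```

A strategy like MONPAR would replace the eviction rule in `request` with one informed by the spatiotemporal data and index organization, aiming for fewer misses on the same stream.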
Abstract: The paper presents an optimization study based on genetic algorithms (GAs) for a radio-frequency applicator used in heating dielectric band products. The weakly coupled electro-thermal problem is analyzed using 2D FEM. The design variables in the optimization process are the voltage of a supplementary “guard" electrode and six geometric parameters of the applicator. Two objective functions are used: temperature uniformity and the total active power absorbed by the dielectric. Both mono-objective and multi-objective formulations are implemented in the GA optimization.
Abstract: A large number of semantic web service composition approaches have been developed by the research community, and each is more efficient than the others depending on the particular situation of use. A close look at the requirements of one's particular situation is therefore necessary to find a suitable approach. In this paper, we present a Technique Recommendation System (TRS) which, using a classification of state-of-the-art semantic web service composition approaches, provides the user with recommendations on which composition approach to use, based on parameters describing the situation of use. TRS has a modular architecture and uses production rules for knowledge representation.
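A hedged sketch of the production-rule idea: each rule maps conditions on the situation-of-use parameters to a recommended composition approach, and the first rule whose conditions all hold fires. The rule set and parameter names below are hypothetical, not the authors' knowledge base.

```python
# Hypothetical production rules: (conditions, recommended approach)
rules = [
    ({"automation": "full", "scale": "large"}, "AI-planning-based composition"),
    ({"automation": "full", "scale": "small"}, "logic-based composition"),
    ({"automation": "semi"}, "workflow-based composition"),
]

def recommend(situation):
    """Fire the first rule whose conditions all match the situation of use."""
    for conditions, approach in rules:
        if all(situation.get(k) == v for k, v in conditions.items()):
            return approach
    return "no recommendation"

print(recommend({"automation": "full", "scale": "large"}))  # -> AI-planning-based composition
print(recommend({"automation": "semi", "scale": "large"}))  # -> workflow-based composition
```

The modular architecture mentioned above would separate this inference step from the rule base, so the classification of approaches can be updated without touching the matching logic.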
Abstract: This paper proposes a novel approach to the question of lithofacies classification based on an assessment of the uncertainty in the classification results. The proposed approach uses multiple neural networks (NNs) and interval neutrosophic sets (INSs) to classify the input well-log data into outputs of multiple classes of lithofacies. A pair of n-class neural networks is used to predict n degrees of truth membership and n degrees of false membership. Indeterminacy memberships, or uncertainties in the predictions, are estimated using a multidimensional interpolation method. These three memberships form the INS used to support confidence in the results of multiclass classification. Based on the experimental data, our approach improves classification performance compared to an existing technique that applies only the truth membership. In addition, our approach can provide a measure of uncertainty in the problem of multiclass classification.
Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection plays a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural-network-based studies. In this study, we have explored the reasons for those disappointing results and implemented different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that use traditional metrics. To enable comparison, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics at a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care. To make more thorough use of the collected samples, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
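The k-fold cross-validation used to make thorough use of the scarce effort data can be sketched as follows: each of the k folds serves once as the test set while the remaining samples train the MLP (the model training itself is omitted here; the sample count and k are illustrative).

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) for each of the k folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder samples.
        end = start + fold_size if i < k - 1 else n_samples
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test

# With 10 project samples and k = 5, every sample is tested exactly once:
for train, test in k_fold_splits(n_samples=10, k=5):
    print(len(train), len(test))  # -> 8 2 for every fold
```

In each iteration the MLP would be trained on the 8 training samples and evaluated on the 2 held-out ones, and the k error estimates averaged.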
Abstract: Network security remains a priority for almost all companies. Existing security systems have shown their limits; thus, a new type of security system was born: honeypots. Honeypots are programs or decoy servers intended to attract attackers so that their behaviour can be studied. It is in this context that the Leurre.com project, which gathers about twenty platforms, was born. This article aims to specify a model of honeypot attacks. Our model describes, for a given platform, the evolution of attacks according to the hour at which they occur. Afterward, we identify the most attacked services by studying attacks on the various ports. It should be noted that this article was elaborated within the framework of the research projects on honeypots at LABTIC (Laboratory of Information Technologies and Communication).
Abstract: Modeling a distributed system allows us to represent its entire functionality. A working system instance, however, rarely exercises the whole functionality represented by the model; usually, some parts of this functionality need to be accessible only periodically. A reporting system based on the data warehouse concept is an intuitive example of a system in which some functionality is required only from time to time. When analyzing the enterprise risk associated with periodic changes in system functionality, we should consider the inaccessibility not only of the components (objects) but also of their functions (methods), and the impact of such a situation on system functionality from the business point of view. In this paper, we suggest that these risk attributes should be estimated from risk attributes specified at the requirements level (Use Cases in the UML model) on the basis of information about the structure of the model (presented at other levels of the UML model). We argue that it is desirable to consider the influence of periodic changes in requirements on enterprise risk estimation. Finally, a proposal for such a solution based on the UML system model is presented.
Abstract: The increasing development of wireless networks and the widespread popularity of handheld devices such as Personal Digital Assistants (PDAs), mobile phones and wireless tablets represent an incredible opportunity to enable mobile devices as a universal payment method for daily financial transactions. Unfortunately, several issues hamper the widespread acceptance of mobile payment, such as accountability properties, privacy protection, and the limitations of wireless networks and mobile devices. Recently, many mobile payment protocols based on public-key cryptography have been proposed. However, the limited capabilities of mobile devices and wireless networks make these protocols unsuitable for mobile networks. Moreover, these protocols were designed to preserve the traditional flow of payment data, which is vulnerable to attack and increases the user's risk. In this paper, we propose a private mobile payment protocol based on a client-centric model that employs only symmetric-key operations. The proposed protocol not only minimizes the computational operations and communication passes between the engaging parties, but also achieves complete privacy protection for the payer. Future work will concentrate on improving the verification solution to support mobile user authentication and authorization for mobile payment transactions.
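As a hedged sketch of the symmetric-key idea (not the authors' actual protocol), the payer can authenticate a payment message with an HMAC under a key pre-shared with its issuer, avoiding costly public-key operations on the mobile device. The key and message are hypothetical.

```python
import hmac
import hashlib

shared_key = b"payer-issuer-shared-secret"   # hypothetical pre-shared key

def authenticate(message: bytes) -> bytes:
    """Payer side: tag the payment message with HMAC-SHA256."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Issuer side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag, authenticate(message))

msg = b"pay 10.00 EUR to merchant-42"
tag = authenticate(msg)
print(verify(msg, tag))                             # -> True
print(verify(b"pay 999.00 EUR to attacker", tag))   # -> False
```

A single HMAC is orders of magnitude cheaper than an RSA signature on a constrained device, which is the motivation for the client-centric symmetric-key design described above; the full protocol additionally has to handle privacy of the payment data itself.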
Abstract: We study the problem of decision making with Dempster-Shafer belief structure. We analyze the previous work developed by Yager about using the ordered weighted averaging (OWA) operator in the aggregation of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for the cases where the smallest value is the best result. We suggest the introduction of the ordered weighted geometric (OWG) operator in the Dempster-Shafer framework. In this case, we also discuss the possibility of aggregating with an ascending order and we find that it is completely necessary as the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example where we can see the different results obtained by using the OWA, the Ascending OWA (AOWA), the OWG and the Ascending OWG (AOWG) operator.
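The aggregation operators compared above can be sketched directly: OWA reorders the arguments descending before applying the weights, the ascending variant (AOWA) sorts ascending for the case where the smallest value is best, and OWG is the geometric analogue, which is why it requires positive arguments. The argument values and weights are illustrative.

```python
def owa(values, weights):
    """Ordered weighted average: weights applied to values sorted descending."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def aowa(values, weights):
    """Ascending OWA: weights applied to values sorted ascending,
    for decision problems where the smallest value is the best result."""
    ordered = sorted(values)
    return sum(w * v for w, v in zip(weights, ordered))

def owg(values, weights):
    """Ordered weighted geometric operator: product of descending-ordered
    values raised to the weights; arguments must be positive."""
    ordered = sorted(values, reverse=True)
    result = 1.0
    for w, v in zip(weights, ordered):
        result *= v ** w
    return result

vals, w = [30, 10, 50], [0.5, 0.3, 0.2]
print(owa(vals, w))   # 0.5*50 + 0.3*30 + 0.2*10 = 36.0
print(aowa(vals, w))  # 0.5*10 + 0.3*30 + 0.2*50 = 24.0
```

The `v ** w` term in `owg` is undefined for negative payoffs with fractional weights, which is exactly why the abstract finds the ascending reordering necessary in the OWG case.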
Abstract: In this paper, we propose a robust adaptive fuzzy controller for a class of nonlinear systems with unknown dynamics. The method is based on a type-2 fuzzy logic system that approximates the unknown nonlinear function. The design of the controller's online adaptive scheme is based on the Lyapunov technique. Simulation results are given to illustrate the effectiveness of the proposed approach.
Abstract: In this paper, we propose an approach to the classification of fingerprint databases. It is based on the fact that a fingerprint image is composed of regular texture regions that can be successfully represented by co-occurrence matrices. We first extract features based on certain characteristics of the co-occurrence matrix and then use these features to train a neural network that classifies fingerprints into four common classes. The obtained results, compared with existing approaches, demonstrate the superior performance of our proposed approach.
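A hedged sketch of the feature-extraction step: a gray-level co-occurrence matrix counts how often pixel value i appears immediately to the left of value j, and simple statistics of this matrix (energy and contrast here, as illustrative choices) serve as texture features for the classifier. The tiny 3-level image is hypothetical.

```python
def cooccurrence(image, levels):
    """Co-occurrence matrix for the horizontal neighbour offset (0, 1)."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def energy(m):
    """Sum of squared normalized entries: high for uniform texture."""
    total = sum(sum(row) for row in m)
    return sum((v / total) ** 2 for row in m for v in row)

def contrast(m):
    """Intensity-difference-weighted sum: high for busy texture."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v / total
               for i, row in enumerate(m) for j, v in enumerate(row))

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 2, 2]]
m = cooccurrence(image, levels=3)
print(m[0][0], m[1][1], m[2][2])  # -> 2 2 5
print(energy(m), contrast(m))
```

In the actual approach, such features computed per region (usually over several offsets and directions) form the input vector for the four-class neural network.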
Abstract: There are two major variants of the simplex algorithm: the revised method and the standard, or tableau, method. Today, all serious implementations are based on the revised method because it is more efficient for sparse linear programming problems. However, a number of applications lead to dense linear problems, so our aim in this paper is to present computational results on a parallel implementation of the dense simplex method. The method is implemented on an SMP cluster using the C programming language and the Message Passing Interface (MPI). Preliminary computational results on randomly generated dense linear programs support our approach.
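For reference, a sequential sketch of the standard (tableau) simplex method for maximize c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0 — the variant whose dense row-by-row pivot is what parallelizes naturally across MPI processes. The tiny LP at the bottom is hypothetical; this sketch omits degeneracy and unboundedness handling.

```python
def simplex(c, A, b):
    """Tableau simplex for: maximize c.x s.t. A x <= b, x >= 0, b >= 0."""
    m, n = len(A), len(c)
    # Build the tableau with slack variables; last row is the objective.
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-x for x in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # slacks are basic initially
    while True:
        # Entering variable: most negative objective-row coefficient.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                          # optimal tableau reached
        # Leaving variable: minimum ratio test.
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 1e-9]
        _, row = min(ratios)
        basis[row] = col
        # Dense pivot: normalize pivot row, eliminate the column elsewhere.
        p = T[row][col]
        T[row] = [v / p for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * r for a, r in zip(T[i], T[row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Hypothetical LP: maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6
x, value = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
print(x, value)  # optimum 12 at x = 4, y = 0
```

The elimination loop touches every row of the dense tableau on every pivot; a parallel MPI implementation would distribute those rows across processes and broadcast the pivot row each iteration.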
Abstract: This paper presents an Extended Kalman Filter implementation of a single-camera Visual Simultaneous Localization and Mapping algorithm, a novel algorithm for the simultaneous localization and mapping problem widely studied in the mobile robotics field. The algorithm is based on vision and odometry. Odometry data is incremental and therefore accumulates error over time, since the robot may slip or be lifted; consequently, if odometry is used alone, the robot's position cannot be estimated accurately. In this paper, we show that combining odometry and visual landmarks via the extended Kalman filter can improve the position estimate. We use a Pioneer II robot and a motorized pan-tilt camera to implement the algorithm.
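A hedged one-dimensional sketch of the fusion idea (the paper's filter is the full EKF over pose and landmark states): odometry predicts the position while growing its uncertainty, and a visual-landmark observation then corrects it. All numeric values are hypothetical.

```python
def predict(x, P, odometry_step, Q):
    """Propagate the state with incremental odometry; error accumulates."""
    return x + odometry_step, P + Q

def update(x, P, z, R):
    """Correct with a landmark-derived position measurement z."""
    K = P / (P + R)                       # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 0.0                           # start at a known position
for _ in range(10):                       # drive 10 odometry steps of 1 m
    x, P = predict(x, P, odometry_step=1.0, Q=0.04)
print(round(P, 2))                        # -> 0.4: uncertainty grew with motion

x, P = update(x, P, z=9.7, R=0.1)         # landmark sighting near 9.7 m
print(round(P, 2))                        # -> 0.08: uncertainty shrank after fusion
```

This is exactly the behaviour claimed in the abstract: odometry alone lets the variance grow without bound, while each landmark observation pulls both the estimate and its uncertainty back down.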
Abstract: One of the fastest-growing areas in the embedded community is multimedia devices. Multimedia devices incorporate a number of complicated functions, such as motion estimation, in their operation. A multitude of implementations have been proposed to reduce the complexity of motion estimation, such as spiral search. We have studied implementations of spiral search and identified areas of improvement. We propose a modified spiral search algorithm with lower computational complexity than the original spiral search. We have implemented our algorithm on an embedded ARM-based architecture with a custom memory hierarchy. The resulting system yields an energy consumption reduction of up to 64% and a performance increase of up to 77%, with a small average video-quality penalty of 2.3 dB compared with the original spiral search algorithm.
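A hedged sketch of plain spiral search block matching (the paper's modified algorithm is not reproduced here): candidate motion vectors are visited in a square spiral outward from (0, 0), and the offset with the smallest sum of absolute differences (SAD) wins. The frames are hypothetical.

```python
def spiral_offsets(radius):
    """Offsets ordered by ring: (0, 0) first, then rings of growing radius."""
    yield (0, 0)
    for r in range(1, radius + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if max(abs(dx), abs(dy)) == r:   # perimeter of ring r only
                    yield (dx, dy)

def sad(block, frame, x, y):
    """Sum of absolute differences between block and frame window at (x, y)."""
    return sum(abs(block[j][i] - frame[y + j][x + i])
               for j in range(len(block)) for i in range(len(block[0])))

def spiral_search(block, frame, x, y, radius):
    best = None
    for dx, dy in spiral_offsets(radius):
        if 0 <= x + dx and 0 <= y + dy and \
           x + dx + len(block[0]) <= len(frame[0]) and \
           y + dy + len(block) <= len(frame):
            cost = sad(block, frame, x + dx, y + dy)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]

# Hypothetical 2x2 block that moved by (1, 0) between frames:
frame = [[0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
block = [[9, 9],
         [9, 9]]
print(spiral_search(block, frame, x=0, y=0, radius=1))  # -> (1, 0)
```

The spiral order matters because real motion vectors cluster near (0, 0), so early-termination variants (the kind of improvement the paper targets) can stop as soon as an inner ring yields a sufficiently low SAD.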
Abstract: This article presents the evolution of, and the technological changes implemented in, the full-scale simulators developed by the Simulation Department of the Instituto de Investigaciones Eléctricas (Mexican Electric Research Institute) and located at different training centers around Mexico. It describes the latest updates, mainly from the input/output point of view, of the current simulators at facilities of the electrical sector and related industry, including Comisión Federal de Electricidad (CFE, the Mexican utility company). Trends in these developments and their impact on operators are also presented.
Abstract: Optimal control of a Reverse Osmosis (RO) plant is studied in this paper, utilizing the auto-tuning concept in conjunction with a PID controller. A control scheme comprising an auto-tuning stochastic technique based on an improved Genetic Algorithm (GA) is proposed. For better evaluation of the process in the GA, a newly defined objective function based on root-mean-square error is used. Also, to achieve better GA performance, greater purity and a longer period of random-number generation are sought. The main improvement is replacing the uniform-distribution random number generator of the conventional GA with a newly designed hybrid random generator composed of a Cauchy distribution and a linear congruential generator, which provides independent and distinct random numbers at each individual step of the genetic operations. The performance of the newly proposed GA-tuned controller is compared with that of conventional controllers via simulation.
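A hedged sketch of such a hybrid generator: a linear congruential generator supplies uniform deviates, which an inverse-CDF transform maps to Cauchy-distributed deviates for the GA's stochastic steps. The LCG constants are the classic Numerical Recipes values, chosen here as an illustrative assumption, not necessarily the authors' parameters.

```python
import math

class HybridCauchyLCG:
    """LCG uniforms pushed through the inverse Cauchy CDF."""

    def __init__(self, seed=12345):
        self.state = seed
        # Numerical Recipes LCG constants (illustrative assumption):
        self.m, self.a, self.c = 2 ** 32, 1664525, 1013904223

    def uniform(self):
        """One LCG step: a deterministic uniform deviate in (0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return (self.state + 0.5) / self.m

    def cauchy(self, location=0.0, scale=1.0):
        """Inverse CDF of the Cauchy distribution applied to the uniform."""
        return location + scale * math.tan(math.pi * (self.uniform() - 0.5))

rng = HybridCauchyLCG(seed=42)
samples = [rng.cauchy() for _ in range(5)]
print(samples)  # heavy-tailed deviates centred on 0
```

The Cauchy distribution's heavy tails give mutation occasional large jumps (helping the GA escape local optima), while the LCG contributes the long, reproducible period mentioned in the abstract.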
Abstract: This paper presents a general trainable framework for fast and robust detection and verification of upright human faces and non-human objects in static images. To enhance the performance of the detection process, the technique we develop combines a fast neural network (FNN) and a classical neural network (CNN). In the FNN, the correlation between the input image and the weights of the hidden neurons is exploited to sustain a high level of detection accuracy. This enables the use of the Fourier transform, which significantly speeds up detection. The CNN is responsible for verifying the face region. A bootstrap algorithm is used to collect non-human objects, adding false detections to the training process for human and non-human objects. Experimental results on test images with both simple and complex backgrounds demonstrate that the proposed method achieves a high detection rate and a low false-positive rate in detecting both human faces and non-human objects.
Abstract: This paper proposes a method that discovers sequential patterns corresponding to a user's interests from sequential data. The method expresses these interests as constraint patterns, which can define relationships among the attributes of the items composing the data. The method recursively decomposes the constraint patterns into constraint subpatterns and evaluates these subpatterns in order to efficiently discover sequential patterns satisfying the constraint patterns. The paper also applies the method to sequential data composed of stock price indexes and verifies its effectiveness by comparing it with a method that does not use the constraint patterns.
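A hedged sketch of the core check (not the authors' full discovery algorithm): here a constraint pattern is modelled as a list of per-item predicates over item attributes, and a sequence satisfies it if the predicates match some subsequence in order. The stock-style items are hypothetical.

```python
def matches(sequence, constraint_pattern):
    """True if the predicates match an ordered subsequence of the items."""
    i = 0
    for item in sequence:
        if i < len(constraint_pattern) and constraint_pattern[i](item):
            i += 1          # this item satisfies the next constraint
    return i == len(constraint_pattern)

# Hypothetical items with attributes, e.g. daily index moves:
sequence = [
    {"index": "A", "change": +1.2},
    {"index": "B", "change": -0.4},
    {"index": "A", "change": -2.0},
    {"index": "A", "change": +0.8},
]

# Constraint pattern: a rise in index A followed later by a fall in index A.
pattern = [
    lambda it: it["index"] == "A" and it["change"] > 0,
    lambda it: it["index"] == "A" and it["change"] < 0,
]
print(matches(sequence, pattern))  # -> True
```

The recursive decomposition described above would split such a pattern into subpatterns (e.g. the rise alone), so that sequences failing a cheap subpattern check are pruned before the full pattern is evaluated.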