Abstract: A data warehouse (DW) is a system whose value lies in supporting decision-making through querying. Queries to a DW are critical because of their complexity and length: they often access millions of tuples and involve joins between relations as well as aggregations. Materialized views can provide much better performance for DW queries. However, these views incur a maintenance cost, so materializing all views is not feasible. An important challenge of the DW environment is therefore materialized view selection, since we have to resolve the trade-off between query performance and view maintenance. In this paper, we introduce a new approach to this challenge based on Two-Phase Optimization (2PO), a combination of Simulated Annealing (SA) and Iterative Improvement (II), using the Multiple View Processing Plan (MVPP). Our experiments show that 2PO outperforms the original algorithms in terms of query processing cost and view maintenance cost.
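As a rough illustration of the annealing phase of such a selection procedure, the sketch below flips one view's materialization per move and accepts worse states with a temperature-dependent probability. The cost model (per-view hit/miss query costs plus maintenance cost), the view names, and the cooling schedule are illustrative assumptions, not the paper's MVPP-based formulation.

```python
import math
import random

def total_cost(materialized, query_cost, maint_cost):
    # query_cost maps a view to (cost when materialized, cost when recomputed);
    # every materialized view additionally pays its maintenance cost.
    cost = sum(hit if v in materialized else miss
               for v, (hit, miss) in query_cost.items())
    return cost + sum(maint_cost[v] for v in materialized)

def select_views_sa(views, query_cost, maint_cost,
                    t=100.0, cooling=0.95, steps=2000, seed=1):
    # Simulated annealing over subsets of views: flip one view's
    # materialization per move; accept uphill moves with prob. exp(-delta/t).
    random.seed(seed)
    state, best = set(), set()
    for _ in range(steps):
        neighbor = state ^ {random.choice(views)}
        delta = (total_cost(neighbor, query_cost, maint_cost)
                 - total_cost(state, query_cost, maint_cost))
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = neighbor
        if (total_cost(state, query_cost, maint_cost)
                < total_cost(best, query_cost, maint_cost)):
            best = set(state)
        t *= cooling
    return best
```

A view is worth materializing when its maintenance cost is smaller than the query savings it buys, which is exactly what the accepted states converge toward.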
Abstract: Game theory can be used to analyze conflicting interests in the field of information hiding. In this paper, a two-phase game is used to build an embedder-attacker system for analyzing the limits of the hiding capacity of embedding algorithms: the embedder minimizes the expected damage while the attacker maximizes it. In the system, the embedder first spends its resource to build embedded units (EU) and inserts the secret information into the EU. The attacker then distributes its resource evenly over the attacked EU. The expected equilibrium damage, which is the maximum damage from the point of view of the attacker and the minimum from that of the embedder, is evaluated for the case in which the attacker attacks a subset of all the EU. Furthermore, the optimal equilibrium capacity of hiding information is calculated through the optimal number of EU carrying the embedded secret information. Finally, illustrative examples of the optimal equilibrium capacity are presented.
Abstract: Matching algorithms are of significant importance in speaker recognition: as a last step, feature vectors of the unknown utterance are compared to the feature vectors of the modeled speakers, and a similarity score is computed for every model in the speaker database. Depending on the type of speaker recognition, these scores are used to determine the author of the unknown speech samples. For speaker verification, the similarity score is tested against a predefined threshold and either an acceptance or a rejection is obtained. For speaker identification, the outcome depends on whether the identification is open set or closed set. In closed-set identification, the model that yields the best similarity score is accepted. In open-set identification, the best score is additionally tested against a threshold, so there is one more possible output: that the speaker is not among the registered speakers in the existing database. This paper focuses on closed-set speaker identification using a modified version of a well-known matching algorithm. The new matching algorithm shows better performance on the YOHO international speaker recognition database.
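The decision rules described above can be sketched as follows. The similarity score here is a hypothetical negative mean nearest-vector distance, standing in for whatever matching algorithm is used; the speaker names and vectors are illustrative.

```python
def score(model, utterance):
    # Similarity score: negative mean distance between each utterance frame
    # and its closest model vector (higher means more similar).
    total = 0.0
    for frame in utterance:
        total += min(sum((a - b) ** 2 for a, b in zip(frame, m)) ** 0.5
                     for m in model)
    return -total / len(utterance)

def identify_closed_set(models, utterance):
    # Closed set: the enrolled speaker with the best score is accepted.
    return max(models, key=lambda spk: score(models[spk], utterance))

def verify(model, utterance, threshold):
    # Verification: accept iff the score clears a predefined threshold.
    return score(model, utterance) >= threshold
```

Open-set identification combines both rules: take the closed-set winner, then apply the verification threshold to its score.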
Abstract: In this paper we propose a new approach to constructing the Delaunay triangulation and an optimal algorithm for the case of multidimensional spaces (d ≥ 2). Analyzing the current state of the art, one can conclude that the ideas behind the existing efficient algorithms, developed for the planar case (d = 2), are not simple to generalize to the multidimensional case without loss of efficiency. To solve this problem we offer an efficient algorithm that satisfies all the given requirements. The theoretical complexity of the problem, however, cannot be improved upon, since the worst-case optimality of algorithms solving this problem has been proved.
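The defining property any such construction must maintain is the empty-circumcircle condition: a simplex belongs to the Delaunay triangulation only if its circumsphere contains no other input point. A minimal 2-D sketch of that test, using the standard in-circle determinant predicate (not the paper's algorithm):

```python
def in_circumcircle(a, b, c, p):
    # True if p lies strictly inside the circumcircle of triangle (a, b, c),
    # assuming a, b, c are given in counter-clockwise order.
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

def is_delaunay(triangle, points):
    # The triangle is Delaunay iff no other input point violates the
    # empty-circumcircle condition.
    a, b, c = triangle
    return not any(in_circumcircle(a, b, c, p)
                   for p in points if p not in triangle)
```

In d dimensions the same idea becomes a (d+2)-by-(d+2) determinant over the circumsphere of a d-simplex.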
Abstract: Modern spatial database management systems require a suitable Spatial Access Method (SAM) in order to solve complex spatial queries efficiently. Here, the spatial data structure takes a prominent place in the SAM: an inadequate data structure leads to poor algorithmic choices and a deficient understanding of algorithm behavior on the spatial database. A key step in developing a better semantic spatial object data structure is to quantify the performance effects of semantic and outlier detection that are not reflected in previous tree structures (the R-Tree and its variants). This paper explores a novel SSRO-Tree as a SAM for the topo-semantic approach. The paper shows how to identify and handle semantic spatial objects together with outlier objects during page overflow/underflow, using gain/loss metrics. We introduce a new SSRO-Tree algorithm which achieves better performance in practice than the R*-Tree and RO-Tree algorithms on selection queries.
Abstract: As network-based technologies become omnipresent, the demand to secure networks and systems against threats increases. One of the effective ways to achieve higher security is the use of intrusion detection systems (IDS), software tools that detect anomalies in a computer or network. In this paper, an IDS has been developed using an improved machine-learning-based algorithm, the Locally Linear Neuro-Fuzzy Model (LLNF), for classification, although this model was originally used for system identification. A key technical challenge in IDS and LLNF learning is the curse of high dimensionality. Therefore a feature selection phase, applicable to any IDS, is proposed. Investigating the use of three feature selection algorithms in this model, we show that adding the feature selection phase reduces the computational complexity of our model. Feature selection algorithms require a feature goodness measure; the use of both a linear and a non-linear measure, the linear correlation coefficient and mutual information respectively, is investigated.
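A filter-style feature selection step using the linear correlation coefficient as the goodness measure can be sketched as below (a generic illustration of the idea, not the paper's three algorithms; the sample data in any use would be hypothetical):

```python
def pearson(xs, ys):
    # Linear correlation coefficient between a feature column and the labels.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(samples, labels, k):
    # Rank features by |correlation with the label| and keep the top k,
    # shrinking the input dimension before training the classifier.
    d = len(samples[0])
    scores = [(abs(pearson([s[i] for s in samples], labels)), i)
              for i in range(d)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]
```

Mutual information would replace `pearson` with an estimate over discretized feature values, capturing non-linear dependence as well.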
Abstract: This paper presents a software quality support tool: a Java source code evaluator and code profiler based on computational intelligence techniques. It is a Java prototype developed by the AI Group [1] of the Research Laboratories at Universidad de Palermo: an Intelligent Java Analyzer (in Spanish: Analizador Java Inteligente, AJI). It represents a new approach to evaluating and identifying inaccurate source code usage and, transitively, the software product itself.
The aim of this project is to provide the software development industry with a new tool to increase software quality by extending the value of source code metrics through computational intelligence.
Abstract: The paper proposes a unified model for multimedia data retrieval which includes data representatives, content representatives, an index structure, and search algorithms. The multimedia data are defined as k-dimensional signals indexed in a multidimensional k-tree structure. The benefits of the unified k-tree model were demonstrated by running the data retrieval application on a test-bed cluster of six networked nodes. The tests were performed with two retrieval algorithms: one that allows parallel searching using a single feature, and a second that performs a weighted cascade search for multiple-feature querying. The experiments show a significant reduction of retrieval time while maintaining the quality of the results.
Abstract: Duplicated region detection is a technique for exposing copy-paste forgeries in digital images. Copy-paste is one of the common types of forgery, cloning a portion of an image in order to conceal or duplicate a specific object. In this type of forgery detection, extracting a robust block feature and the high time complexity of the matching step are the two main open problems. This paper concentrates on computational time and proposes a local block matching algorithm based on block clustering to improve the time complexity. The time complexity of the proposed algorithm is formulated, and the effects of two parameters, block size and number of clusters, on its efficiency are considered. The experimental results and mathematical analysis demonstrate that this algorithm is more cost-effective than lexicographic-sorting algorithms in terms of time complexity when the image is complex.
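The core idea of cluster-then-match can be sketched as follows. The block feature (mean intensity), the quantization into clusters, and the exact-match criterion are simplifying assumptions for illustration; a real detector would use a robust feature and a similarity threshold.

```python
def split_blocks(image, b):
    # Slide a b-by-b window over a grayscale image (list of pixel rows) and
    # record each block's position, pixel values, and mean intensity.
    h, w = len(image), len(image[0])
    blocks = []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            vals = [image[y + i][x + j] for i in range(b) for j in range(b)]
            blocks.append(((y, x), vals, sum(vals) / len(vals)))
    return blocks

def find_duplicates(image, b=2, n_clusters=8):
    # Cluster blocks by quantized mean intensity, then match only within a
    # cluster: far fewer comparisons than all-pairs (or full lexicographic
    # sorting) over every block of the image.
    clusters = {}
    for pos, vals, mean in split_blocks(image, b):
        clusters.setdefault(int(mean) * n_clusters // 256, []).append((pos, vals))
    pairs = []
    for members in clusters.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                (p1, v1), (p2, v2) = members[i], members[j]
                if v1 == v2:
                    pairs.append((p1, p2))
    return pairs
```

With blocks spread over c clusters, the quadratic matching cost drops roughly from O(n²) to O(c · (n/c)²) = O(n²/c) for balanced clusters, which is the lever the block size and cluster count control.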
Abstract: Deadlock detection is one of the important problems in distributed systems, and different solutions have been proposed for it. Among the many deadlock detection algorithms, edge-chasing has been the most widely used. In an edge-chasing algorithm, a special message called a probe is created and sent along dependency edges. When the initiator of a probe receives its own probe back, the existence of a deadlock is revealed. But these algorithms are not problem-free: one issue is that they can miss some deadlocks and even report false deadlocks. A key point not addressed in the literature is how a process that is blocked, waiting to obtain its required resources, can actually respond to probe messages in the system. The question of which process should be victimized in order to achieve better performance when multiple cycles pass through one single process has also received little attention. In this paper, one of the basic concepts of operating systems, the daemon, is used to solve these problems. The proposed algorithm sends probe messages to the mandatory daemons and collects enough information to effectively identify and resolve multi-cycle deadlocks in distributed systems.
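The basic edge-chasing scheme referred to above can be sketched as a centralized simulation (the distributed version forwards the probes as real messages; process names and the wait-for table here are illustrative):

```python
def detect_deadlock(wait_for, initiator):
    """Edge-chasing probe propagation, simulated centrally.

    A probe conceptually carries (initiator, sender, receiver) and is
    forwarded along wait-for edges; if it ever reaches the initiator
    again, the initiator lies on a cycle, i.e. it is deadlocked.
    """
    seen = set()
    frontier = [initiator]
    while frontier:
        sender = frontier.pop()
        for receiver in wait_for.get(sender, ()):
            if receiver == initiator:
                return True        # the probe came back: deadlock
            if receiver not in seen:
                seen.add(receiver)
                frontier.append(receiver)
    return False
```

The paper's point is precisely that a blocked process cannot run this forwarding logic itself, which is why the forwarding is delegated to daemons.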
Abstract: In this paper a new genetic algorithm based on a heuristic operator and a Centre of Mass selection operator (CMGA) is designed for the unbounded knapsack problem (UKP), an NP-hard combinatorial optimization problem. The proposed genetic algorithm is based on a heuristic operator which utilizes problem-specific knowledge. This centre of mass operator, when combined with other genetic operators, forms an algorithm competitive with existing ones. Computational results show that the proposed algorithm is capable of obtaining high-quality solutions for standard randomly generated knapsack instances. A comparative study of CMGA against a simple GA on unbounded knapsack instances of size up to 200 shows the superiority of CMGA. Thus CMGA is an efficient tool for solving the UKP, and it is also competitive with other genetic algorithms.
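A minimal GA for the UKP is sketched below to fix the encoding and operators involved. This is not the authors' CMGA: a simple greedy repair stands in for the problem-specific heuristic operator, and truncation replaces centre-of-mass selection; all parameter values are assumptions.

```python
import random

def fitness(counts, values, weights, capacity):
    # Total value of the packing; infeasible packings score zero.
    w = sum(c * wt for c, wt in zip(counts, weights))
    return sum(c * v for c, v in zip(counts, values)) if w <= capacity else 0

def repair(counts, weights, capacity):
    # Stand-in heuristic operator: drop copies of the heaviest contribution
    # until the packing fits the capacity.
    counts = list(counts)
    while sum(c * w for c, w in zip(counts, weights)) > capacity:
        i = max(range(len(counts)), key=lambda k: counts[k] * weights[k])
        counts[i] -= 1
    return counts

def ga_ukp(values, weights, capacity, pop=30, gens=60, seed=7):
    random.seed(seed)
    n = len(values)
    maxc = [capacity // w for w in weights]          # per-item copy bound
    P = [repair([random.randint(0, m) for m in maxc], weights, capacity)
         for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda c: -fitness(c, values, weights, capacity))
        elite = P[: pop // 2]                        # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)
            cut = random.randrange(n)                # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n)                  # point mutation
            child[i] = random.randint(0, maxc[i])
            children.append(repair(child, weights, capacity))
        P = elite + children
    return max(P, key=lambda c: fitness(c, values, weights, capacity))
```

The chromosome is a vector of copy counts, which is what distinguishes the unbounded variant from 0/1 knapsack encodings.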
Abstract: Power system security is a major concern in real-time operation. The conventional method of security evaluation consists of performing continuous load flow and transient stability studies with a simulation program. This is highly time consuming and infeasible for on-line application. Pattern recognition (PR) is a promising tool for on-line security evaluation. This paper proposes a Support Vector Machine (SVM) based binary classification for static and transient security evaluation. The proposed SVM-based PR approach is implemented on the New England 39-bus and IEEE 57-bus systems. The simulation results of the SVM classifier are compared with those of other classifier algorithms: Method of Least Squares (MLS), Multi-Layer Perceptron (MLP), and Linear Discriminant Analysis (LDA).
Abstract: Since the presentation of the backpropagation algorithm, a vast variety of improvements to the technique for training feed-forward neural networks have been proposed. This article focuses on two classes of acceleration techniques. The first, known as local adaptive techniques, is based on weight-specific information only, such as the temporal behavior of the partial derivative with respect to the current weight. The second, known as dynamic adaptation methods, dynamically adapts the momentum factor α and learning rate η with respect to the iteration number or the gradient. Some of the most popular learning algorithms are described. These techniques have been implemented and tested on several problems and measured in terms of gradient and error function evaluations and percentage of success. Numerical evidence shows that these techniques improve the convergence of the backpropagation algorithm.
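A representative local adaptive technique is the Rprop-style sign rule, where each weight keeps its own step size, grown while the sign of ∂E/∂w stays stable and shrunk after a sign flip (by contrast, a dynamic adaptation method would update Δw(t) = −η ∂E/∂w + α Δw(t−1) globally). A minimal sketch on an arbitrary gradient function, with illustrative factors 1.2 and 0.5:

```python
def sign(x):
    return (x > 0) - (x < 0)

def rprop(grad, w, step=0.1, up=1.2, down=0.5, iters=200):
    # Per-weight adaptive step sizes driven only by gradient signs:
    # consistent sign -> accelerate that weight, sign flip -> back off.
    steps = [step] * len(w)
    prev = [0] * len(w)
    for _ in range(iters):
        g = grad(w)
        for i in range(len(w)):
            s = sign(g[i])
            if s * prev[i] > 0:
                steps[i] *= up      # derivative sign stable: grow the step
            elif s * prev[i] < 0:
                steps[i] *= down    # sign flip: we overshot, shrink the step
            w[i] -= s * steps[i]
            prev[i] = s
    return w
```

Because only the sign of the derivative is used, the update is insensitive to the magnitude of the gradient, which is what makes this family robust on badly scaled error surfaces.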
Abstract: Internet Access Technologies (IAT) provide the means through which the Internet can be accessed. The choice of a suitable Internet technology is increasingly becoming an important issue for ISP clients. Currently, the choice of IAT is based on the discretion and intuition of the managers concerned and on reliance on ISPs. In this paper we propose a model and design algorithms for Internet access technology selection. In the proposed model, three ranking approaches are introduced: concurrent ranking, stepwise ranking, and weighted ranking. The model ranks the IAT based on distance measures computed in ascending order, while the global ranking system assigns weights to each IAT according to the position held in each ranking technique, determines the total weight of a particular IAT, and ranks them in descending order. The final output is an objective ranking of the IAT in descending order.
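The position-weight aggregation described above can be sketched as follows. The technology names, distance values, and the linear position-to-weight scheme are illustrative assumptions, not the paper's three specific ranking approaches:

```python
def rank_by_distance(distances):
    # One ranking pass: technologies ordered by their distance measure,
    # ascending (smaller distance = better fit to the requirements).
    return sorted(distances, key=distances.get)

def global_ranking(rankings):
    # Assign position weights within each ranking (best position earns the
    # most), total them per technology, and rank by total weight, descending.
    totals = {}
    for order in rankings:
        n = len(order)
        for pos, tech in enumerate(order):
            totals[tech] = totals.get(tech, 0) + (n - pos)
    return sorted(totals, key=totals.get, reverse=True)
```

Each ranking approach contributes one ordering; the global step merges them into the single descending list that forms the final output.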
Abstract: To create a solution for a specific problem in machine learning, the solution is constructed from the data or by using a search method. Genetic algorithms are a model of machine learning that can be used to find near-optimal solutions. While the great advantage of genetic algorithms is that they find a solution through evolution, this is also their biggest disadvantage: evolution is inductive, and in nature life does not evolve towards a good solution but evolves away from bad circumstances, which can cause a species to evolve into an evolutionary dead end. In order to reduce the effect of this disadvantage, we propose a new learning criterion that can be included in the generations of a genetic algorithm: it compares the previous population with the current population and then decides whether it is effective to continue with the previous or the current population. The proposed learning tool is called Keeping Efficient Population (KEP). We applied a KEP-based GA to the production line layout problem; as a result, KEP keeps the evaluation improving and stops any deviation in the evaluation.
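The KEP decision step can be sketched as a wrapper around the generation loop. This is a simplified, mutation-only GA written to isolate the keep-or-revert comparison; the best-fitness comparison criterion and all operators are illustrative assumptions:

```python
import random

def evolve_with_kep(fitness, population, mutate, generations=50, seed=3):
    # After producing a new generation, compare it with the previous one and
    # keep whichever is better, so the best evaluation can never regress.
    random.seed(seed)
    def best(pop):
        return max(fitness(ind) for ind in pop)
    for _ in range(generations):
        candidate = [mutate(ind) for ind in population]
        if best(candidate) >= best(population):
            population = candidate      # KEP: continue with the new generation
        # else: keep the previous population, discarding the regressive step
    return population
```

Reverting a whole generation is a coarser safeguard than per-individual elitism, but it is exactly the population-versus-population comparison the abstract describes.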
Abstract: The objective of this paper is to introduce a unified optimization framework for research and education. The OPTILIB framework implements different general-purpose algorithms for combinatorial optimization and for minimum search on standard continuous test functions. The strengths of this library are the straightforward integration of new optimization algorithms and problems, as well as the visualization of the optimization process, either for different methods exploring the search space individually or for the real-time visualization of different methods in parallel. The usage of several implemented methods is further presented on the basis of two use cases, with a focus on algorithm visualization. First it is demonstrated how different methods can be compared conveniently using OPTILIB on the example of different iterative improvement schemes for the TRAVELING SALESMAN PROBLEM. A second study emphasizes how the framework can be used to find global minima in the continuous domain.
Abstract: Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices, compared with single-point slicing approaches, in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably reduces the slicing time and effort required to measure module parallelism and functional cohesion.
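For reference, single-point backward slicing on a PDG is plain reverse reachability along dependence edges, as sketched below (the statement numbering and dependence table are a made-up example; the paper's contribution is computing all such slices in one traversal rather than repeating this per point):

```python
def backward_slice(pdg, criterion):
    # pdg maps each statement to the statements it depends on (data or
    # control dependences). The backward slice is everything reachable from
    # the criterion along those edges, i.e. every statement that can
    # influence its value.
    sliced, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node not in sliced:
            sliced.add(node)
            stack.extend(pdg.get(node, ()))
    return sliced
```

Forward slicing is the same traversal over the reversed edges, answering which statements a given point can influence.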
Abstract: In this paper, a solution is presented for a robotic manipulation problem in industrial settings: sensing objects on a conveyor belt, identifying the target, and planning and tracking an interception trajectory between the end effector and the target. Such a problem can be formulated as a combination of object recognition, tracking, and interception. For this purpose, we integrated a vision system into the manipulation system and employed tracking algorithms. The control approach is implemented on a real industrial manipulation setting, which consists of a conveyor belt, objects moving on it, a robotic manipulator, and a visual sensor above the conveyor. The trajectory for robotic interception at a rendezvous point on the conveyor belt is calculated analytically. Test results show that tracking the target along this trajectory results in interception and grasping of the target object.
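The analytic rendezvous calculation reduces to a quadratic in one common simplified model: a target at position p moving with constant belt velocity v, and an effector starting at r with maximum speed s, meet at the earliest t with |p + v·t − r| = s·t. The sketch below assumes that planar constant-velocity model, not the paper's actual kinematics:

```python
import math

def intercept_time(p, v, r, s):
    # Earliest t >= 0 with |p + v*t - r| = s*t, i.e. the rendezvous time;
    # expanding gives a*t^2 + b*t + c = 0 with the coefficients below.
    dx, dy = p[0] - r[0], p[1] - r[1]
    a = v[0] ** 2 + v[1] ** 2 - s ** 2
    b = 2 * (dx * v[0] + dy * v[1])
    c = dx ** 2 + dy ** 2
    if abs(a) < 1e-12:                       # equal speeds: linear case
        return -c / b if b < 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                          # the target can never be reached
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    valid = [t for t in roots if t >= 0]
    return min(valid) if valid else None

def rendezvous(p, v, t):
    # Interception point on the belt at the computed time.
    return (p[0] + v[0] * t, p[1] + v[1] * t)
```

Taking the smallest non-negative root picks the earliest feasible interception, which keeps the tracking window short.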
Abstract: In this contribution an innovative platform is presented that integrates intelligent agents into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting various assessment agents for e-learning environments. The agents are implemented to provide intelligent assessment services based on computational intelligence techniques such as Bayesian Networks and Genetic Algorithms. The use of new and emerging technologies like web services allows the provided services to be integrated into any web-based legacy e-learning environment.
Abstract: An iterative algorithm is proposed and tested on Cournot game models; it is based on the convergence of sequential best responses and the use of a genetic algorithm for determining each player's best response to a given strategy profile of its opponents. An extra outer loop is used to address the problem of finite accuracy, which is inherent in genetic algorithms, since the set of feasible values in such an algorithm is finite. The algorithm is tested on five Cournot models: three with a convergent best-replies sequence, one with divergent sequential best replies, and one with "local NE traps" [14], where classical local search algorithms fail to identify the Nash Equilibrium. After a series of simulations, we conclude that the proposed algorithm converges to the Nash Equilibrium, with any level of accuracy needed, in all but the case where the sequential best-replies process diverges.
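The sequential best-replies backbone can be illustrated on a standard linear Cournot duopoly (the paper replaces the closed-form best response below with a GA; the demand and cost parameters here are illustrative assumptions):

```python
def best_response(q_other, a=100.0, b=1.0, c=10.0):
    # Analytic best response for a linear Cournot duopoly with inverse
    # demand P = a - b*(q1 + q2) and constant marginal cost c:
    # maximize (a - b*(q_i + q_other) - c) * q_i  =>  q_i = (a-c-b*q_other)/(2b).
    return max(0.0, (a - c - b * q_other) / (2 * b))

def sequential_best_replies(q1=0.0, q2=0.0, iters=100):
    # Iterate best replies in turn; when the sequence converges, its fixed
    # point is a Nash Equilibrium (here q1 = q2 = (a-c)/(3b) = 30).
    for _ in range(iters):
        q1 = best_response(q2)
        q2 = best_response(q1)
    return q1, q2
```

When the best-response map is a contraction, as in this linear case, the sequence converges geometrically; the divergent test case in the paper is exactly one where this property fails.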