Abstract: This paper presents the architecture of current filesystem
implementations as well as our new filesystem SpadFS and operating
system Spad with a rewritten VFS layer targeted at high-performance
I/O applications. The paper presents microbenchmarks and real-world
benchmarks of different filesystems on the same kernel as well as
benchmarks of the same filesystem on different kernels, enabling
the reader to conclude how much the performance of various tasks is
affected by the operating system and how much by the physical
layout of data on disk. The paper describes our novel features, most
notably continuous allocation of directories and cross-file readahead,
and shows their impact on performance.
Abstract: Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization...). Each of these methods includes more than one algorithm. A data mining system involves different user categories, which means that the user's behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory end, which one for a decisional end, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing data mining systems. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is made on spatial data. Principal results are presented.
Abstract: The need to have standards has always been a priority
for all the disciplines in the world. Today, standards such as XML and
USB are trying to create a universal interface for their respective
areas. The information describing every family of entities in a given
discipline has a great deal in common; this shared information is
known as metadata. A lot of work has been done in specific domains
such as IEEE LOM and MPEG-7, but these efforts do not aim at the
universality of creating metadata for all entities, where an entity
(object) is not restricted to software terms. This paper addresses the
problem of universal metadata definition, which may lead to an
increase in the precision of search.
Abstract: This paper introduces a temporal epistemic logic
CBCTL that updates agents' belief states through communications
among them, based on computational tree logic (CTL). In practical
environments, communication channels between agents may not be
secure, and in bad cases agents may suffer blackouts. In this study,
we provide an inform* protocol based on the ACL of FIPA, and declare
the presence of secure channels between two agents as time-dependent.
Thus, the belief state of each agent is updated along with the progress
of time. We present a prover, that is, a reasoning system for a given
formula in a given situation of an agent; if the formula is directly
provable or can be validated through chains of communications, the
system returns the proof.
Abstract: In this paper we design and implement a new
ensemble of classifiers based on a sequence of classifiers, each
specialized in the regions of the training dataset where the errors of
its previously trained homologues are concentrated. In order to
separate these regions, and to determine the aptitude of each
classifier to respond properly to a new case, another set of
classifiers built hierarchically is used. We explore a selection-based
variant to combine the base classifiers. We validate this model with
different base classifiers using 37 training datasets. A statistical
comparison of these models with the well-known Bagging and
Boosting is carried out, obtaining significantly superior results with
the hierarchical ensemble using the Multilayer Perceptron as the base
classifier. We thus demonstrate the efficacy of the proposed ensemble,
as well as its applicability to general problems.
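The abstract does not give the construction in detail; the following Python fragment is only an illustrative sketch of a selection-based sequential ensemble in that spirit (each member trained where earlier members erred, and a separate classifier routing new cases), not the authors' algorithm. The scikit-learn models, synthetic data and all parameters are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def fit_sequential_ensemble(X, y, n_stages=3):
    members, selector_labels = [], np.zeros(len(y), dtype=int)
    mask = np.ones(len(y), dtype=bool)           # cases still mishandled so far
    for stage in range(n_stages):
        if len(np.unique(y[mask])) < 2:          # remaining errors are single-class; stop
            break
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)
        clf.fit(X[mask], y[mask])
        members.append(clf)
        wrong = clf.predict(X) != y              # errors over the whole training set
        selector_labels[mask & ~wrong] = stage   # this stage handles these cases
        mask = mask & wrong
        if not mask.any():
            break
    selector = DecisionTreeClassifier().fit(X, selector_labels)  # routes new cases
    return members, selector

def predict_sequential_ensemble(members, selector, X):
    routes = selector.predict(X)
    return np.array([members[min(r, len(members) - 1)].predict(x.reshape(1, -1))[0]
                     for r, x in zip(routes, X)])

# Tiny usage example on synthetic two-class data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
members, selector = fit_sequential_ensemble(X, y)
print(predict_sequential_ensemble(members, selector, X[:5]), y[:5])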
Abstract: A neuron can emit spikes on an irregular time basis, and by averaging over a certain time window one would ignore a lot of information. It is known that in the context of fast information processing there is not sufficient time to sample an average firing rate of the spiking neurons. The present work shows that spiking neurons are capable of computing radial basis functions by storing the relevant information in the neurons' delays. One of the fundamental findings of this research is that using overlapping receptive fields to encode the data patterns increases the network's clustering capacity. The clustering algorithm discussed here is interesting from both a computer science and a neuroscience point of view.
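As an illustration of the overlapping-receptive-field idea, the following Python sketch encodes a scalar value into spike times with Gaussian receptive fields (neurons whose centres lie close to the value fire early, distant ones fire late); the neuron count, time window and width factor are assumed values, not taken from the paper.

import numpy as np

def gaussian_receptive_field_encoding(value, n_neurons=8, v_min=0.0, v_max=1.0,
                                      t_max=10.0, gamma=1.5):
    centres = np.linspace(v_min, v_max, n_neurons)
    width = (v_max - v_min) / (gamma * (n_neurons - 1))        # overlapping fields
    activation = np.exp(-0.5 * ((value - centres) / width) ** 2)   # in (0, 1]
    spike_times = t_max * (1.0 - activation)                   # high activation -> early spike
    return spike_times

# Two nearby values produce similar spike-time patterns, which is what lets a
# delayed-synapse network cluster them together.
print(gaussian_receptive_field_encoding(0.30))
print(gaussian_receptive_field_encoding(0.35))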
Abstract: A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. The coder is a continuous-tone still-image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to the bit-importance codification. The subcomponents so obtained are reordered by means of a highly configurable alignment system, depending on the application, that makes it possible to reconfigure the elements of the image and to obtain different levels of importance from which the bit stream will be generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream is itself a bit stream that codes a compressed still image. However, the use of a packing system on the bit stream after the VBLm allows the construction of a final, highly scalable bit stream from a basic image level and one or several enhancement levels.
Abstract: Training neural networks to capture an intrinsic
property of a large volume of high-dimensional data is a difficult
task, as the training process is computationally expensive. Input
attributes should be carefully selected to keep the dimensionality of
input vectors relatively small.
Technical indexes commonly used for stock market prediction
with neural networks are investigated to determine their effectiveness
as inputs. A feed-forward neural network trained with the
Levenberg-Marquardt algorithm is applied to perform one-step-ahead
forecasting of NASDAQ and Dow stock prices.
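As a rough illustration of one-step-ahead forecasting with a small feed-forward network, the following Python sketch builds lagged inputs from a synthetic price series; scikit-learn offers no Levenberg-Marquardt trainer, so the 'lbfgs' solver is used purely as a stand-in, and the data, lag count and network size are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_one_step_ahead(series, n_lags=5):
    """Turn a price/indicator series into (X, y) pairs for one-step-ahead forecasting."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

prices = np.cumsum(np.random.randn(300)) + 100.0    # synthetic stand-in for an index
X, y = make_one_step_ahead(prices, n_lags=5)

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000)
model.fit(X[:-50], y[:-50])                         # hold out the last 50 points
print("test MSE:", np.mean((model.predict(X[-50:]) - y[-50:]) ** 2))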
Abstract: Finding minimal forms of logical functions has important applications in the design of logical circuits. This task is solved by many different methods, but, frequently, they are not suitable for a computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a unique procedure of computing and thus can be simply implemented, but, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable method for finding an optimum for logical functions with a high number of values, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem, which, unfortunately, is an NP-hard combinatorial problem. Therefore it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
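To make the set covering formulation concrete, the following Python sketch runs a tiny genetic algorithm that selects a cheap subset of prime implicants covering all minterms; the fitness function, operators and parameter values are illustrative assumptions, not the settings proposed in the paper.

import random

def ga_set_cover(universe, subsets, pop_size=40, generations=200, pmut=0.02):
    """Pick a cheap subset of prime implicants (subsets) covering all minterms (universe)."""
    n = len(subsets)

    def cost(chrom):
        covered = set().union(*[subsets[i] for i in range(n) if chrom[i]])
        return sum(chrom) + 10 * len(universe - covered)   # heavy penalty for uncovered minterms

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]                       # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]                       # one-point crossover
            children.append([1 - g if random.random() < pmut else g for g in child])
        pop = parents + children
    best = min(pop, key=cost)
    return [i for i in range(n) if best[i]]

# Toy example: minterms 0..5 and four prime implicants given as the minterms they cover.
implicants = [{0, 1}, {1, 2, 3}, {3, 4}, {4, 5}]
print(ga_set_cover(set(range(6)), implicants))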
Abstract: In this paper we analyze the application of a formal proof system to the discrete logarithm problem used in public-key cryptography. That is, we explore a computer verification of the ElGamal encryption scheme with the formal proof system Isabelle/HOL. More precisely, the functional correctness of this algorithm is formally verified with computer support. In addition, we present a formalization of the DSA signature scheme in the Isabelle/HOL system. We show that this scheme is correct, which is a necessary condition for the usefulness of any cryptographic signature scheme.
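The functional correctness property that such a verification establishes can be illustrated by a toy Python check of ElGamal (Dec(Enc(m)) = m for every message), using deliberately small, insecure parameters; this is only an executable illustration, not the Isabelle/HOL formalization.

import random

p, g = 467, 2                      # small prime and base, for illustration only
x = random.randrange(2, p - 1)     # private key
h = pow(g, x, p)                   # public key

def encrypt(m):
    k = random.randrange(2, p - 1)          # ephemeral key
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2):
    s = pow(c1, x, p)                        # shared secret g^(k*x)
    return (c2 * pow(s, p - 2, p)) % p       # multiply by s^(-1) mod p (Fermat)

for m in range(1, p):
    assert decrypt(*encrypt(m)) == m         # functional correctness on every message
print("Dec(Enc(m)) = m holds for all m in 1..", p - 1)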
Abstract: The development of many measurement and inspection systems for products based on real-time image processing cannot be carried out entirely in a laboratory, due to the size or the temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only after a large set of product images is available can the development of the real-time image processing algorithms for measurement or inspection of the products be accomplished under realistic conditions. Finally, the recording service is turned off or eliminated and the system operates only with the real-time services for the acquisition and processing of the images. This article presents a systematic performance evaluation of the image compression algorithms currently available for implementing a real-time recording service. The results allow a trade-off to be established between the reduction or compression of the image size and the CPU time required to reach that compression level.
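A minimal Python sketch of such an evaluation, assuming the Pillow library and a synthetic image as a stand-in for acquired product images, measures compressed size against CPU time for a few formats; the formats and quality settings are illustrative, not the algorithms evaluated in the article.

import io, time
import numpy as np
from PIL import Image

# Synthetic stand-in for an acquired product image.
frame = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

def benchmark(fmt, **save_kwargs):
    buf = io.BytesIO()
    t0 = time.process_time()                        # CPU time, not wall time
    frame.save(buf, format=fmt, **save_kwargs)
    cpu = time.process_time() - t0
    return len(buf.getvalue()), cpu

for fmt, kwargs in [("PNG", {}), ("JPEG", {"quality": 90}), ("JPEG", {"quality": 50})]:
    size, cpu = benchmark(fmt, **kwargs)
    print(f"{fmt} {kwargs}: {size} bytes, {cpu:.4f} s CPU")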
Abstract: The most reliable and accurate description of the actual behavior of a software system is its source code. However, not all questions about the system can be answered directly by resorting to this repository of information. What the reverse engineering methodology aims at is the extraction of abstract, goal-oriented "views" of the system, able to summarize relevant properties of the computation performed by the program. Concentrating on reverse engineering, we model the C++ files by designing a translator.
Abstract: For more than a decade, many proposals and standards have been designed to deal with mobility issues; however, there are still serious limitations in basing solutions on them. In this paper we discuss the possibility of handling mobility at the application layer. We do this while revisiting the conventional implementation of the Two-Phase Commit (2PC) protocol, which is a fundamental asset of transactional technology for ensuring the consistent commitment of distributed transactions. The solution is based on an execution framework providing an efficient extension that is aware of mobility and preserves the 2PC principle.
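For reference, a minimal Python sketch of the classical 2PC coordinator logic that such an extension builds on is given below; the participant class is a hypothetical in-memory stand-in, and the mobility-aware extension itself is not shown.

class Participant:
    def __init__(self, vote_yes=True):
        self.vote_yes, self.state = vote_yes, "INIT"
    def prepare(self):                 # phase 1: become prepared and vote YES/NO
        self.state = "PREPARED"
        return self.vote_yes
    def commit(self):                  # phase 2: apply the global decision
        self.state = "COMMITTED"
    def abort(self):
        self.state = "ABORTED"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]     # phase 1: collect all votes
    if all(votes):                                  # unanimous YES -> commit everywhere
        for p in participants:
            p.commit()
        return "COMMIT"
    for p in participants:                          # any NO (or unreachable) -> abort
        p.abort()
    return "ABORT"

print(two_phase_commit([Participant(), Participant()]))         # COMMIT
print(two_phase_commit([Participant(), Participant(False)]))    # ABORT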
Abstract: Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown, hidden and interesting patterns from a huge amount of data stored in databases. Data mining is the stage of the KDD process that aims at selecting and applying a particular data mining algorithm to extract interesting and useful knowledge. Data mining methods are expected to find patterns in databases that are interesting according to some measures. It is of vital importance to define good measures of interestingness that would allow the system to discover only the useful patterns. Measures of interestingness are divided into objective and subjective measures. Objective measures are those that depend only on the structure of a pattern and can be quantified by using statistical methods, while subjective measures depend on the subjectivity and understanding of the user who examines the patterns. These subjective measures are further divided into actionable, unexpected and novel. A key issue facing the data mining community is how to take action on the basis of discovered knowledge. For a pattern to be actionable, the user's subjectivity is captured by providing his/her background knowledge about the domain. Here, we consider the actionability of the discovered knowledge as a measure of interestingness and raise important issues which need to be addressed to discover actionable knowledge.
Abstract: Grid computing is growing rapidly in distributed
heterogeneous systems for utilizing and sharing large-scale resources
to solve complex scientific problems. Scheduling is a central topic
for achieving high performance in grid environments. It aims
to find a suitable allocation of resources for each job. A typical
problem which arises during this task is the scheduling decision:
how to utilize processors effectively so as to minimize the tardiness
of a job when it is scheduled. This paper, therefore,
addresses the problem by developing a general framework for grid
scheduling that uses dynamic information and an ant colony
optimization algorithm to improve the scheduling decision. The
performance of various dispatching rules such as First Come First
Served (FCFS), Earliest Due Date (EDD), Earliest Release Date
(ERD), and an Ant Colony Optimization (ACO) is compared.
Moreover, the benefit of using Ant Colony Optimization for
performance improvement of grid scheduling is also discussed. It
is found that the scheduling system using an Ant Colony
Optimization algorithm can efficiently and effectively allocate jobs
to proper resources.
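To make the dispatching rules concrete, the following Python sketch evaluates FCFS, EDD and ERD by total tardiness on a single resource with made-up jobs; in this simplified static single-queue model FCFS and ERD coincide, and the ACO algorithm itself is omitted.

def total_tardiness(jobs, key):
    t, tardiness = 0, 0
    for release, due, proc in sorted(jobs, key=key):
        t = max(t, release) + proc           # wait for release, then process
        tardiness += max(0, t - due)
    return tardiness

jobs = [(0, 10, 4), (1, 6, 3), (2, 14, 5), (3, 8, 2)]   # (release, due date, processing time)
rules = {
    "FCFS": lambda j: j[0],   # First Come First Served: by arrival/release order
    "EDD":  lambda j: j[1],   # Earliest Due Date
    "ERD":  lambda j: j[0],   # Earliest Release Date (same key as FCFS in this model)
}
for name, key in rules.items():
    print(name, total_tardiness(jobs, key))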
Abstract: In this paper, we investigate the traffic assignment problem by means of a chromosome-learning-based path-finding method in simulation, which models driver behavior in the within-day process. By simply combining and altering the traffic-route chromosomes, the driver at an intersection chooses the next route. Various crossover and mutation rules are proposed, with extensive examples.
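The abstract does not specify the operators; the following Python sketch shows one plausible crossover (splicing two routes at a shared node) and mutation (re-routing the tail by a random walk) over a made-up road network, purely as an illustration and not as the authors' rules.

import random

def crossover(route_a, route_b):
    shared = [n for n in route_a[1:-1] if n in route_b[1:-1]]   # common intermediate nodes
    if not shared:
        return route_a[:]
    node = random.choice(shared)
    return route_a[:route_a.index(node)] + route_b[route_b.index(node):]

def mutate(route, neighbours, destination):
    i = random.randrange(1, len(route) - 1)
    node, tail = route[i], [route[i]]
    while node != destination:                   # random walk from the chosen node to the destination
        node = random.choice(neighbours[node])
        tail.append(node)
    return route[:i] + tail

neighbours = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}   # tiny acyclic road network
print(crossover([1, 2, 4, 5], [1, 3, 4, 5]))
print(mutate([1, 2, 4, 5], neighbours, 5))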
Abstract: In this paper we propose a framework for
multisensor intrusion detection called the Fuzzy Agent-Based Intrusion
Detection System. A unique feature of this model is that the agent
uses data from multiple sensors and fuzzy logic to process log
files. Use of this feature reduces the overhead in a distributed
intrusion detection system. We have developed an agent
communication architecture and provide a prototype
implementation. This paper also discusses the issues of combining
intelligent agent technology with the intrusion detection domain.
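As an illustration of the kind of fuzzy scoring an agent could apply to log-file features, the following Python sketch combines two invented features with a fuzzy AND; the features, membership breakpoints and rule are assumptions, not the system's actual rules.

def rising(x, a, b):
    """Membership that rises linearly from 0 at a to 1 at b (and stays 1 above b)."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def alert_level(failed_logins_per_min, bytes_out_mb_per_min):
    high_logins = rising(failed_logins_per_min, 5, 20)     # "many failed logins"
    high_exfil = rising(bytes_out_mb_per_min, 10, 100)     # "heavy outbound traffic"
    # Rule: alert IF logins are high AND outbound traffic is high (fuzzy AND = min).
    return min(high_logins, high_exfil)

print(alert_level(failed_logins_per_min=12, bytes_out_mb_per_min=60))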
Abstract: Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach. An important aspect here is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces. The features include the standard deviation, kurtosis and the Canny edge detector. We apply the method by analyzing the surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT). We use the Canberra distance metric for similarity comparison between the surface classes. Our database includes surface textures manufactured by three machining processes, namely milling, casting and shaping. The comparative study shows that the DT-CWT outperforms the DWT, giving a correct classification rate of 91.27% with the Canberra distance metric.
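A minimal Python sketch of the DWT part of such a pipeline, assuming the PyWavelets and SciPy libraries, computes standard deviation and kurtosis per detail band and compares two synthetic textures with the Canberra distance; the DT-CWT and Canny-based features of the full method are omitted, and the wavelet and level count are assumptions.

import numpy as np
import pywt
from scipy.stats import kurtosis

def dwt_features(image, wavelet="db2", levels=3):
    feats = []
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    for detail in coeffs[1:]:                     # (cH, cV, cD) per decomposition level
        for band in detail:
            feats += [np.std(band), kurtosis(band, axis=None)]
    return np.array(feats)

def canberra(a, b):
    return np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + 1e-12))

surface_a = np.random.rand(64, 64)                # stand-ins for surface texture images
surface_b = np.random.rand(64, 64)
print(canberra(dwt_features(surface_a), dwt_features(surface_b)))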
Abstract: This paper proposes a new facial feature extraction approach, the Walsh-Hadamard Transform (WHT). This approach is based on the correlation between local pixels of the face image. Its primary advantage is the simplicity of its computation. The paper compares the proposed approach, WHT, which was traditionally used in data compression, with two other well-known approaches: Principal Component Analysis (PCA) and the Discrete Cosine Transform (DCT), using the face database of the Olivetti Research Laboratory (ORL). In spite of its simple computation, the proposed algorithm (WHT) gave results very close to those obtained by PCA and DCT. This paper initiates research into the WHT and the family of frequency transforms and examines their suitability for feature extraction in face recognition applications.
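For illustration, a minimal Python sketch of the (non-normalised) fast Walsh-Hadamard transform applied to a flattened image block is given below; the block size and the use of low-index coefficients as features are assumptions, not the paper's exact procedure.

import numpy as np

def walsh_hadamard(x):
    """In-place fast Walsh-Hadamard transform; the length of x must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]   # butterfly step
        h *= 2
    return x

block = np.random.rand(8, 8).ravel()        # stand-in for an 8x8 face-image block
coeffs = walsh_hadamard(block)              # low-index coefficients can serve as features
print(coeffs[:8])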
Abstract: Instead of representing only individual cognition, population cognition is represented using artificial neural networks whilst maintaining individuality. This population network trains continuously, simulating adaptation. An implementation of two coexisting populations is compared to the Lotka-Volterra model of predator-prey interaction. Applications include multi-agent systems such as artificial life or computer games.
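For reference, the Lotka-Volterra predator-prey model used as the point of comparison can be sketched in a few lines of Python with forward-Euler integration; the parameter values below are arbitrary illustrative choices.

def lotka_volterra(prey, pred, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                   dt=0.01, steps=5000):
    history = []
    for _ in range(steps):
        dprey = alpha * prey - beta * prey * pred       # prey growth minus predation
        dpred = delta * prey * pred - gamma * pred      # predator growth minus death
        prey, pred = prey + dt * dprey, pred + dt * dpred
        history.append((prey, pred))
    return history

trajectory = lotka_volterra(prey=10.0, pred=5.0)
print(trajectory[-1])   # the two populations oscillate rather than settling to a fixed point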