Abstract: The objective of this paper is twofold. First, to develop a formal framework for planning for mobile agents: a logical language based on temporal logic is proposed that can express a class of tasks that often arise in network management. Second, to design a planning algorithm for such tasks. Although there has been a great deal of research on mobile agents, little work has been done on incorporating planning ideas into such agents. This paper makes an attempt in this direction by undertaking a theoretical study of finding plans for mobile agents. A planning algorithm, based on the paradigm of mobile computing, is proposed, and its space, time, and communication complexity is analyzed. The algorithm is illustrated by working out an example in detail.
Abstract: Statistical selection procedures are used to select the best simulated system from a finite set of alternatives. In this paper, we present a procedure that can be used to select the best system when the number of alternatives is large. The proposed procedure combines Ranking and Selection with Ordinal Optimization. To improve the performance of Ordinal Optimization, the Optimal Computing Budget Allocation (OCBA) technique is used to determine the best simulation lengths for all simulated systems and to reduce the total computation time. We also discuss the effect of increasing the number of simulation samples in the combined procedure. The results of a numerical illustration clearly show the effect of increasing the number of simulation samples on the proposed combined selection procedure.
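As a rough illustration of the budget-splitting idea, the following Python sketch applies the classic OCBA allocation ratios to a handful of designs; it is not taken from the paper, and the means, standard deviations and extra budget are invented pilot values (smaller mean is assumed better).

```python
import numpy as np

def ocba_allocation(means, stds, total_budget):
    """Split additional simulation replications across designs with the
    classic OCBA ratios (smaller sample mean assumed better)."""
    means = np.asarray(means, dtype=float)
    stds = np.maximum(np.asarray(stds, dtype=float), 1e-12)
    best = int(np.argmin(means))                 # current best design
    delta = means - means[best]
    delta[best] = np.inf                         # best gets its own rule below
    delta = np.maximum(delta, 1e-12)             # guard against ties

    ratio = (stds / delta) ** 2                  # N_i proportional to (sigma_i / delta_i)^2
    ratio[best] = stds[best] * np.sqrt(np.sum(ratio ** 2 / stds ** 2))  # N_b rule
    weights = ratio / ratio.sum()
    alloc = np.floor(weights * total_budget).astype(int)
    alloc[best] += total_budget - alloc.sum()    # hand the rounding remainder to the best
    return alloc

# toy usage: 5 designs, crude pilot estimates, 100 extra replications to distribute
print(ocba_allocation([1.0, 1.2, 1.5, 2.0, 2.1], [0.3, 0.4, 0.2, 0.5, 0.3], 100))
```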
Abstract: This paper presents a VLSI design approach for high-speed, real-time 2-D Discrete Wavelet Transform (DWT) computation. The proposed architecture, based on a new and fast convolution approach, reduces the hardware complexity and shortens the critical path to a single multiplier delay. Furthermore, an advanced two-dimensional DWT implementation, with an efficient memory area, is designed to produce one output in every clock cycle, so that a very high speed is attained. The system is verified, using the JPEG2000 filter coefficients, on a Xilinx Virtex-II Field Programmable Gate Array (FPGA) device without accessing any external memory. The resulting computing rate is up to 270 Msamples/s, and the (9,7) 2-D wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out memory) for a 256×256 image. In this way, the developed design requires little memory and provides very high-speed processing as well as high PSNR quality.
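For readers unfamiliar with the underlying computation, the sketch below shows one level of a separable, convolution-based 2-D DWT in Python; it is a software reference only, and the simple Haar filter pair stands in for the JPEG2000 (9,7) filters used in the paper.

```python
import numpy as np

def analysis_1d(x, lo, hi):
    """Convolve a 1-D signal with the low/high-pass filters and downsample by 2
    (periodic extension keeps the output length at len(x)//2)."""
    n = len(x)
    xp = np.concatenate([x, x[:len(lo) - 1]])          # periodic extension
    a = np.convolve(xp, lo, mode='valid')[::2]
    d = np.convolve(xp, hi, mode='valid')[::2]
    return a[:n // 2], d[:n // 2]

def dwt2_level(image, lo, hi):
    """One level of a separable 2-D DWT: filter along rows, then along columns,
    producing the LL, LH, HL and HH sub-bands."""
    rows_lo, rows_hi = [], []
    for row in image:
        a, d = analysis_1d(row, lo, hi)
        rows_lo.append(a)
        rows_hi.append(d)
    rows_lo, rows_hi = np.array(rows_lo), np.array(rows_hi)

    def filter_columns(mat):
        outs_a, outs_d = [], []
        for col in mat.T:
            a, d = analysis_1d(col, lo, hi)
            outs_a.append(a)
            outs_d.append(d)
        return np.array(outs_a).T, np.array(outs_d).T

    LL, LH = filter_columns(rows_lo)
    HL, HH = filter_columns(rows_hi)
    return LL, LH, HL, HH

# Haar analysis filters stand in here for the (9,7) JPEG2000 pair
lo = np.array([1.0, 1.0]) / np.sqrt(2.0)
hi = np.array([1.0, -1.0]) / np.sqrt(2.0)
img = np.random.rand(256, 256)
LL, LH, HL, HH = dwt2_level(img, lo, hi)
print(LL.shape, LH.shape, HL.shape, HH.shape)   # four 128x128 sub-bands
```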
Abstract: One of the most widely used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from a program are considered to be false (i.e. a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not appropriate in all circumstances in which information is handled; that is, the well-founded semantics, as conventionally defined, behaves inadequately in various cases. The solution we adopt in this paper is to extend the well-founded semantics so that it can also be based on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical and paraconsistent assumptions, used to complete missing information in a program. Our semantics, called the extended well-founded semantics, also expresses imperfect information, considered to be missing/incomplete, uncertain and/or inconsistent, by using bilattices as multivalued logics. We provide a method for computing the extended well-founded semantics and show that the Kripke-Kleene semantics is captured by considering a skeptical assumption. We also show that our semantics can be computed in polynomial time.
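As background for the construction being extended, here is a minimal Python sketch of the classical well-founded semantics for propositional normal programs, computed with the standard alternating fixpoint; the rule encoding and the example program are invented, and the paper's optimistic, pessimistic and paraconsistent variants are not shown.

```python
# Each rule is (head, positive_body, negative_body); atoms are strings.
def gamma(rules, interp):
    """Least model of the reduct of the program w.r.t. interp
    (rules whose negated atoms intersect interp are discarded)."""
    reduct = [(h, pos) for (h, pos, neg) in rules if not (set(neg) & interp)]
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in reduct:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def well_founded(rules):
    """Alternating fixpoint: true atoms are the least fixpoint of gamma o gamma,
    false atoms are everything outside gamma(true atoms)."""
    atoms = {h for (h, _, _) in rules} | {a for (_, p, n) in rules for a in p + n}
    true_set = set()
    while True:
        nxt = gamma(rules, gamma(rules, true_set))
        if nxt == true_set:
            break
        true_set = nxt
    false_set = atoms - gamma(rules, true_set)
    undefined = atoms - true_set - false_set
    return true_set, false_set, undefined

# p :- not q.   q :- not p.   r :- not s.   (s never occurs as a head)
prog = [("p", [], ["q"]), ("q", [], ["p"]), ("r", [], ["s"])]
print(well_founded(prog))   # r true; s false; p, q undefined
```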
Abstract: Advances in computing applications in recent years have increased the demand for more flexible scheduling models that support QoS. Moreover, in practical applications, partially violated temporal constraints can be tolerated if the violations follow a certain distribution, so the traditional Liu and Layland model needs to be extended to accommodate these circumstances. Two such extensions are the (m, k)-firm model and the Window-Constrained model. This paper studies weakly hard real-time constraints and their combination to support QoS. The fact that a practical application can tolerate some violations of its temporal constraints under a certain distribution is exploited to support adaptive QoS on open real-time systems. The experimental results show that these approaches are effective compared with traditional scheduling algorithms.
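To make the weakly hard constraint concrete, the following small Python check verifies an (m, k)-firm requirement, i.e. that at least m deadlines are met in every window of k consecutive job instances; the trace and the (2, 3) constraint are invented examples, not data from the paper.

```python
from collections import deque

def mk_firm_ok(deadline_met_history, m, k):
    """Check an (m, k)-firm constraint: in every window of k consecutive
    job instances, at least m must have met their deadlines."""
    window = deque(maxlen=k)
    for met in deadline_met_history:
        window.append(met)
        if len(window) == k and sum(window) < m:
            return False
    return True

# toy trace: 1 = deadline met, 0 = missed; constraint (m, k) = (2, 3)
print(mk_firm_ok([1, 1, 0, 1, 1, 0, 1], m=2, k=3))   # True: every 3-window has >= 2 hits
print(mk_firm_ok([1, 0, 0, 1, 1, 1, 1], m=2, k=3))   # False: the window (1, 0, 0) has only 1
```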
Abstract: For complete support of Quality of Service in Grid computing, it is preferable that the environment itself predict the resource requirements of a job using dedicated methods. Exact and correct prediction leads to exact matching of required resources with available resources. After the execution of each job, the resources used are saved in an active database named "History". First, some attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar previously executed jobs are retrieved from "History", and the resource requirements are predicted using statistical measures such as linear regression or averaging. The new idea in this research is based on the active database and centralized history maintenance. Implementation and testing of the proposed architecture yields prediction accuracies of 96.68% for CPU usage, 91.29% for memory usage and 89.80% for bandwidth usage.
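A minimal sketch of this kind of history-based prediction is given below; the attribute vectors, the resource fields and the use of a plain nearest-neighbour average are all illustrative assumptions, not the paper's similarity algorithm.

```python
import numpy as np

# Hypothetical "History" records: job attributes plus the resources actually used.
history = [
    {"attrs": np.array([4, 1.0, 200.0]), "cpu": 0.82, "mem": 512, "bw": 3.1},
    {"attrs": np.array([8, 2.0, 450.0]), "cpu": 0.91, "mem": 1024, "bw": 5.4},
    {"attrs": np.array([2, 0.5, 120.0]), "cpu": 0.40, "mem": 256, "bw": 1.2},
]

def predict_resources(job_attrs, history, k=2):
    """Predict CPU/memory/bandwidth usage of a new job by averaging the
    k most similar previously executed jobs (Euclidean similarity on attributes)."""
    dists = [np.linalg.norm(rec["attrs"] - job_attrs) for rec in history]
    nearest = sorted(range(len(history)), key=lambda i: dists[i])[:k]
    return {
        key: float(np.mean([history[i][key] for i in nearest]))
        for key in ("cpu", "mem", "bw")
    }

# hypothetical job attributes, e.g. number of processes, input size (GB), expected runtime
new_job = np.array([6, 1.5, 300.0])
print(predict_resources(new_job, history))
```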
Abstract: This paper shows that the application of probabilistic-statistical methods is unfounded at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited and uncertain. Therefore, the efficiency of applying the new Soft Computing technology, namely Fuzzy Logic and Neural Network methods, at these diagnosing stages is considered. Fuzzy multiple linear and nonlinear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research on the changes in skewness and kurtosis values shows that the distributions of GTE operating parameters have a fuzzy character; hence consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for the preliminary identification of the engines' technical condition. Research on the changes in correlation coefficient values also reveals their fuzzy character, and therefore the application of Fuzzy Correlation Analysis results is proposed for model selection. To check model adequacy, the Fuzzy Multiple Correlation Coefficient of the Fuzzy Multiple Regression is considered. When sufficient information is available, a recurrent algorithm for identifying the aviation GTE technical condition (using Hard Computing technology) is proposed, based on measurements of the input and output parameters of the multiple linear and nonlinear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical condition. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.
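Since the analysis hinges on tracking skewness and kurtosis of engine parameters, the short Python sketch below computes both coefficients for a synthetic exhaust-gas-temperature series; the data and the parameter choice are invented, and the fuzzy versions of the coefficients used in the paper are not reproduced.

```python
import numpy as np

def skewness(x):
    """Sample skewness: third standardized central moment."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 3) / s ** 3

def kurtosis(x):
    """Sample kurtosis (not excess): fourth standardized central moment."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 4) / s ** 4

# Hypothetical exhaust-gas-temperature readings over successive flights;
# drifting coefficients would indicate departure from normality.
rng = np.random.default_rng(0)
egt = 600 + 15 * rng.standard_normal(500) + np.linspace(0, 10, 500)
print(round(skewness(egt), 3), round(kurtosis(egt), 3))   # near 0 and 3 for near-normal data
```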
Abstract: Recently, there have been considerable efforts towards convergence between P2P and Grid computing in order to reach a solution that takes the best of both worlds by exploiting the advantages that each offers. Bringing the peer-to-peer model to the services of the Grid promises to eliminate bottlenecks and to ensure greater scalability, availability, and fault tolerance. The Grid Information Service (GIS) directly influences the quality of service of grid platforms. Most of the proposed solutions for decentralizing the GIS are based on completely flat overlays. The main contributions of this paper are the investigation of a novel resource discovery framework for Grid implementations based on a hierarchy of structured peer-to-peer overlay networks, and the introduction of a discovery algorithm that utilizes the proposed framework. The framework's performance is validated via simulation. Experimental results show that the proposed organization has the advantage of being scalable while providing fault isolation, effective bandwidth utilization, and hierarchical access control. In addition, it leads to a reliable, guaranteed sub-linear search that returns results within a bounded interval of time and with a smaller amount of generated traffic within each domain.
Abstract: Projection methods, usually viewed as methods for computing eigenvalues, can also be used to estimate pseudospectra. This paper proposes a class of projection methods for computing the pseudospectra of large-scale matrices, comprising an orthogonal projection method and an oblique projection method. This possibility may be of practical importance in applications involving large, highly nonnormal matrices. Numerical algorithms are given, and numerical experiments illustrate the efficiency of the new algorithms.
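To convey the flavour of projection-based pseudospectra estimation, the numpy sketch below tests grid points against the rectangular Hessenberg matrix produced by an Arnoldi projection; this is a generic illustration, not the paper's specific orthogonal or oblique methods, and the test matrix, grid and tolerance are invented.

```python
import numpy as np

def arnoldi(A, m):
    """Arnoldi iteration: returns V (n x (m+1)) with orthonormal columns and the
    (m+1) x m Hessenberg matrix H such that A @ V[:, :m] = V @ H."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                       # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def projected_pseudospectrum(A, grid, m=30, eps=1e-2):
    """Mark grid points z as lying in the estimated eps-pseudospectrum when
    sigma_min(H - z * I_rect) <= eps on the projected (m+1) x m problem."""
    _, H = arnoldi(A, m)
    I_rect = np.eye(m + 1, m)
    inside = []
    for z in grid:
        smin = np.linalg.svd(H - z * I_rect, compute_uv=False)[-1]
        if smin <= eps:
            inside.append(z)
    return inside

# toy usage on a nonnormal (strictly upper triangular plus shift) matrix
n = 200
A = np.triu(np.random.default_rng(1).standard_normal((n, n)), 1) - np.eye(n)
pts = [complex(x, y) for x in np.linspace(-2, 1, 15) for y in np.linspace(-1, 1, 11)]
print(len(projected_pseudospectrum(A, pts)))
```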
Abstract: Statistical learning theory was developed by Vapnik. It is a learning theory based on the Vapnik-Chervonenkis dimension, and it has been widely used as a good analytical tool for learning models. In general, learning theories suffer from several problems, among them local optima and over-fitting. Statistical learning theory shares these problems because the kernel type, the kernel parameters, and the regularization constant C are determined subjectively, by the art of the researcher. We therefore propose an evolutionary statistical learning theory to settle these problems of the original statistical learning theory. Our theory is constructed by combining evolutionary computing with statistical learning theory. We verify the improved performance of the evolutionary statistical learning theory using data sets from the KDD Cup.
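As a rough sketch of evolving these subjectively chosen quantities, the Python example below (assuming scikit-learn is available) uses a small evolutionary loop to tune the C and gamma parameters of an RBF support vector machine by cross-validated accuracy; the dataset, population size and variation operators are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset, not the KDD Cup data
rng = np.random.default_rng(0)

def fitness(ind):
    """Cross-validated accuracy of an RBF SVM with C = 2**ind[0], gamma = 2**ind[1]."""
    clf = SVC(kernel="rbf", C=2.0 ** ind[0], gamma=2.0 ** ind[1])
    return cross_val_score(clf, X, y, cv=3).mean()

def evolve(pop_size=12, generations=8, bounds=(-5.0, 10.0)):
    pop = rng.uniform(bounds[0], bounds[1], size=(pop_size, 2))   # genes: (log2 C, log2 gamma)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = (a + b) / 2 + rng.normal(0, 0.5, size=2)      # blend crossover + mutation
            children.append(np.clip(child, bounds[0], bounds[1]))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

best, acc = evolve()
print("log2(C), log2(gamma) =", best, "cv accuracy =", round(acc, 4))
```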
Abstract: The paper proposes an approach using a genetic algorithm for computing region-based image similarity. An image is represented by a set of segmented regions reflecting its color and texture properties, and is thus associated with a family of image features corresponding to those regions. The resemblance of two images is then defined as the overall similarity between two families of image features, quantified by a similarity measure that integrates the properties of all the regions in the images. A genetic algorithm is applied to determine the most plausible matching. The performance of the proposed method is illustrated using examples from a database of general-purpose images and is shown to produce good results.
Abstract: The heterogeneity of solid waste characteristics, as well as the complex processes taking place within the landfill ecosystem, motivated the implementation of soft computing methodologies such as artificial neural networks (ANN), fuzzy logic (FL), and their combination. The present work uses a hybrid ANN-FL model that employs knowledge-based FL to describe the process qualitatively and implements the learning algorithm of ANN to optimize the model parameters. The model was developed to simulate and predict the landfill gas production at a given time based on operational parameters. The experimental data used were compiled from lab-scale experiments that involved various operating scenarios. The developed model was validated and statistically analyzed using an F-test, linear regression between actual and predicted data, and mean squared error measures. Overall, the simulated landfill gas production rates demonstrated reasonable agreement with the actual data. The discussion focuses on the effect of the size of the training datasets and the number of training epochs.
Abstract: A neuron can emit spikes on an irregular time basis, and averaging over a certain time window would discard a great deal of information. It is known that in the context of fast information processing there is not sufficient time to sample an average firing rate of the spiking neurons. The present work shows that spiking neurons are capable of computing radial basis functions by storing the relevant information in the neurons' delays. Another fundamental finding of this research is that using overlapping receptive fields to encode the data patterns increases the network's clustering capacity. The clustering algorithm discussed here is interesting from both a computer science and a neuroscience perspective.
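The receptive-field encoding mentioned above can be sketched in a few lines of Python: a scalar value is passed through overlapping Gaussian receptive fields and each field's activation is converted into a spike time, with stronger activation firing earlier. The field centres, width and time scale are invented for illustration.

```python
import numpy as np

def receptive_field_spike_times(value, centers, sigma, t_max=10.0):
    """Population coding with overlapping Gaussian receptive fields:
    a scalar input becomes one spike time per encoding neuron, where
    stronger activation means an earlier spike."""
    activation = np.exp(-0.5 * ((value - centers) / sigma) ** 2)   # in (0, 1]
    return t_max * (1.0 - activation)                              # high activation -> early spike

# encode values in [0, 1] with 6 overlapping fields
centers = np.linspace(0.0, 1.0, 6)
sigma = 1.5 * 0.5 * (centers[1] - centers[0])                      # widths chosen so fields overlap
for v in (0.15, 0.5, 0.9):
    print(v, np.round(receptive_field_spike_times(v, centers, sigma), 2))
```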
Abstract: More recent satellite projects/programs make extensive use of real-time embedded systems. 16-bit processors that meet the Mil-Std-1750 standard architecture have been used in on-board systems, and most space applications have been written in Ada. From a futuristic point of view, 32-bit/64-bit processors are needed in the area of spacecraft computing, and therefore an effort in the study and survey of 64-bit architectures for space applications is desirable. This will also result in significant technology development in terms of VLSI and software tools for Ada (as the legacy code is in Ada).
There are several basic requirements for a special processor for this purpose. They include Radiation Hardened (RadHard) devices, very low power dissipation, compatibility with existing operational systems, scalable architectures for higher computational needs, reliability, higher memory and I/O bandwidth, predictability, a real-time operating system, and manufacturability of such processors. Further considerations include the selection of FPGA devices, the selection of EDA tool chains, design flow, partitioning of the design, pin count, performance evaluation, timing analysis, etc.
This project comprises a brief study of the 32-bit and 64-bit processors readily available in the market and the design/fabrication of a 64-bit RISC processor, named RISC MicroProcessor, with the added functionality of an extended double-precision floating-point unit and a 32-bit signal processing unit acting as co-processors. In this paper, we emphasize the ease and importance of using an open core (the OpenSparc T1 Verilog RTL) and open-source EDA tools such as Icarus to develop FPGA-based prototypes quickly. Commercial tools such as Xilinx ISE are also used for synthesis when appropriate.
Abstract: The economic dispatch problem is an optimization problem whose objective function is highly nonlinear, non-convex and non-differentiable, and it may have multiple local minima. Therefore, classical optimization methods may fail to converge or may get trapped in a local minimum. This paper presents a comparative study of four evolutionary algorithms, namely the genetic algorithm, bacterial foraging optimization, ant colony optimization and particle swarm optimization, for solving the economic dispatch problem. All the methods are tested on the IEEE 30-bus test system. Simulation results are presented to show the comparative performance of these methods.
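Of the four methods compared, particle swarm optimization is perhaps the simplest to illustrate; the Python sketch below applies a basic PSO to a three-unit economic dispatch with quadratic cost curves and a penalty on the power-balance constraint. The cost coefficients, unit limits and demand are invented and do not correspond to the IEEE 30-bus system used in the paper.

```python
import numpy as np

# Hypothetical 3-unit system: cost_i(P) = a*P^2 + b*P + c, limits in MW
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
p_min = np.array([10.0, 10.0, 10.0])
p_max = np.array([85.0, 80.0, 70.0])
demand = 150.0

def cost(P):
    """Total generation cost plus a quadratic penalty for power-balance violation."""
    return np.sum(a * P**2 + b * P + c) + 1e4 * (np.sum(P) - demand) ** 2

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(p_min, p_max, size=(n_particles, len(p_min)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost(x) for x in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, p_min, p_max)          # respect generator limits
        vals = np.array([cost(x) for x in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, cost(gbest)

P, f = pso()
print("dispatch:", np.round(P, 2), "total MW:", round(P.sum(), 2), "cost:", round(f, 2))
```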
Abstract: Distributed computing systems are usually considered the most suitable model for practical solutions of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. The analysis of this system shows that it reduces the time complexity of the read query to O(log(log N)) and of the update query to constant complexity, while the naive solution has a time complexity of O(log N) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation showed that the overhead resulting from the wiring and communication between the system fragments can be fairly neglected, which makes it possible in practice to reach the maximum speed-up offered by the proposed model.
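For reference, the naive single-processor baseline that the proposed distributed system improves upon is the ordinary Binary Indexed (Fenwick) Tree, sketched below in Python with O(log N) reads and updates; the example values are arbitrary.

```python
class BinaryIndexedTree:
    """Plain single-processor Fenwick tree: both prefix-sum reads and point
    updates take O(log N), the baseline complexity quoted in the abstract."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)           # 1-indexed

    def update(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)                    # climb to the next responsible node

    def prefix_sum(self, i):
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)                    # drop the lowest set bit
        return s

bit = BinaryIndexedTree(8)
for idx, val in enumerate([5, 3, 7, 9, 6, 4, 1, 2], start=1):
    bit.update(idx, val)
print(bit.prefix_sum(4))                     # 5 + 3 + 7 + 9 = 24
```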
Abstract: Very large and/or computationally complex optimization problems sometimes require parallel or high-performance computing to achieve reasonable computation times. One of the most popular and most complicated problems of this family is the Traveling Salesman Problem. In this paper we introduce a Branch & Bound based algorithm for the solution of such complicated problems, with the main focus on solving the symmetric traveling salesman problem. We reviewed several existing algorithms and concluded that a new algorithm is needed that yields an optimal or near-optimal solution. Based on the use of logarithmic sampling, the proposed algorithm was found to produce a relatively optimal solution and to deliver excellent performance compared with the traditional algorithms of this series.
Abstract: In a wide-area environment such as a Grid, data placement is an important aspect of distributed database systems. In this paper, we address the problem of the initial placement of non-replicated database fragments in a Grid architecture. We propose a graph-based approach that takes resource restrictions into account. The goal is to optimize the use of computing, storage and communication resources. The proposed approach is developed in two phases: in the first phase, we perform fragment grouping using knowledge about fragment dependencies, and in the second phase, we determine an efficient placement of the fragment groups on the Grid. We also show, via experimental analysis, that our approach gives solutions that are close to optimal for different databases and Grid configurations.
Abstract: Because of their low maintenance and robustness, induction motors have many applications in industry. Speed control of the induction motor is important for achieving maximum torque and efficiency. Various speed control techniques, such as Direct Torque Control, Sensorless Vector Control and Field Oriented Control, are discussed in this paper. A soft computing technique, fuzzy logic, is applied in this paper to the speed control of an induction motor in order to achieve maximum torque with minimum loss. The fuzzy logic controller is implemented using the Field Oriented Control technique, as it provides better control of motor torque with high dynamic performance. The motor model is designed, and the membership functions are chosen according to the parameters of the motor model. The simulated design is tested using various toolboxes in MATLAB. The results show that the efficiency and reliability of the proposed speed controller are good.
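To give a flavour of the fuzzy control element, the following toy Python controller maps a normalized speed error and its change to a torque-reference correction using triangular membership functions, a 3x3 rule table and singleton (Sugeno-style) outputs; the membership functions, rule base and scaling are illustrative guesses, not the controller designed in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_speed_controller(error, d_error):
    """Toy Sugeno-style fuzzy controller: inputs are the normalized speed error
    and its change; output is a torque-reference correction in [-1, 1]."""
    sets = {"NEG": (-2.0, -1.0, 0.0), "ZERO": (-1.0, 0.0, 1.0), "POS": (0.0, 1.0, 2.0)}
    out = {"NEG": -1.0, "ZERO": 0.0, "POS": 1.0}        # singleton consequents
    # rule table: rules[error_label][d_error_label] -> output label
    rules = {
        "NEG":  {"NEG": "NEG",  "ZERO": "NEG",  "POS": "ZERO"},
        "ZERO": {"NEG": "NEG",  "ZERO": "ZERO", "POS": "POS"},
        "POS":  {"NEG": "ZERO", "ZERO": "POS",  "POS": "POS"},
    }
    num = den = 0.0
    for e_lbl, e_mf in sets.items():
        for de_lbl, de_mf in sets.items():
            w = min(tri(error, *e_mf), tri(d_error, *de_mf))   # rule firing strength
            num += w * out[rules[e_lbl][de_lbl]]
            den += w
    return num / den if den > 0 else 0.0

# speed 10% below the reference, error shrinking slightly: small positive torque correction
print(round(fuzzy_speed_controller(error=0.1, d_error=-0.05), 3))
```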
Abstract: This paper deals with the combination of OSGi and cloud computing. Both technologies are mainly placed in the field of distributed computing. We therefore discuss how different approaches from different institutions work, and we compare the approaches with each other.