Abstract: This paper investigates the problem of sampling from transactional data streams. We introduce CFISDS, a content-based sampling algorithm that operates on a landmark window model of data streams and preserves a more informative sample in the sample space. The algorithm, which is based on closed frequent itemset mining, first initializes a concept lattice from the initial data and then updates the lattice structure using an incremental mechanism. The incremental mechanism inserts, updates, and deletes nodes in the concept lattice in a batch manner. The presented algorithm extracts the final samples on user demand. Experimental results show the accuracy of CFISDS on synthetic and real datasets, although CFISDS is not faster than existing sampling algorithms such as Z and DSS.
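The algorithm builds on closed frequent itemset mining. As background, here is a brute-force sketch of the closedness test (an itemset is closed when no proper superset has the same support); the toy transactions are invented for illustration and this is not CFISDS itself:

```python
from itertools import combinations

def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    s = set(itemset)
    return sum(1 for t in transactions if s <= set(t)) / len(transactions)

def closed_frequent_itemsets(transactions, min_sup):
    """Enumerate frequent itemsets, then keep the closed ones:
    those with no proper superset of equal support (brute force)."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            sup = support(cand, transactions)
            if sup >= min_sup:
                frequent[frozenset(cand)] = sup
    closed = {}
    for iset, sup in frequent.items():
        if not any(iset < other and abs(sup - osup) < 1e-12
                   for other, osup in frequent.items()):
            closed[iset] = sup
    return closed

transactions = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}]
result = closed_frequent_itemsets(transactions, min_sup=0.5)
```

On these three transactions the frequent itemsets {b}, {c} are absorbed by their equal-support supersets {a, b}, {a, c}, illustrating why the closed family is a compact lossless summary suited to maintaining a sample.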
Abstract: This study sought a desirable direction for sidewalk planning in Korea by establishing the concepts of walking and pedestrian space and by analyzing advanced precedents at home and abroad. Based on precedent studies and relevant laws, regulations, and systems, it followed a sequential process: first, deriving design elements from the functions and characteristics of sidewalks, clustering similar elements by characteristic, and sampling representative characteristics to arrange them hierarchically; then, analyzing their significance via a first questionnaire survey, and the relative weights and priorities of each element via the Analytic Hierarchy Process (AHP); finally, based on the analysis results, establishing a framework for suggesting policy directions to improve the pedestrian environment of sidewalks in urban commercial districts for the future planning and design of pedestrian space.
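The AHP step derives relative weights from pairwise comparisons of elements. A minimal sketch with a hypothetical 3x3 comparison matrix (the values are invented, not the study's survey data):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three design elements on
# Saaty's 1-9 scale; A[i, j] = importance of element i over element j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# AHP weights: the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); Saaty recommends
# CI / RI < 0.1 (random index RI = 0.58 for n = 3) to accept the matrix.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
```

The resulting weights rank the elements and can be propagated down the hierarchy by multiplying the weights at each level.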
Abstract: Logic-based methods for learning from structured data are limited with respect to handling large search spaces, preventing large-sized substructures from being considered by the resulting classifiers. A
novel approach to learning from structured data is introduced that
employs a structure transformation method, called finger printing, for
addressing these limitations. The method, which generates features
corresponding to arbitrarily complex substructures, is implemented in
a system called DIFFER. The method is demonstrated to perform comparably to an existing state-of-the-art method on some benchmark data sets without requiring restrictions on the search space.
Furthermore, learning from the union of features generated by finger
printing and the previous method outperforms learning from each
individual set of features on all benchmark data sets, demonstrating
the benefit of developing complementary, rather than competing,
methods for structure classification.
Abstract: We present here the results of a comparative study of some techniques, available in the literature, related to the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here belongs to the data mining field, namely the K-nearest neighbors algorithm (KNN), while the rest are related purely to the information retrieval field and fall under the purview of three major axes: query shifting, feature weighting, and optimization of the parameters of the similarity metric. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm, referred to as incremental KNN, which differs from the original version in that, besides the influence of the seeds, the rating of the current target image is also influenced by the images already rated. The results presented here have been obtained
after experiments conducted on the Wang database for one iteration
and utilizing color moments on the RGB space. This compact
descriptor, Color Moments, is adequate for the efficiency purposes
needed in the case of interactive systems. The results obtained allow us to claim that the proposed algorithm yields good results; it even outperforms a wide range of techniques available in the literature.
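The incremental idea (each newly rated image feeds back into subsequent predictions alongside the seeds) can be sketched as follows; the distance-weighted voting rule and feature vectors are illustrative assumptions, not the paper's exact formulation:

```python
import math

def knn_rate(target, rated, k=3):
    """Hypothetical incremental-KNN sketch: predict the relevance rating
    of a target image feature vector as a distance-weighted vote of its
    k nearest neighbours among ALL images rated so far (seed images and
    previously rated targets alike)."""
    dist = lambda a, b: math.dist(a, b)
    neighbours = sorted(rated, key=lambda r: dist(r[0], target))[:k]
    num = sum(rating / (dist(vec, target) + 1e-9) for vec, rating in neighbours)
    den = sum(1.0 / (dist(vec, target) + 1e-9) for vec, rating in neighbours)
    return num / den

# Seeds rated by the user: (feature vector, relevance in [0, 1]) ...
rated = [((0.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
r1 = knn_rate((0.1, 0.0), rated)
# ... and each new rating is appended, influencing later predictions.
rated.append(((0.1, 0.0), r1))
r2 = knn_rate((0.2, 0.1), rated)
```

Appending `(vector, rating)` pairs after every prediction is what distinguishes this sketch from a plain KNN that consults only the seeds.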
Abstract: In recent years, numerous Human-Computer Interaction applications have exploited the capabilities of Time-of-Flight cameras to achieve increasingly comfortable and precise interactions. In particular, gesture recognition is one of the most active
fields. This work presents a new method for interacting with a virtual
object in a 3D space. Our approach is based on the fusion of depth
data, supplied by a ToF camera, with color information, supplied by an HD webcam. The hand detection procedure does not require
any learning phase and is able to concurrently manage gestures of
two hands. The system is robust to the presence of other objects or people in the scene, thanks to the use of a Kalman filter to maintain tracking of the hands.
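The tracking role of the Kalman filter can be illustrated with a minimal constant-velocity filter for one hand's image position; the matrices and noise levels are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition; state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only measure the (x, y) position
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 0.10 * np.eye(2)           # measurement noise (assumed)

x = np.zeros(4)                # initial state estimate
P = np.eye(4)                  # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z = (x, y)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Noisy detections of a hand moving diagonally across the image.
for z in [(1.0, 1.0), (2.0, 1.9), (3.1, 3.0), (4.0, 4.1)]:
    x, P = kalman_step(x, P, np.array(z))
```

The predicted position gates which detection belongs to which hand, which is how tracking keeps hand identities stable when other objects enter the scene.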
Abstract: We depend upon explanation in order to “make sense"
out of our world. And, making sense is all the more important when
dealing with change. But, what happens if our explanations are
wrong? This question is examined with respect to two types of
explanatory model. Models based on labels and categories we shall
refer to as “representations." More complex models involving
stories, multiple algorithms, rules of thumb, questions, ambiguity we
shall refer to as “compressions." Both compressions and
representations are reductions. But representations are far more
reductive than compressions. Representations can be treated as a set
of defined meanings – coherence with regard to a representation is
the degree of fidelity between the item in question and the definition
of the representation, of the label. By contrast, compressions contain
enough degrees of freedom and ambiguity to allow us to make
internal predictions so that we may determine our potential actions in
the possibility space. Compressions are explanatory via mechanism.
Representations are explanatory via category. Managers often confuse their evocation of a representation (category inclusion) with the creation of a context of compression (description of mechanism). When this type of explanatory error occurs, more errors follow. In the drive for efficiency, such substitutions are all too often proclaimed, at the manager's peril.
Abstract: A model-free robust control (MFRC) approach is proposed for position control of robot manipulators in the state space. The control approach is verified analytically to be robust against uncertainties, including external disturbances, unmodeled dynamics, and parametric uncertainties. The proposed approach offers high flexibility to work with different systems, including actuators, and can guarantee the robustness of the control system. A PUMA 560 robot driven by geared permanent magnet DC motors is simulated. The simulation results show a satisfactory performance of the control system under the technical specifications. Keywords: Model-free, robust control, position control, PUMA 560.
Abstract: The IFS is a scheme for describing and manipulating complex fractal attractors using simple mathematical models. More precisely, the most popular “fractal-based" algorithms for both representation and compression of computer images have involved some implementation of the method of Iterated Function Systems (IFS) on complete metric spaces. In this paper, a new generalized space called the Multi-Fuzzy Fractal Space is constructed. On these spaces a distance function is defined, and its completeness is proved. The completeness property of this space ensures the existence of a fixed-point theorem for the family of continuous mappings. This theorem is the fundamental result on which the IFS methods are based and on which the fractals are built. The defined mappings are proved to satisfy some generalizations of the contraction condition.
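For background, the classical (non-fuzzy) fixed-point result that underlies IFS methods is the following; the paper generalizes the underlying space, not this statement:

```latex
% Classical IFS fixed-point result (Hutchinson), included as background.
\noindent Let $(X,d)$ be a complete metric space and let
$w_i\colon X\to X$, $i=1,\dots,N$, be contractions with ratios $s_i<1$.
The Hutchinson operator
\[
  W(B)=\bigcup_{i=1}^{N} w_i(B)
\]
is a contraction with ratio $s=\max_i s_i$ on the space $\mathcal{H}(X)$
of nonempty compact subsets of $X$ equipped with the Hausdorff metric
$h$. By the Banach fixed-point theorem there is a unique attractor
$A\in\mathcal{H}(X)$ with $A=W(A)$, and $W^{n}(B)\to A$ in $h$ for every
$B\in\mathcal{H}(X)$.
```

The paper's completeness proof for the Multi-Fuzzy Fractal Space plays the role that completeness of $(\mathcal{H}(X),h)$ plays here.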
Abstract: A topologically oriented neural network is very
efficient for real-time path planning for a mobile robot in changing
environments. When a recurrent neural network is used for this purpose, combining the partial differential equation of heat transfer with the distributed potential concept of the network, the obstacle-avoidance problem of trajectory planning for a moving robot can be efficiently solved. The corresponding dimensional network represents the state variables and the topology of the robot's working space. In this paper, two approaches to the solution of this problem are
proposed. The first approach relies on the potential distribution of
attraction distributed around the moving target, acting as a unique
local extreme in the net, with the gradient of the state variables
directing the current flow toward the source of the potential heat. The
second approach considers two attractive and repulsive potential
sources to decrease the time of potential distribution. Computer simulations have been carried out to evaluate the performance of the proposed approaches.
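The first approach can be illustrated with a minimal grid sketch: the target is clamped as a heat source, the potential is relaxed toward steady state, and the robot ascends the local gradient. The grid size, obstacle wall, and iteration counts below are invented for illustration:

```python
# Heat-diffusion potential field on an 8x8 grid (illustrative sketch).
W, H = 8, 8
target = (7, 7)
obstacles = {(3, y) for y in range(1, 7)}  # a wall with gaps at y=0, y=7

u = [[0.0] * W for _ in range(H)]
for _ in range(400):                       # Jacobi relaxation to steady state
    nxt = [row[:] for row in u]
    for x in range(W):
        for y in range(H):
            if (x, y) == target:
                nxt[y][x] = 1.0            # clamped heat source
            elif (x, y) in obstacles:
                nxt[y][x] = 0.0            # obstacles stay cold
            else:
                nbrs = [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
                vals = [u[b][a] for a, b in nbrs if 0 <= a < W and 0 <= b < H]
                nxt[y][x] = sum(vals) / len(vals)
    u = nxt

# Gradient ascent from the start toward the unique maximum at the target.
pos, path = (0, 0), [(0, 0)]
for _ in range(64):
    if pos == target:
        break
    x, y = pos
    moves = [(a, b) for a, b in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
             if 0 <= a < W and 0 <= b < H and (a, b) not in obstacles]
    pos = max(moves, key=lambda m: u[m[1]][m[0]])
    path.append(pos)
```

Because the relaxed field has its only local maximum at the source, greedy ascent cannot get trapped, which is the obstacle-avoidance property the abstract attributes to the potential distribution.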
Abstract: Over the past few years, a number of efforts have
been exerted to build parallel processing systems that utilize the idle
power of LANs and PCs available in many homes and corporations.
The main advantage of these approaches is that they provide cheap
parallel processing environments for those who cannot afford the
expenses of supercomputers and parallel processing hardware.
However, most of the solutions provided are not very flexible in the use of available resources and are very difficult to install and set up.
In this paper, a multi-level web-based parallel processing system (MWPS) is designed (appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up, and easy to use.
MWPS allows three types of subscribers: simple volunteers (single
computers), super volunteers (full networks) and end users. All of
these entities are coordinated transparently through a secure web site.
Volunteer nodes provide the required processing power needed by
the system end users. There is no limit on the number of volunteer
nodes, and accordingly the system can grow indefinitely. Both
volunteer and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS
components. These components are very easy to install.
Super volunteer nodes are provided with special components that
make it possible to delegate some of the load to their inner nodes.
These inner nodes may in turn delegate some of the load to other lower-level inner nodes, and so on. It is the responsibility of the
parent super nodes to coordinate the delegation process and deliver
the results back to the user.
MWPS uses a simple behavior-based scheduler that takes into
consideration the current load and previous behavior of processing
nodes. Nodes that fulfill their contracts within the expected time get a
high degree of trust. Nodes that fail to satisfy their contract get a
lower degree of trust.
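The behavior-based trust idea can be sketched as follows; the update rule, constants, and selection heuristic are assumptions for illustration, not MWPS's actual scheduler:

```python
# Illustrative trust tracking: nodes that honour their contracts on time
# gain trust, late or failed nodes lose it (exponential moving average).
class VolunteerNode:
    def __init__(self, name, trust=0.5):
        self.name = name
        self.trust = trust              # degree of trust in [0, 1]

    def report(self, fulfilled_on_time):
        outcome = 1.0 if fulfilled_on_time else 0.0
        self.trust = 0.8 * self.trust + 0.2 * outcome
        self.trust = min(1.0, max(0.0, self.trust))

def pick_node(nodes, load):
    """Prefer trusted, lightly loaded nodes (assumed heuristic)."""
    return max(nodes, key=lambda n: n.trust / (1 + load[n.name]))

a, b = VolunteerNode("a"), VolunteerNode("b")
for ok in (True, True, True):
    a.report(ok)                        # node a keeps its contracts
for ok in (False, True, False):
    b.report(ok)                        # node b is unreliable
chosen = pick_node([a, b], load={"a": 1, "b": 2})
```

An exponential moving average forgets old behavior gradually, so a node that starts failing loses trust quickly but can earn it back, matching the contract-based description in the abstract.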
MWPS is based on the .NET framework and provides the minimal
level of security expected in distributed processing environments.
Users and processing nodes are fully authenticated. Communications
and messages between nodes are very secure. The system has been
implemented using C#.
MWPS may be used by any group of people or companies to
establish a parallel processing or grid environment.
Abstract: The paper presents a method for multivariate time
series forecasting using Independent Component Analysis (ICA) as a preprocessing tool. The idea of this approach is to do the forecasting in the space of the independent components (sources), and then to transform the results back to the original time series
space. The forecasting can be done separately and with a different
method for each component, depending on its time structure. The
paper also gives a review of the main algorithms for independent component analysis in the case of instantaneous mixture models, using second- and higher-order statistics. The method has been applied in simulation to an artificial multivariate time series
with five components, generated from three sources and a mixing matrix, randomly generated.
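The pipeline (unmix, forecast each source separately, mix back) can be sketched as follows; to keep it short, the mixing matrix is assumed known rather than estimated by an ICA algorithm such as FastICA, and the per-source forecasters are deliberately naive assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)

# Three independent sources with different time structure.
S = np.column_stack([
    np.sin(2 * np.pi * t / 20),          # periodic
    np.sign(np.sin(2 * np.pi * t / 37)), # square wave
    0.05 * t,                            # trend
])
A = rng.normal(size=(5, 3))              # mixing matrix: 5 observed series
X = S @ A.T                              # observed multivariate time series

# Unmix with the pseudo-inverse (stand-in for the ICA estimation step).
S_hat = X @ np.linalg.pinv(A).T

def forecast_source(s, horizon):
    # Pick a forecaster per source structure (illustrative rule):
    # extrapolate a strong linear trend, otherwise repeat the last cycle.
    trend = np.polyfit(np.arange(len(s)), s, 1)
    if abs(trend[0]) > 0.01:             # trend-dominated source
        return np.polyval(trend, len(s) + np.arange(horizon))
    return np.tile(s[-20:], horizon // 20 + 1)[:horizon]

Hz = 20
S_fc = np.column_stack([forecast_source(S_hat[:, i], Hz) for i in range(3)])
X_fc = S_fc @ A.T                        # transform back to observed space
```

Forecasting in the source space lets each component get a method matched to its own dynamics, which is the motivation stated in the abstract.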
Abstract: This paper presents a novel genetic algorithm, termed
the Optimum Individual Monogenetic Algorithm (OIMGA) and
describes its hardware implementation. As the monogenetic strategy
retains only the optimum individual, the memory requirement is
dramatically reduced and no crossover circuitry is needed, thereby
ensuring the requisite silicon area is kept to a minimum.
Consequently, depending on application requirements, OIMGA
allows the investigation of solutions that warrant either larger GA
populations or individuals of greater length. The results given in this
paper demonstrate that both the performance of OIMGA and its
convergence time are superior to those of existing hardware GA
implementations. Local convergence is achieved in OIMGA by
retaining elite individuals, while population diversity is ensured by
continually searching for the best individuals in fresh regions of the
search space.
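A software sketch of the monogenetic idea (elite-only retention, mutation without crossover, restarts in fresh regions of the search space); the parameters and the OneMax test function are illustrative assumptions, and the paper's actual contribution, the hardware implementation, is not captured here:

```python
import random

def oimga_like(fitness, length, inner_iters=200, outer_restarts=5, seed=1):
    """Monogenetic-style search: keep only the optimum individual,
    improve it by single-bit mutation (no crossover, no population),
    and restart from fresh random regions to preserve diversity."""
    rng = random.Random(seed)
    best = None
    for _ in range(outer_restarts):
        elite = [rng.randint(0, 1) for _ in range(length)]  # fresh region
        for _ in range(inner_iters):
            cand = elite[:]
            cand[rng.randrange(length)] ^= 1                # mutate one bit
            if fitness(cand) >= fitness(elite):
                elite = cand                                # retain optimum only
        if best is None or fitness(elite) > fitness(best):
            best = elite
    return best

# OneMax: maximise the number of 1-bits in the chromosome.
best = oimga_like(fitness=sum, length=16)
```

Storing a single individual instead of a population is exactly what removes the crossover circuitry and most of the memory in the hardware version.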
Abstract: A computational platform is presented in this
contribution. It has been designed as a virtual laboratory to be used
for exploring optimization algorithms in biological problems. This
platform is built on a blackboard-based agent architecture. As a test
case, the version of the platform presented here is devoted to the
study of protein folding, initially with a bead-like description of the
chain and with the widely used model of hydrophobic and polar
residues (HP model). Some details of the platform design are
presented along with its capabilities and also are revised some
explorations of the protein folding problems with different types of
discrete space. It is also shown the capability of the platform to
incorporate specific tools for the structural analysis of the runs in
order to understand and improve the optimization process.
Accordingly, the results obtained demonstrate that the ensemble of
computational tools into a single platform is worthwhile by itself,
since experiments developed on it can be designed to fulfill different
levels of information in a self-consistent fashion. By now, it is being
explored how an experiment design can be useful to create a
computational agent to be included within the platform. These
inclusions of designed agents –or software pieces– are useful for the
better accomplishment of the tasks to be developed by the platform.
Clearly, while the number of agents increases the new version of the
virtual laboratory thus enhances in robustness and functionality.
Abstract: There are two common types of operational research techniques: optimisation and metaheuristic methods. The latter may be defined as a sequential process that intelligently performs exploration and exploitation, adopted from natural intelligence and strong inspiration, to form several iterative searches. The aim is to effectively determine near-optimal solutions in a solution space. In this work, a metaheuristic called Ant Colony Optimisation (ACO), inspired by the foraging behaviour of ants, was adapted to find optimal solutions of eight non-linear continuous mathematical models. Considering a solution space in a specified region of each model, sub-solutions may contain a global optimum or multiple local optima. Moreover, the algorithm has several common parameters (the numbers of ants, moves, and iterations) which act as the algorithm's drivers. A series of computational experiments for initialising the parameters was conducted using the Rigid Simplex (RS) and Modified Simplex (MSM) methods. Experimental results were analysed in terms of the best-so-far solutions, mean, and standard deviation, leading to a recommendation of proper level settings of the ACO parameters for all eight functions. These parameter settings can be applied as a guideline for future uses of ACO, promoting ease of use of ACO in real industrial processes. It was found that the results obtained from MSM were quite similar to those gained from RS. However, comparing the results with noise standard deviations of 1 and 3, MSM reaches optimal solutions more efficiently than RS in terms of speed of convergence.
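A simplified continuous-domain ACO sketch in the spirit of solution-archive variants (e.g., Socha and Dorigo's ACO_R); the sampling rule and all parameter values are assumptions, not the paper's exact algorithm:

```python
import random

def aco_continuous(f, bounds, n_ants=10, archive=5, iters=100, seed=2):
    """Continuous ACO sketch: an archive of good solutions plays the
    role of pheromone; each ant samples a new point from a Gaussian
    centred on a randomly chosen archive member, with spread tied to
    the archive's per-dimension dispersion."""
    rng = random.Random(seed)
    dim = len(bounds)
    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    sols = sorted((rand_point() for _ in range(archive)), key=f)
    for _ in range(iters):
        ants = []
        for _ in range(n_ants):
            guide = sols[rng.randrange(len(sols))]
            pt = []
            for d in range(dim):
                sigma = sum(abs(s[d] - guide[d]) for s in sols) / len(sols) + 1e-12
                lo, hi = bounds[d]
                pt.append(min(hi, max(lo, rng.gauss(guide[d], sigma))))
            ants.append(pt)
        sols = sorted(sols + ants, key=f)[:archive]  # elitist pheromone update
    return sols[0]

# Minimise the sphere function on [-5, 5]^2 as a toy continuous model.
best = aco_continuous(lambda p: sum(x * x for x in p), [(-5, 5)] * 2)
```

Here `n_ants`, `iters`, and the archive size play the role of the "common parameters" whose settings the paper tunes via RS and MSM.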
Abstract: This article is devoted to the numerical solution of
large-scale quadratic eigenvalue problems. Such problems arise in
a wide variety of applications, such as the dynamic analysis of
structural mechanical systems, acoustic systems, fluid mechanics,
and signal processing. We first introduce a generalized second-order
Krylov subspace based on a pair of square matrices and two initial
vectors and present a generalized second-order Arnoldi process for
constructing an orthonormal basis of the generalized second-order
Krylov subspace. Then, by using the projection technique and the
refined projection technique, we propose a restarted generalized
second-order Arnoldi method and a restarted refined generalized
second-order Arnoldi method for computing some eigenpairs of large-scale quadratic eigenvalue problems. Some theoretical results are also
presented. Some numerical examples are presented to illustrate the
effectiveness of the proposed methods.
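For orientation, a small dense quadratic eigenvalue problem can be solved directly by companion linearization; the paper's Krylov methods target problems far too large for this, so the following is background only (matrices are random for illustration):

```python
import numpy as np

# Quadratic eigenvalue problem: (lambda^2 M + lambda D + K) x = 0.
n = 3
rng = np.random.default_rng(1)
M = np.eye(n)
D = rng.normal(size=(n, n))
K = rng.normal(size=(n, n))

# First companion linearization: A z = lambda z with z = [x; lambda x],
# doubling the dimension to an ordinary eigenproblem.
Z = np.zeros((n, n))
I = np.eye(n)
Minv = np.linalg.inv(M)
A = np.block([[Z, I],
              [-Minv @ K, -Minv @ D]])
eigvals, eigvecs = np.linalg.eig(A)

# Residual check for one computed eigenpair (x = top block of z).
lam = eigvals[0]
x = eigvecs[:n, 0]
residual = np.linalg.norm((lam**2 * M + lam * D + K) @ x)
```

The doubling of dimension is precisely why large-scale problems call for second-order Krylov subspaces that work with M, D, K directly instead of forming the companion matrix.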
Abstract: In this paper, a novel approach is presented
for designing multiplier-free state-space digital filters. The
multiplier-free design is obtained by finding power-of-2 coefficients
and also quantizing the state variables to power-of-2
numbers. Expressions for the noise variance are derived for the
quantized state vector and the output of the filter. A “structure-transformation matrix" is incorporated in these expressions. It
is shown that quantization effects can be minimized by properly
designing the structure-transformation matrix. Simulation
results are very promising and illustrate the design algorithm.
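A tiny sketch of the power-of-2 quantization step; the helper is illustrative only and does not reproduce the paper's structure-transformation design:

```python
import math

def nearest_power_of_two(c):
    """Quantize a coefficient to the nearest signed power of 2
    (nearest in the log domain, via rounding log2 of the magnitude)."""
    if c == 0:
        return 0.0
    sign = 1.0 if c > 0 else -1.0
    e = round(math.log2(abs(c)))
    return sign * 2.0 ** e

coeffs = [0.24, -0.5, 0.09, 1.7]
quantized = [nearest_power_of_two(c) for c in coeffs]
# Multiplying by such coefficients reduces to binary shifts in hardware,
# which is what makes the resulting filter multiplier-free.
```

The noise-variance expressions in the paper quantify exactly the error this rounding introduces, and the structure-transformation matrix is chosen to keep that error small.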
Abstract: Sequential pattern mining is a challenging task in the data mining area with many applications. One such application is mining patterns from weblogs. Weblogs are highly dynamic, and some entries may become obsolete over time. In addition, users may frequently change the threshold value during the mining process until the required output or interesting rules are obtained. Some recently proposed algorithms for mining weblogs build the tree with two scans and always consume a large amount of time and space. In this paper, we build a Revised PLWAP with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not required to delete or maintain the information of nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining. It is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used a benchmark weblog dataset and found that the performance of the proposed tree is encouraging compared to some recently proposed approaches.
Abstract: The concept of privacy, seen in connection with the consumer's private space and personalization, has recently gained greater importance as a consequence of organizations' increasing marketing efforts based on the capture, processing, and use of consumers' personal data. The paper intends to provide a definition of the consumer's private space based on the types of personal data the consumer is willing to disclose, to assess the attitude toward personalization, and to identify the means preferred by consumers to control their personal data and defend their private space. Several implications generated by the definition of the consumer's private space are identified and weighted from both the consumers' and the organizations' perspectives.
Abstract: The fault detection and diagnosis of complicated production processes is one of the essential tasks needed to run a process safely with good final product quality. Unexpected events occurring in the process may have a serious impact on it. In this work, a triangular representation of process measurement data obtained on an on-line basis is evaluated using a simulated process. The effect of using linear and nonlinear reduced spaces is also tested. Their diagnosis performance was demonstrated using multivariate fault data. It is shown that the nonlinear-technique-based diagnosis method produces more reliable results and outperforms the linear method. The use of
appropriate reduced space yielded better diagnosis performance. The
presented diagnosis framework is different from existing ones in that it
attempts to extract the fault pattern in the reduced space, not in the
original process variable space. The use of reduced model space helps
to mitigate the sensitivity of the fault pattern to noise.
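As background for the reduced-space idea, here is a minimal linear (PCA) reduced-space fault indicator on synthetic data; the data dimensions and the injected fault are invented, and the nonlinear reduction the paper favours is not shown:

```python
import numpy as np

rng = np.random.default_rng(3)

# Normal operating data: 200 samples of 5 correlated process variables
# driven by 2 latent factors plus noise (synthetic, for illustration).
latent = rng.normal(size=(200, 2))
Wmix = rng.normal(size=(2, 5))
X = latent @ Wmix + 0.1 * rng.normal(size=(200, 5))

# Linear reduced space via PCA on the centred data.
mu = X.mean(axis=0)
Xc = X - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
P = Vt[:k].T                        # loadings spanning the reduced space
var = (s[:k] ** 2) / (len(X) - 1)   # variance captured per component

def t2(x):
    """Hotelling T^2 statistic of a sample in the reduced space:
    large values flag a deviation from normal operation."""
    score = (x - mu) @ P
    return float(np.sum(score ** 2 / var))

normal_sample = latent[0] @ Wmix
faulty_sample = normal_sample + 5.0 * Wmix[0]  # shift along a process direction
```

Working with the k-dimensional scores rather than all process variables is what the abstract means by extracting the fault pattern in the reduced space, and the averaging over discarded directions is what mitigates noise sensitivity.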
Abstract: In this paper, we use a one-step iteration scheme to approximate common fixed points of two quasi-asymptotically nonexpansive mappings. We prove weak and strong convergence theorems in a uniformly convex Banach space. Our results generalize the corresponding results of Yao and Chen [15] to a wider class of mappings, extend those of Khan, Abbas and Khan [4] to an improved one-step iteration scheme without any condition, and improve upon many others in the literature.
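A representative one-step iteration for two mappings can be sketched numerically; the exact scheme and coefficient conditions of the paper are not given in the abstract, so the convex-combination rule and the toy mappings below are illustrative assumptions:

```python
# One-step iteration x_{n+1} = (1 - a - b) x_n + a T1(x_n) + b T2(x_n)
# for two nonexpansive maps on the reals sharing the fixed point 2.0.
def T1(x):
    return 0.5 * x + 1.0    # nonexpansive; fixed point x = 2

def T2(x):
    return 0.8 * x + 0.4    # nonexpansive; fixed point x = 2

x = 10.0
alpha, beta = 0.3, 0.3      # convex-combination weights (assumed)
for n in range(200):
    x = (1 - alpha - beta) * x + alpha * T1(x) + beta * T2(x)
```

Both mappings are evaluated once per step at the same iterate, which is what makes the scheme "one-step" as opposed to two-step (Ishikawa-type) iterations that evaluate one mapping at an intermediate point.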