Abstract: We show that in a two-channel sampling series expansion
of band-pass signals, any finitely many missing samples can
always be recovered via oversampling in a larger band-pass region.
We also obtain an analogous result for multi-channel oversampling
of harmonic signals.
Abstract: Due to the tremendous amount of information provided
by the World Wide Web (WWW), developing methods for mining
the structure of web-based documents is of considerable interest. In
this paper we present a similarity measure for graphs representing
web-based hypertext structures. Our similarity measure is based
on a novel representation of a graph as linear integer strings,
whose components represent structural properties of the graph. The
similarity of two graphs is then defined as the optimal alignment of
the underlying property strings. We thus apply the well-known
technique of sequence alignment to a novel and challenging
problem: measuring the structural similarity of generalized trees.
In other words, we first transform our graphs, considered as high-
dimensional objects, into linear structures, and then derive similarity
values from the alignments of the property strings in order to
measure the structural similarity of generalized trees. Hence, we
reduce a graph similarity problem to a string similarity problem in
order to develop an efficient graph similarity measure. We demonstrate
that our similarity measure captures important structural information
by applying it to two test sets consisting of graphs representing
web-based document structures.
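The two-step scheme in this abstract (encode each graph as a property string, then score the optimal alignment) can be sketched as follows. The choice of property string (out-degree sequence in breadth-first order), the unit alignment scores, and the normalization into [0, 1] are all illustrative assumptions, not the paper's actual encoding.

```python
# Sketch of the string-alignment approach to structural graph similarity.
# Assumptions (not from the abstract): the property string is the sequence
# of out-degrees in BFS order, and alignment uses unit match/mismatch/gap
# scores.
from collections import deque

def property_string(children, root):
    """Out-degree sequence of a rooted tree in BFS order (assumed encoding)."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        kids = children.get(node, [])
        order.append(len(kids))
        queue.extend(kids)
    return order

def alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score of two integer strings."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]

def similarity(a, b):
    """Normalize the optimal alignment score into [0, 1] (one possible choice)."""
    if not a and not b:
        return 1.0
    return max(0.0, alignment_score(a, b)) / max(len(a), len(b))
```

Identical property strings score 1.0, while structurally unrelated strings fall toward 0.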
Abstract: In communication networks where communication nodes are connected with finite capacity transmission links, the packet inter-arrival times are strongly correlated with the packet length and the link capacity (or the packet service time). Such correlation affects the system performance significantly, but little attention has been paid to this issue. In this paper, we propose a mathematical framework to study the impact of the correlation between the packet service times and the packet inter-arrival times on system performance. With our mathematical model, we analyze the system performance, e.g., the unfinished work of the system, and show that the correlation affects the system performance significantly. Some numerical examples are also provided.
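The effect described above can be illustrated with a minimal simulation, not the paper's analytical model: waiting time evolves by the Lindley recursion, and correlation is introduced by forbidding a packet from arriving sooner than its predecessor's transmission time on an upstream link of the same kind. All parameter names and distributions here are illustrative assumptions.

```python
# Sketch: Lindley recursion for waiting time in a single-server queue,
# comparing independent vs. correlated service/inter-arrival times.
# Exponential packet lengths and arrival gaps are assumed for illustration.
import random

def mean_wait(n, lam, capacity, upstream=None, seed=1):
    """Mean waiting time via the Lindley recursion W' = max(0, W + S - A).
    If `upstream` is given, inter-arrival times are correlated with packet
    length: a packet cannot follow its predecessor sooner than that
    predecessor's transmission time on the upstream link."""
    rng = random.Random(seed)
    wait = total = 0.0
    prev_service = prev_len = 0.0
    for _ in range(n):
        length = rng.expovariate(1.0)      # packet length (mean 1)
        gap = rng.expovariate(lam)         # tentative inter-arrival time
        if upstream is not None:
            gap = max(gap, prev_len / upstream)
        wait = max(0.0, wait + prev_service - gap)
        total += wait
        prev_service, prev_len = length / capacity, length
    return total / n
```

With `upstream` equal to `capacity`, arrivals can never outpace the server and the queue stays empty; an independent-arrivals run at the same load shows substantial waiting, an extreme illustration of how the correlation changes the unfinished work.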
Abstract: In this paper we propose a new method for
simultaneously generating multiple quantiles corresponding to given
probability levels from data streams and massive data sets. This
method provides a basis for the development of single-pass, low-storage
quantile estimation algorithms, which differ in complexity, storage
requirements and accuracy. We demonstrate that such algorithms may
perform well even for heavy-tailed data.
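One simple member of the single-pass, low-storage family the abstract describes is a uniform reservoir sample with quantiles read off the sorted sample. This baseline is an illustration of the problem setting, not the authors' method; the capacity and seed are arbitrary choices.

```python
# Single-pass, fixed-storage quantile estimation via reservoir sampling
# (a simple baseline, not the paper's algorithm).
import random

class ReservoirQuantiles:
    def __init__(self, capacity=256, seed=0):
        self.capacity = capacity
        self.sample = []
        self.seen = 0
        self.rng = random.Random(seed)

    def update(self, x):
        """Process one stream element in O(1) time and O(capacity) space."""
        self.seen += 1
        if len(self.sample) < self.capacity:
            self.sample.append(x)
        else:
            j = self.rng.randrange(self.seen)   # algorithm R
            if j < self.capacity:
                self.sample[j] = x

    def quantile(self, p):
        """Estimate the p-quantile from the retained sample."""
        s = sorted(self.sample)
        return s[min(len(s) - 1, int(p * len(s)))]
```

Multiple probability levels can be answered from the same pass, matching the "simultaneous quantiles" setting, though accuracy for extreme quantiles of heavy-tailed data is exactly where more refined algorithms are needed.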
Abstract: Process measurement is the task of empirically and objectively assigning numbers to the properties of business processes in such a way as to describe them. Desirable attributes to study and measure include complexity, cost, maintainability, and reliability. In our work we will focus on investigating process complexity. We define process complexity as the degree to which a business process is difficult to analyze, understand or explain. One way to analyze a process's complexity is to use a process control-flow complexity measure. In this paper, an attempt has been made to evaluate the control-flow complexity measure in terms of Weyuker's properties. Weyuker's properties must be satisfied by any complexity measure to qualify as a good and comprehensive one.
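The control-flow complexity (CFC) measure evaluated here can be sketched concretely. Following Cardoso's definition (assumed, since the abstract does not spell it out), an XOR-split with n outgoing branches contributes n, an OR-split contributes 2^n - 1 (the number of non-empty branch subsets), and an AND-split contributes 1:

```python
# Sketch of a process control-flow complexity (CFC) measure, following
# Cardoso's definition (an assumption; the abstract does not define it):
#   XOR-split with fan-out n -> n
#   OR-split  with fan-out n -> 2**n - 1
#   AND-split                -> 1
def control_flow_complexity(splits):
    """`splits`: list of (kind, fan_out) pairs for the process's split nodes."""
    total = 0
    for kind, n in splits:
        if kind == "xor":
            total += n
        elif kind == "or":
            total += 2 ** n - 1
        elif kind == "and":
            total += 1
        else:
            raise ValueError(f"unknown split kind: {kind}")
    return total
```

For example, a process with one 3-way XOR-split, one 2-way OR-split, and one AND-split scores 3 + 3 + 1 = 7; it is this kind of measure that the paper checks against Weyuker's properties.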
Abstract: The game of Maundy Block is the three-player variant
of Maundy Cake, a classical combinatorial game. Even though the
solution of Maundy Cake is trivial to determine, solving Maundy
Block is challenging because of the identification of queer games,
i.e., games in which no player has a winning strategy.
Abstract: To analyze the behavior of Petri nets, the accessibility
graph and Model Checking are widely used. However, if the
analyzed Petri net is unbounded, then the accessibility graph becomes
infinite and Model Checking cannot be used even for small Petri
nets. ECATNets [2] are a category of algebraic Petri nets. The main
feature of ECATNets is their sound and complete semantics based on
rewriting logic [8] and its language Maude [9]. ECATNets may be
analyzed using the accessibility analysis and Model Checking
techniques defined in Maude. However, these two techniques, as
supported by Maude, do not work with infinite-state systems either. As
a category of Petri nets, ECATNets can be unbounded and thus give
rise to infinite-state systems. In order to know whether accessibility
analysis and Model Checking in Maude can be applied to an ECATNet,
we propose in this paper an algorithm that detects whether the
ECATNet is bounded or not. Moreover, we propose a rewriting-logic-based
tool implementing this algorithm. We show that the development of this
tool in the Maude system is facilitated by the reflectivity of rewriting
logic. Indeed, the self-interpretation of this logic allows us both to
model an ECATNet and to act on it.
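The classical boundedness test this line of work builds on is the Karp-Miller coverability construction: a net is unbounded exactly when some reachable marking strictly covers an ancestor marking, since the covering gap can then be pumped arbitrarily. The sketch below applies this to a plain place/transition net; it is an illustration of the principle, not the paper's Maude/rewriting-logic implementation for ECATNets.

```python
# Karp-Miller-style boundedness check for a plain place/transition net
# (illustrative sketch; the paper's tool works on ECATNets in Maude).
OMEGA = float("inf")

def is_bounded(m0, transitions):
    """`m0`: tuple of initial tokens per place.
    `transitions`: list of (pre, post) per-place token-count tuples."""
    def fire(m, pre, post):
        if any(m[p] < pre[p] for p in range(len(m))):
            return None                 # transition not enabled
        return tuple(m[p] - pre[p] + post[p] for p in range(len(m)))

    stack = [(m0, [])]                  # (marking, ancestors on this path)
    seen = set()
    while stack:
        m, path = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        for pre, post in transitions:
            nxt = fire(m, pre, post)
            if nxt is None:
                continue
            # omega-acceleration: strictly covering an ancestor means the
            # surplus tokens can be pumped without limit
            for anc in path + [m]:
                if nxt != anc and all(nxt[p] >= anc[p] for p in range(len(m))):
                    nxt = tuple(OMEGA if nxt[p] > anc[p] else nxt[p]
                                for p in range(len(m)))
            if OMEGA in nxt:
                return False            # some place is unbounded
            stack.append((nxt, path + [m]))
    return True
```

A single place fed by a source transition is immediately flagged unbounded, while a net that only moves a token between places terminates with a finite accessibility graph.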
Abstract: In image processing and visualization, two bitmapped
images must be compared pixel by pixel, which takes a lot of
computational time, whereas the comparison of two vector-based
images is significantly faster. Raster graphics images can sometimes
be approximately converted into vector-based images by various
techniques. After conversion, the problem of comparing two raster
graphics images can be reduced to the problem of comparing vector
graphics images. Hence, the problem of pixel-by-pixel comparison
can be reduced to the problem of polynomial comparison. In
computer-aided geometric design (CAGD), vector graphics images are
compositions of curves and surfaces, where curves are defined by a
sequence of control points and their polynomials. In this paper, the
control points are used to compare curves. Curves that have been
relocated or rotated are treated as equivalent, while curves that
have been differently scaled are considered similar. This paper
proposes an algorithm for comparing polynomial curves for
equivalence and similarity by using their control points. In addition,
the geometric object-oriented database used to keep the curve
information has been defined in XML format for further use in
curve comparisons.
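The control-point idea can be sketched as follows: two ordered control polygons are congruent (equivalent under translation and rotation; this simple test also admits reflection) exactly when their pairwise distance lists agree, and similar when the lists agree up to one uniform scale factor. The tolerance and the distance-matrix test are assumptions for illustration, not the paper's algorithm.

```python
# Sketch of control-point-based curve comparison: equivalence under
# translation/rotation via equal pairwise distances, similarity via a
# single uniform scale factor (illustrative, not the paper's algorithm).
import math

def _dists(pts):
    """Upper-triangle pairwise distances of an ordered control polygon."""
    return [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]

def equivalent(a, b, tol=1e-9):
    """Same shape and size (translation/rotation, also reflection)."""
    if len(a) != len(b):
        return False
    return all(abs(x - y) <= tol for x, y in zip(_dists(a), _dists(b)))

def similar(a, b, tol=1e-9):
    """Same shape up to one uniform scale factor."""
    if len(a) != len(b):
        return False
    da, db = _dists(a), _dists(b)
    nonzero = [(x, y) for x, y in zip(da, db) if x > tol or y > tol]
    if not nonzero:
        return True
    x0, y0 = nonzero[0]
    if y0 <= tol:
        return False
    s = x0 / y0
    return all(abs(x - s * y) <= tol * max(1.0, s) for x, y in zip(da, db))
```

A translated copy of a control polygon tests equivalent, while a copy scaled by a constant factor tests similar but not equivalent, matching the distinction drawn in the abstract.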
Abstract: Bagging and boosting are among the most popular resampling ensemble methods that generate and combine a diversity of classifiers using the same learning algorithm for the base classifiers. Boosting algorithms are considered stronger than bagging on noise-free data. However, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble using a voting methodology over bagging and boosting ensembles with 10 sub-classifiers in each one. We performed a comparison with simple bagging and boosting ensembles with 25 sub-classifiers, as well as other well-known combining methods, on standard benchmark datasets, and the proposed technique was the most accurate.
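The combination step described above can be sketched as a plain majority vote over the per-classifier predictions of the two ensembles. Training of the 10 bagging and 10 boosting sub-classifiers is omitted; only the voting methodology is shown, and the function names are illustrative.

```python
# Sketch of the combination step only: majority vote over the predictions
# of a bagging ensemble and a boosting ensemble (training omitted).
from collections import Counter

def vote(ensemble_predictions):
    """`ensemble_predictions`: one list of predicted labels per classifier;
    returns the per-sample majority label."""
    n_samples = len(ensemble_predictions[0])
    combined = []
    for i in range(n_samples):
        ballots = Counter(preds[i] for preds in ensemble_predictions)
        combined.append(ballots.most_common(1)[0][0])
    return combined

def bagging_boosting_vote(bagging_preds, boosting_preds):
    # each argument: e.g. 10 lists of predicted labels, one per sub-classifier
    return vote(bagging_preds + boosting_preds)
```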
Abstract: Software developed for a specific customer under contract
typically undergoes a period of testing by the customer before
acceptance. This is known as user acceptance testing and the process
can reveal both defects in the system and requests for changes to
the product. This paper uses nonhomogeneous Poisson processes to
model a real user acceptance data set from a recently developed
system. In particular, a split Poisson process is shown to provide an
excellent fit to the data. The paper explains how this model can be
used to aid the allocation of resources through the accurate prediction
of occurrences both during the acceptance testing phase and before
this activity begins.
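The prediction step for a nonhomogeneous Poisson process rests on its mean value function m(t): the expected number of occurrences in an interval is m(t2) - m(t1). The Goel-Okumoto form below is an illustrative choice of m, not the split Poisson model fitted in the paper, and the parameter values are arbitrary.

```python
# Sketch of NHPP-based occurrence prediction. The Goel-Okumoto mean
# value function m(t) = a * (1 - exp(-b t)) is an illustrative choice,
# not the paper's split Poisson model.
import math

def mean_value(t, a, b):
    """Expected cumulative occurrences by time t (Goel-Okumoto form)."""
    return a * (1.0 - math.exp(-b * t))

def expected_occurrences(t1, t2, a, b):
    """Expected occurrences in the interval (t1, t2]."""
    return mean_value(t2, a, b) - mean_value(t1, a, b)
```

With a = 100 and b = 0.1, about 63 occurrences are expected in the first 10 time units and only about 23 in the next 10, the kind of front-loaded forecast that supports resource allocation before and during acceptance testing.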