Abstract: The paper considers a single-server queue with fixed-size
batch Poisson arrivals and exponential service times, a model
that is useful for a buffer that accepts messages arriving as fixed-size
batches of packets and releases them one packet at a time. Transient
performance measures for queues have long been recognized as
being complementary to the steady-state analysis. The focus of the
paper is on the use of the functions that arise in the analysis of the
transient behaviour of the queuing system. The paper exploits
practical modelling to obtain a solution to the integral equation
encountered in the analysis. Results obtained indicate that under
heavy load conditions, there is significant disparity in the statistics
between the transient and steady state values.
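The transient-versus-steady-state gap under heavy load can be illustrated numerically. The sketch below (all parameter values and the state-space truncation are illustrative assumptions, not the paper's) integrates the Chapman-Kolmogorov equations of a single-server queue with fixed-size batch Poisson arrivals by simple Euler steps and compares the mean queue length early in the horizon with its near-steady-state value:

```python
# Hedged sketch: forward-Euler integration of the Chapman-Kolmogorov
# equations for a queue with batch Poisson arrivals (batch size b, batch
# rate lam) and exponential service (rate mu), truncated at n_max states.

def transient_mean(lam, mu, b, t_end, n_max=100, dt=0.01):
    """Mean number in system at time t_end, starting from an empty buffer."""
    p = [0.0] * (n_max + 1)
    p[0] = 1.0
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (n_max + 1)
        for n in range(n_max + 1):
            out = lam if n + b <= n_max else 0.0  # batch arrival leaves state n
            if n > 0:
                out += mu                          # service completion
            dp[n] -= out * p[n]
            if n + b <= n_max:
                dp[n + b] += lam * p[n]           # arrival adds b customers
            if n > 0:
                dp[n - 1] += mu * p[n]            # departure removes one
        p = [pi + dt * di for pi, di in zip(p, dp)]
    return sum(n * pn for n, pn in enumerate(p))

lam, mu, b = 0.28, 1.0, 3   # utilisation rho = lam * b / mu = 0.84 (heavy load)
early = transient_mean(lam, mu, b, t_end=2.0)
late = transient_mean(lam, mu, b, t_end=100.0)
print(round(early, 2), round(late, 2))  # early transient mean is far below
```

At this utilisation the early transient mean sits well below the long-run value, consistent with the disparity the abstract reports.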
Abstract: Although cellular wireless communication systems are
subject to short- and long-term fading, the effect of the wireless
channel has largely been ignored in most teletraffic assessment
research. In this paper, a mathematical teletraffic model is proposed
to estimate blocking and forced termination probabilities of cellular
wireless networks as a result of teletraffic behavior as well as the
outage of the propagation channel. To evaluate the proposed
teletraffic model, gamma inter-arrival and general service time
distributions have been considered based on wireless channel fading
effect. The performance is evaluated and compared with that of the
classical model. The proposed model is examined under different
operational conditions, which cover not only the arrival-rate
process but also different faded-channel models.
Abstract: Accounts of language acquisition differ significantly in their treatment of the role of prediction in language learning. In particular, nativist accounts posit that probabilistic learning about words and word sequences has little to do with how children come to use language. The accuracy of this claim was examined by testing whether distributional probabilities and frequency contributed to how well 3- to 4-year-olds repeat simple word chunks. Corresponding chunks were the same length, expressed similar content, and were all grammatically acceptable, yet the results of the study showed marked differences in performance when overall distributional frequency varied. It was found that a distributional model of language predicted the empirical findings better than a number of other models, replicating earlier findings and showing that children attend to distributional probabilities in an adult corpus. This suggests that language learning is driven more by prediction and error than by the abstract rules that nativist accounts posit.
Abstract: Seemingly simple probabilities in the m-player game bingo have never been calculated. These probabilities include expected game length and the expected number of winners on a given turn. The difficulty in probabilistic analysis lies in the subtle interdependence among the m-many bingo cards in play. In this paper, the game "i got it!", a bingo variant, is considered. This variation weakens the inter-player dependence enough to allow probabilistic analysis that is not possible for traditional bingo. The probability of winning in exactly k turns is calculated for a one-player game. Given a game of m-many players, the expected game length and tie probability are calculated. With these calculations, the game's interesting payout scheme is considered.
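For contrast with the variant analysed above, the one-player game length that the abstract says is hard for traditional bingo is easy to estimate by simulation. The sketch below uses ordinary 75-ball bingo rules (5x5 card, free centre, win on any row, column, or diagonal); it is a Monte Carlo baseline, not the paper's analytical method for "i got it!":

```python
# Hedged sketch: Monte Carlo estimate of the number of calls a single
# standard bingo card needs to complete a line. Card layout and win
# condition follow ordinary 75-ball bingo, not the "i got it!" variant.

import random

def game_length(rng):
    # Columns B, I, N, G, O each draw 5 numbers from successive ranges of 15.
    card = [rng.sample(range(15 * c + 1, 15 * c + 16), 5) for c in range(5)]
    marked = [[False] * 5 for _ in range(5)]
    marked[2][2] = True                      # free centre square
    pos = {card[c][r]: (c, r) for c in range(5) for r in range(5)}
    del pos[card[2][2]]                      # centre needs no call
    for t, ball in enumerate(rng.sample(range(1, 76), 75), start=1):
        if ball in pos:
            c, r = pos[ball]
            marked[c][r] = True
            col = any(all(marked[c][r] for r in range(5)) for c in range(5))
            row = any(all(marked[c][r] for c in range(5)) for r in range(5))
            diag = all(marked[i][i] for i in range(5)) or \
                   all(marked[i][4 - i] for i in range(5))
            if col or row or diag:
                return t
    return 75

rng = random.Random(1)
mean_len = sum(game_length(rng) for _ in range(2000)) / 2000
print(round(mean_len, 1))  # average calls until a one-player win
```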
Abstract: In this paper, a mathematical model is proposed to
estimate the dropping probabilities of cellular wireless networks by
queueing handoff calls instead of reserving guard channels. Usually, prioritized
handling of handoff calls is done with the help of guard channel
reservation. To evaluate the proposed model, gamma inter-arrival and
general service time distributions have been considered. Prevention of
some attempted calls from reaching the switching center, due
to electromagnetic propagation failure or whimsical user behaviour
(missed calls, insufficient prepaid balance, etc.), makes the inter-arrival
time of the input traffic follow a gamma distribution. The performance is
evaluated and compared with that of the guard channel scheme.
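The guard channel scheme used as the baseline here is a standard birth-death model; the sketch below computes its new-call blocking and handoff dropping probabilities. All numeric parameters are illustrative assumptions, and the model uses the textbook exponential-holding-time form rather than the paper's gamma/general-distribution setting:

```python
# Hedged sketch of the classical guard-channel baseline: C channels, g of
# them reserved for handoffs, Poisson new and handoff arrivals, exponential
# holding times. New calls are admitted only while occupancy < C - g.

def guard_channel_probs(C, g, lam_new, lam_hoff, mu):
    """Return (new-call blocking probability, handoff dropping probability)."""
    thresh = C - g
    weights = [1.0]                         # unnormalised steady-state weights
    for n in range(1, C + 1):
        rate_in = lam_new + lam_hoff if n - 1 < thresh else lam_hoff
        weights.append(weights[-1] * rate_in / (n * mu))
    total = sum(weights)
    probs = [w / total for w in weights]
    p_block_new = sum(probs[thresh:])       # occupancy at or above threshold
    p_drop_hoff = probs[C]                  # all channels busy
    return p_block_new, p_drop_hoff

pb, pd = guard_channel_probs(C=10, g=2, lam_new=4.0, lam_hoff=1.0, mu=1.0)
print(pb, pd)  # reserving channels keeps handoff dropping below new-call blocking
```

With g = 0 and no handoff traffic the recursion collapses to the ordinary Erlang-B formula, which is a useful sanity check.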
Abstract: In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer prove Bayes' Formula. We also describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
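For reference, the Bayes' Formula mentioned above is the standard conditional-probability identity; in its usual finite-partition statement (the abstract does not specify which form is formalized in Isabelle/HOL):

```latex
P(A_i \mid B) \;=\; \frac{P(B \mid A_i)\,P(A_i)}{\sum_{j} P(B \mid A_j)\,P(A_j)},
\qquad \{A_j\} \text{ a partition of the sample space},\; P(B) > 0.
```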
Abstract: Influence diagrams (IDs) are one of the most commonly used graphical decision models for reasoning under uncertainty. The quantification of IDs, which consists in defining conditional probabilities for chance nodes and utility functions for value nodes, is not always obvious. In fact, decision makers cannot always provide exact numerical values, and in some cases it is easier for them to specify qualitative preference orders. This work proposes an adaptation of standard IDs to the qualitative framework based on possibility theory.
Abstract: In this paper, we propose a texture-feature-based
language identification method using wavelet-domain BDIP (block difference
of inverse probabilities) and BVLC (block variance of local
correlation coefficients) features and FFT (fast Fourier transform)
feature. In the proposed method, wavelet subbands are first obtained
by wavelet transform from a test image and denoised by Donoho's
soft-thresholding. BDIP and BVLC operators are next applied to the
wavelet subbands. FFT blocks are also obtained by 2D (two-dimensional)
FFT from the blocks into which the test image is
partitioned. Some significant FFT coefficients in each block are
selected and the magnitude operator is applied to them. Moments for each
subband of BDIP and BVLC and for each magnitude of significant
FFT coefficients are then computed and fused into a feature vector. In
classification, a stabilized Bayesian classifier, which adopts variance
thresholding, searches the training feature vector most similar to the
test feature vector. Experimental results show that the proposed
method with the three operations yields excellent language
identification even with rather low feature dimension.
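The BDIP operator above has a simple closed form in the texture-analysis literature: for an M x M block, BDIP = M² minus the sum of intensities divided by the maximum intensity, so it is zero on uniform blocks and grows with local intensity variation. The sketch below (block size and inputs are illustrative; the paper applies the operator to wavelet subbands, not raw pixels) shows that behaviour:

```python
# Hedged sketch of the BDIP (block difference of inverse probabilities)
# operator as commonly defined: BDIP(B) = |B| - sum(B) / max(B).

def bdip(block):
    """BDIP response of a rectangular block given as a list of rows."""
    size = sum(len(row) for row in block)   # number of pixels in the block
    flat = [v for row in block for v in row]
    peak = max(flat)
    if peak == 0:
        return 0.0                          # all-dark block: no variation
    return size - sum(flat) / peak

flat_block = [[8, 8], [8, 8]]     # uniform block: BDIP is exactly 0
edge_block = [[2, 2], [2, 16]]    # strong local variation: BDIP is large
print(bdip(flat_block), bdip(edge_block))
```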
Abstract: Testing accounts for a major share of the technical
effort in the software development process. Typically, it
consumes more than 50 percent of the total cost of developing a
piece of software. The selection of software tests is a very important
activity within this process to ensure the software reliability
requirements are met. Generally tests are run to achieve maximum
coverage of the software code and very little attention is given to the
achieved reliability of the software. Using an existing methodology,
this paper describes how to use Bayesian Belief Networks (BBNs) to
select unit tests based on their contribution to the reliability of the
module under consideration. In particular the work examines how the
approach can enhance test-first development by assessing the quality
of test suites resulting from this development methodology and
providing insight into additional tests that can significantly improve
the achieved reliability. In this way the method can produce an
optimal selection of inputs and the order in which the tests are
executed to maximize the software reliability. To illustrate this
approach, a belief network is constructed for a modern software
system incorporating the expert opinion, expressed through
probabilities of the relative quality of the elements of the software,
and the potential effectiveness of the software tests. The steps
involved in constructing the Bayesian network are explained, as is a
method for accommodating the test suite resulting from test-driven
development.
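The core Bayesian mechanism behind such test selection can be shown in miniature: each passing test raises the posterior belief that the module is reliable. The prior and the pass likelihoods below are invented for the sketch; the paper builds a full belief network from expert opinion rather than this two-hypothesis toy:

```python
# Hedged toy illustration of Bayesian updating of a module's reliability
# from unit-test outcomes. All probabilities are assumptions for the sketch.

def update(prior_reliable, p_pass_if_reliable, p_pass_if_faulty, passed):
    """One Bayesian update of P(module reliable) after a test outcome."""
    if passed:
        like_r, like_f = p_pass_if_reliable, p_pass_if_faulty
    else:
        like_r, like_f = 1 - p_pass_if_reliable, 1 - p_pass_if_faulty
    num = like_r * prior_reliable
    return num / (num + like_f * (1 - prior_reliable))

belief = 0.5                        # sceptical prior about the module
for outcome in [True, True, True]:  # three passing unit tests
    belief = update(belief, 0.99, 0.60, outcome)
print(round(belief, 3))             # belief rises with each passing test
```

A test whose pass probability differs sharply between the reliable and faulty hypotheses moves the posterior most, which is the intuition behind selecting tests by their contribution to reliability.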
Abstract: A comparative analysis of Wald's and Bayes-type sequential methods for testing hypotheses is offered. The merits of the new sequential test are: universality, which consists in optimality (with given criteria) and uniformity of decision-making regions for any number of hypotheses; simplicity, convenience and uniformity of the algorithms that realize them; reliability of the obtained results; and the ability to keep the error probabilities at desired values. Computation results for concrete examples are given, confirming the above-stated characteristics of the new method and characterizing the considered methods relative to each other.
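Wald's sequential test used as the comparison baseline is the sequential probability ratio test, whose stopping thresholds follow directly from the desired error probabilities. The sketch below applies it to Bernoulli observations; the hypotheses and error levels are illustrative, not the paper's examples:

```python
# Hedged sketch of Wald's SPRT for Bernoulli data: accumulate the
# log-likelihood ratio and stop at thresholds set by alpha and beta.

import math

def sprt(samples, p0=0.3, p1=0.7, alpha=0.05, beta=0.05):
    """Return 'H0', 'H1', or 'continue' after the given 0/1 samples."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for x in samples:
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"

print(sprt([1, 1, 1, 1]))  # a run of successes favours H1
print(sprt([0, 0, 0, 0]))  # a run of failures favours H0
```

Unlike a fixed-sample test, the SPRT keeps sampling while the evidence is inconclusive, which is what the uniform decision-making regions of the new method are compared against.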
Abstract: In this paper, we present a comparative study between two computer vision systems for object recognition and tracking. These algorithms describe two different approaches based on regions, constituted by sets of pixels, which parameterize objects in shot sequences. For image segmentation and object detection, the FCM technique is used, and the overlap between cluster distributions is minimized by the use of a suitable color space (other than the RGB one). The first technique takes into account the a priori probabilities governing the computation of the various clusters to track objects. A Parzen kernel method is described that allows identifying the players in each frame; we also show the importance of searching for the standard deviation value of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
Abstract: Parsing is important in Linguistics and Natural
Language Processing to understand the syntax and semantics of a
natural language grammar. Parsing natural language text is
challenging because of problems like ambiguity and inefficiency.
The interpretation of natural language text also depends on
context-based techniques. A probabilistic component is essential to resolve
ambiguity in both syntax and semantics thereby increasing accuracy
and efficiency of the parser. The Tamil language has some inherent
features which make parsing more challenging. To obtain solutions, a
lexicalized and statistical approach is to be applied in the parsing
with the aid of a language model. Statistical models mainly focus on
semantics of the language and are suitable for large-vocabulary
tasks, whereas structural methods focus on syntax and model
small-vocabulary tasks. A statistical language model based on trigrams
for the Tamil language, with a medium vocabulary of 5,000 words, has
been built. Though statistical parsing gives better performance
through tri-gram probabilities and large vocabulary size, it has some
disadvantages such as a focus on semantics rather than syntax and a lack
of support for free word order and long-distance relationships. To
overcome these disadvantages, a structural component is to be
incorporated into statistical language models, which leads to the
implementation of hybrid language models. This paper attempts
to build a phrase-structured hybrid language model that resolves the
above-mentioned disadvantages. In the development of the hybrid
language model, a new part-of-speech tag set for the Tamil language
has been developed with more than 500 tags, giving wider
coverage. A phrase-structured Treebank has been developed with 326
Tamil sentences, covering more than 5,000 words. A hybrid
language model has been trained on the phrase-structured Treebank
using the immediate-head parsing technique. A lexicalized and
statistical parser which employs this hybrid language model and the
immediate-head parsing technique gives better results than pure
grammar- and trigram-based models.
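The trigram probabilities that the statistical component relies on are maximum-likelihood estimates of P(w3 | w1 w2) from counts. The sketch below uses a tiny English stand-in corpus purely for illustration; the paper's model is trained on Tamil text with a 5,000-word vocabulary, which is not reproduced here:

```python
# Hedged toy illustration of MLE trigram probabilities:
# P(w3 | w1 w2) = count(w1 w2 w3) / count(w1 w2 as a trigram context).

from collections import Counter

corpus = [
    "<s> <s> the dog barks </s>".split(),
    "<s> <s> the dog sleeps </s>".split(),
    "<s> <s> the cat sleeps </s>".split(),
]

trigrams = Counter()
contexts = Counter()
for sent in corpus:
    for i in range(len(sent) - 2):
        trigrams[tuple(sent[i:i + 3])] += 1
        contexts[tuple(sent[i:i + 2])] += 1   # bigram counted as a context

def p(w3, w1, w2):
    """MLE trigram probability P(w3 | w1 w2); 0 for unseen contexts."""
    if contexts[(w1, w2)] == 0:
        return 0.0
    return trigrams[(w1, w2, w3)] / contexts[(w1, w2)]

print(p("dog", "<s>", "the"))    # "dog" follows "<s> the" in 2 of 3 sentences
print(p("barks", "the", "dog"))  # "barks" follows "the dog" in 1 of 2
```

Real systems smooth these counts (e.g. backoff or interpolation) so unseen trigrams do not get zero probability; the hybrid model in the paper instead adds phrase structure on top of such statistics.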