Abstract: The main aim of this study was to examine whether
people understand indicative conditionals on the basis of syntactic
factors or on the basis of subjective conditional probability. The
second aim was to investigate whether the conditional probability of
q given p depends on the antecedent and consequent sizes, or derives
from inductive processes that establish a link of plausible
co-occurrence between semantically or experientially associated events.
These competing hypotheses have been tested through a 3 x 2 x 2 x 2
mixed design involving the manipulation of four variables: type of
instructions ("Consider the following statement to be true", "Read the
following statement", and a condition with no conditional statement);
antecedent size (high/low); consequent size (high/low); statement
probability (high/low). The first variable was between-subjects, the
others were within-subjects. The inferences investigated were Modus
Ponens and Modus Tollens. Ninety undergraduates from the Second
University of Naples, with no prior training in logic or conditional
reasoning, participated in the study.
Results suggest that people understand conditionals in a syntactic
way rather than in a probabilistic way, even though the perception of
the conditional probability of q given p is at least partially involved in
the comprehension of conditionals. The results also showed that, in the
presence of a conditional syllogism, inferences are not affected by the
antecedent or consequent sizes. From a theoretical point of view, these
findings suggest that it would be inappropriate to abandon the idea
that conditionals are naturally understood in a syntactic way in favour
of the idea that they are understood in a probabilistic way.
Abstract: Next Generation Wireless Network (NGWN) is
expected to be a heterogeneous network that integrates different
Radio Access Technologies (RATs) through a common platform. A
major challenge is how to allocate users to the most suitable RAT.
An optimized allocation can maximize the efficient use of radio
resources, achieve better performance for service providers,
and provide Quality of Service (QoS) to users at low cost.
Currently, Radio Resource Management (RRM) is implemented
efficiently only for the single RAT for which it was developed, and it
is not suitable for a heterogeneous network. Common RRM (CRRM)
was proposed to manage radio resource utilization in heterogeneous
networks. This paper presents a user-level Markov model for a
network of three co-located RATs. Load-balancing-based and
service-based CRRM algorithms are studied using this Markov model,
and their performance is compared in terms of traffic distribution,
new-call blocking probability, vertical handover (VHO) call dropping
probability, and throughput.
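As a point of reference, the sketch below shows one plausible load-balancing-based selection rule, in which a new call is admitted to the least-loaded RAT that still has free capacity. The RAT names, capacities, and load metric are illustrative assumptions; the paper's Markov model itself is not reproduced.

```python
# Minimal sketch of a load-balancing-based RAT selection rule for a
# heterogeneous network; names and the load metric are illustrative
# assumptions, not the paper's Markov model.

def select_rat(rats):
    """Pick the RAT with the lowest fractional load that can admit the call."""
    candidates = [r for r in rats if r["used"] < r["capacity"]]
    if not candidates:
        return None  # new call is blocked
    return min(candidates, key=lambda r: r["used"] / r["capacity"])

rats = [
    {"name": "GSM",  "capacity": 8,  "used": 6},
    {"name": "UMTS", "capacity": 16, "used": 10},
    {"name": "WLAN", "capacity": 32, "used": 12},
]
print(select_rat(rats)["name"])  # -> WLAN (lowest load fraction)
```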
Abstract: Modern highly automated production systems face
reliability problems. The functional reliability of machines affects
the productivity rate and the efficiency with which expensive
industrial facilities are used. Reliability prediction has become an
important research topic and involves complex mathematical methods
and calculations. The reliability of high-productivity automatic
technological machines, which consist of complex mechanical,
electrical, and electronic components, is critical, since the failure of
these units results in major economic losses for production systems.
The reliability of the transport and feeding systems of automatic
technological machines is also important, because a transport failure
brings the technological machines to a stop. This paper presents
reliability engineering for a feeding system and its components that
transport complex-shaped parts to automatic machines. It also
discusses the calculation of the reliability parameters of the feeding
unit using probability theory. The derived equations for the limits of
the geometrical sizes of feeders and for the probability of transported
parts sticking in the chute express the reliability of feeders as a
function of their geometrical parameters.
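The paper's derived equations are not reproduced here; as background for this kind of component-level analysis, a feeding line in which any single unit failure stops the line behaves as a series system, for which the standard reliability relation (assuming constant failure rates λ_i) is:

```latex
R_{\mathrm{system}}(t) \;=\; \prod_{i=1}^{n} R_i(t),
\qquad R_i(t) = e^{-\lambda_i t}
```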
Abstract: Since dealing with high-dimensional data is
computationally complex and sometimes intractable, several feature
reduction methods have recently been developed to reduce the
dimensionality of the data and simplify analysis in applications such
as text categorization, signal processing, image retrieval, and gene
expression analysis. Among feature reduction techniques, feature
selection is one of the most popular methods because it preserves the
original features.
In this paper, we propose a new unsupervised feature selection
method that removes redundant features from the original feature
space by using the probability density functions of the features. To
show the effectiveness of the proposed method, popular feature
selection methods were implemented and compared against it.
Experimental results on several datasets from the UCI repository
illustrate the effectiveness of the proposed method relative to the
compared methods in terms of both classification accuracy and the
number of selected features.
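As one plausible reading of the PDF-based criterion, the sketch below estimates each feature's density with a histogram and drops any feature whose density is nearly identical to that of a feature already kept. The histogram estimator, the Jensen-Shannon measure, and the threshold are our illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of PDF-based redundancy removal: features whose histogram-
# estimated densities are nearly identical are treated as redundant.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def select_features(X, bins=20, threshold=0.05):
    """Return indices of features kept after removing PDF-redundant ones."""
    lo, hi = X.min(), X.max()  # common grid for all features
    pdfs = [np.histogram(X[:, j], bins=bins, range=(lo, hi))[0]
            for j in range(X.shape[1])]
    pdfs = [h / h.sum() for h in pdfs]
    kept = []
    for j, pj in enumerate(pdfs):
        if all(js_divergence(pj, pdfs[k]) > threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0, 1, 500),      # feature 0
    rng.exponential(1, 500),    # feature 1
    rng.uniform(-3, 3, 500),    # feature 2
    rng.normal(0, 1, 500),      # feature 3: same PDF as feature 0
])
print(select_features(X))       # feature 3 is judged redundant -> [0, 1, 2]
```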
Abstract: This paper aims to extend Jon Kleinberg's research. He introduced a small-world structure on a grid and showed that a greedy algorithm using only local information is able to find a route between source and target in delivery time O(log^2 n). His fundamental model for distributed systems uses a two-dimensional grid with long-range random links added between any two nodes u and v with probability proportional to the distance d(u,v)^-2. We propose that, with additional information about nearby long links, a shorter path can be found. We apply an ant colony system as a messenger that distributes its pheromone, carrying the long-link details, in the surrounding area. The subsequent forwarding decision then has more options: select among the local neighbors, or send to a node whose long link is closer to the target. Our experimental results support this approach: the average routing time with Color Pheromone is shorter than that of the greedy method.
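For concreteness, the sketch below implements the baseline being extended: Kleinberg-style greedy routing on an n x n grid in which each node receives one long-range link drawn with probability proportional to d(u,v)^-2. The ant-colony pheromone extension itself is not reproduced.

```python
# Kleinberg-style greedy routing sketch: grid neighbors plus one
# long-range contact per node, sampled with P(v) ~ d(u, v)^-2.
import random

def manhattan(u, v):
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def long_range_link(u, n):
    """Sample one long-range contact for u with P(v) ~ d(u, v)^-2."""
    nodes = [(x, y) for x in range(n) for y in range(n) if (x, y) != u]
    weights = [manhattan(u, v) ** -2 for v in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

def greedy_route(source, target, n, links):
    """Forward to the neighbor (local or long-range) closest to target."""
    u, hops = source, 0
    while u != target:
        x, y = u
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < n and 0 <= y + dy < n]
        neighbors.append(links[u])
        u = min(neighbors, key=lambda v: manhattan(v, target))
        hops += 1
    return hops

n = 30
links = {(x, y): long_range_link((x, y), n)
         for x in range(n) for y in range(n)}
print(greedy_route((0, 0), (n - 1, n - 1), n, links))  # ~O(log^2 n) hops
```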
Abstract: In this paper we propose a new method for simultaneously
generating multiple quantiles corresponding to given probability
levels from data streams and massive data sets. The method provides
a basis for developing single-pass, low-storage quantile estimation
algorithms that differ in complexity, storage requirements, and
accuracy. We demonstrate that such algorithms may perform well
even for heavy-tailed data.
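To illustrate the class of algorithms referred to (not the specific proposed method), the sketch below tracks several quantiles in a single pass with O(1) storage per probability level, using a stochastic-approximation update whose fixed point satisfies F(q) = p.

```python
# Single-pass, O(1)-storage-per-level quantile tracking via
# stochastic approximation (Robbins-Monro style recursion).
import random

def stream_quantiles(stream, levels, eta=0.01):
    """For each level p, move the estimate up by eta*p on larger samples
    and down by eta*(1-p) on smaller ones; balance occurs at F(q) = p."""
    q = {p: 0.0 for p in levels}
    for x in stream:
        for p in levels:
            if x > q[p]:
                q[p] += eta * p
            else:
                q[p] -= eta * (1.0 - p)
    return q

data = (random.paretovariate(2.0) for _ in range(100_000))  # heavy-tailed
print(stream_quantiles(data, levels=[0.5, 0.9, 0.99]))
```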
Abstract: Graph partitioning is an NP-hard problem with multiple
conflicting objectives: a partitioning should minimize the
inter-partition relationships while maximizing the intra-partition
relationships, and the load should be evenly distributed over the
partitions. It is therefore a multi-objective optimization problem
(MOO). One approach to MOO is Pareto optimization, which is used
in this paper. The methods proposed here to improve performance are
injecting the best solutions of previous runs into the first generation
of subsequent runs, and storing the non-dominated set of previous
generations to combine with the non-dominated sets of later
generations. These improvements prevent the GA from getting stuck
in local optima and increase the probability of finding better
solutions. Finally, a simulation study is carried out to investigate
the effectiveness of the proposed algorithm, and the results confirm
its effectiveness.
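The sketch below illustrates the archiving step: a dominance test and a merge that keeps only non-dominated solutions, so earlier fronts can be combined with later ones as described. The two-objective tuples (cut size, load imbalance) are illustrative assumptions.

```python
# Non-dominated archive maintenance for a Pareto-based GA.
# Objective tuples are minimized, e.g. (cut size, load imbalance).

def dominates(a, b):
    """a dominates b if it is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def merge_nondominated(archive, generation):
    """Merge an old front with a new generation, keeping the Pareto set."""
    pool = archive + generation
    return [s for s in pool
            if not any(dominates(t, s) for t in pool if t is not s)]

old_front = [(10, 0.30), (12, 0.20)]
new_front = [(9, 0.35), (11, 0.25), (13, 0.40)]
print(merge_nondominated(old_front, new_front))  # (13, 0.40) is dropped
```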
Abstract: FlexRay, a communication protocol for automotive
control systems, was developed to meet the increasing demands on
electronic control units for implementing systems with higher safety
and greater comfort. In this work, we study the impact of
radiation-induced soft errors on a FlexRay-based steer-by-wire
system. We injected soft errors into the general-purpose register set
of FlexRay nodes to identify the most critical registers and the
failure modes of the steer-by-wire system, and to measure the
probability distribution of the failure modes when an error occurs in
the register file.
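As an illustration of the injection mechanism, the sketch below flips one random bit in one register of a modeled register file and reports the injection site. The register names and 32-bit width are illustrative assumptions rather than the FlexRay controller's actual register map.

```python
# Single-bit soft-error injection into a modeled register file.
import random

def inject_soft_error(registers, width=32, rng=random):
    """Flip one random bit in one randomly chosen register; return the
    injection site so failures can be attributed to critical registers."""
    name = rng.choice(list(registers))
    bit = rng.randrange(width)
    registers[name] ^= 1 << bit
    return name, bit

regs = {f"r{i}": 0 for i in range(16)}  # modeled general-purpose registers
site = inject_soft_error(regs)
print(site, hex(regs[site[0]]))
```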
Abstract: Hidden failure in a protection system has been
recognized as one of the main causes of power system instability
leading to cascading collapse. This paper presents a computationally
systematic approach for estimating the average probability of a
system cascading collapse, taking into account the probability of
hidden failure in the protection system. The estimated average
probability is then used to determine the severe loading conditions
that contribute to a higher risk of critical cascading collapse. This
information is essential to the utility, since it assists the operator in
determining the highest system loading condition before a critical
cascading collapse occurs.
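A minimal Monte Carlo sketch of this kind of estimation is given below, assuming a purely illustrative system in which each exposed relay misoperates independently with a hidden-failure probability and collapse is declared once a fraction of lines has tripped. The paper's actual system model is not reproduced.

```python
# Monte Carlo estimate of an average cascading-collapse probability
# under protection hidden failures; topology, p_hidden and the collapse
# criterion are illustrative assumptions.
import random

def collapse_probability(n_lines=20, p_hidden=0.05, collapse_frac=0.3,
                         trials=100_000, rng=random):
    collapses = 0
    for _ in range(trials):
        tripped = 1                    # the initiating line outage
        for _ in range(n_lines - 1):   # relays exposed to the disturbance
            if rng.random() < p_hidden:
                tripped += 1           # hidden failure trips another line
        if tripped / n_lines >= collapse_frac:
            collapses += 1
    return collapses / trials

print(collapse_probability())
```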
Abstract: In this paper, a Gaussian multiple-input multiple-output multiple-eavesdropper (MIMOME) channel is considered, in which a transmitter communicates with a receiver in the presence of an eavesdropper. We present a technique for determining the secrecy capacity of the multiple-input multiple-output (MIMO) channel under Gaussian noise. We transform the degraded MIMOME channel into multiple single-input multiple-output (SIMO) Gaussian wire-tap channels and then use a scalar approach to convert them into two equivalent multiple-input single-output (MISO) channels. The secrecy capacity model is then developed for the condition in which the channel state information (CSI) of the main channel only is known to the transmitter. The results show that secret communication is possible when the eavesdropper channel noise is greater than a cutoff noise level. The effect of fading on the outage probability of the secrecy capacity is also analyzed.
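For reference, the secrecy capacity of the Gaussian MIMO wiretap channel takes the standard form below, where H_m and H_e (our notation) denote the main and eavesdropper channels and Q is the transmit covariance under power constraint P:

```latex
C_s \;=\; \max_{\mathbf{Q} \succeq 0,\ \operatorname{tr}(\mathbf{Q}) \le P}
\left[ \log\det\!\left(\mathbf{I} + \mathbf{H}_m \mathbf{Q} \mathbf{H}_m^{\dagger}\right)
     - \log\det\!\left(\mathbf{I} + \mathbf{H}_e \mathbf{Q} \mathbf{H}_e^{\dagger}\right) \right]^{+}
```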
Abstract: This paper reports on the feasibility of the ARMA model
for describing a bursty video source transmitting over an AAL5 ATM
link (VBR traffic). The traffic represents the activity of the action
movie "Lethal Weapon 3" transmitted over the ATM network using
the Fore Systems AVA-200 ATM video codec with a peak rate of 100
Mbps and a frame rate of 25 frames per second. The model parameters
were estimated for a single video source and for independently
multiplexed video sources. It was found that an ARMA(2, 4) model is
well suited to the real data in terms of the average-rate traffic profile,
probability density function, autocorrelation function, burstiness
measure, and pole-zero distribution of the filter model.
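A minimal sketch of fitting an ARMA(2, 4) model of this kind with statsmodels is shown below; the synthetic series is a stand-in for the measured VBR trace, which is not reproduced here.

```python
# Fit an ARMA(2, 4) model to a (synthetic placeholder) rate series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
rates = 40 + 5 * rng.standard_normal(2000)               # placeholder series
rates = np.convolve(rates, np.ones(4) / 4, mode="valid")  # add correlation

model = ARIMA(rates, order=(2, 0, 4))  # ARMA(2, 4) is ARIMA with d = 0
result = model.fit()
print(result.params)                   # AR, MA and variance estimates
print(result.aic)                      # goodness-of-fit comparison metric
```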
Abstract: In order to study the effects of supplemental irrigation, different levels of nitrogen fertilizer, and inoculation with rhizobium bacteria on the grain yield of chickpea, an experiment was carried out using a split-plot arrangement in a randomized complete block design with three replications at the agricultural research station of Zanjan, Iran, during the 2009-2010 cropping season. The experimental factors consisted of irrigation (no irrigation (I1), irrigation at the flowering stage (I2), irrigation at the flowering and grain-filling stages (I3), and full irrigation (I4)) and different levels of nitrogen fertilizer (no nitrogen fertilizer (N0), 75 kg ha-1 (N75), 150 kg ha-1 (N150), and inoculation with rhizobium bacteria (N4)). The analysis of variance showed that the effects of irrigation, nitrogen fertilizer levels, and bacterial inoculation on the number of pods per plant, number of grains per plant, grain weight, grain yield, biological yield, and harvest index were significant at the 1% probability level. The results also showed that grain yield under full irrigation with rhizobium inoculation was significantly higher than in the other treatments.
Abstract: A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using a probabilistic grid representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thereby converted into a peak-picking problem on the grid map. Unlike most existing data fusion methods, the JPDM method does not need association processing and does not lead to combinatorial explosion. Its convergence to the Cramér-Rao lower bound (CRLB) with a diminishing grid size has been proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
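The grid-based idea can be sketched as follows: each sensor contributes a likelihood over a common grid, the per-cell joint density is their product, and the fused estimate is the peak cell. The Gaussian measurement models, grid size, and measurement values below are illustrative assumptions, not the JPDM formulation itself.

```python
# Grid-based fusion sketch: multiply per-sensor likelihood grids,
# then pick the peak cell as the fused target estimate.
import numpy as np

grid = 200
xs, ys = np.meshgrid(np.linspace(0, 100, grid), np.linspace(0, 100, grid))

def sensor_likelihood(pos, sigma):
    """Gaussian uncertainty region around one sensor's measurement."""
    return np.exp(-((xs - pos[0])**2 + (ys - pos[1])**2) / (2 * sigma**2))

joint = np.ones((grid, grid))
for measurement, sigma in [((41, 58), 6.0), ((44, 55), 4.0), ((39, 57), 8.0)]:
    joint *= sensor_likelihood(measurement, sigma)

i, j = np.unravel_index(np.argmax(joint), joint.shape)
print(xs[i, j], ys[i, j])  # fused estimate: peak of the joint grid
```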
Abstract: Recently, the analysis and design of structures based on
reliability theory has been the center of attention. The reason for this
attention is the inherently random nature of structural parameters
such as material specifications, external loads, and geometric
dimensions. By means of reliability theory, uncertainties resulting
from the statistical nature of the structural parameters can be
translated into mathematical equations, and safety and operational
considerations can be incorporated into the design process.
According to this theory, it is possible to study the failure probability
not only of a specific element but also of the entire system. Therefore,
after the safety of every element has been verified, their reciprocal
effects on the safety of the entire system can be investigated.
Abstract: The lack of an inherent "natural" dissimilarity measure
between objects in a categorical dataset presents special difficulties
for clustering analysis. However, each categorical attribute of a given
dataset provides natural probability and information in the sense of
Shannon. In this paper, we propose a novel method that heuristically
converts categorical attributes to numerical values by exploiting this
associated information. We conduct an experimental study on a
real-life categorical dataset, and the experiment demonstrates the
effectiveness of our approach.
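One Shannon-flavored conversion consistent with this description, offered here only as an illustrative reading rather than the paper's exact mapping, replaces each category value with its self-information -log2 p(value) estimated from the attribute's empirical frequencies:

```python
# Replace each category value with its empirical self-information.
import math
from collections import Counter

def to_self_information(column):
    """Map each categorical value v to -log2 p(v), p from frequencies."""
    freq = Counter(column)
    n = len(column)
    return [-math.log2(freq[v] / n) for v in column]

colors = ["red", "red", "red", "blue", "green", "red", "blue", "red"]
print(to_self_information(colors))
# rare categories ("green") map to large values, common ones to small
```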
Abstract: Network on a chip (NoC) has been proposed as a viable solution to counter the inefficiency of buses in current VLSI on-chip interconnects. However, as silicon chips accommodate more transistors, the probability of transient faults increases, making fault tolerance a key concern in scaling chips. In packet-based on-chip communication, transient failures can corrupt data packets and hence undermine the accuracy of data communication. In this paper, we present a comparative analysis of transient-fault-tolerant techniques, including end-to-end, node-by-node, and stochastic communication based on the flooding principle.
Abstract: Mendelian Disease Genes represent a collection of single points of failure for the various systems they constitute. Such genes have been shown, on average, to encode longer proteins than 'non-disease' proteins. Existing models suggest that this results from the increased likelihood of longer genes undergoing mutations. Here, we show that in saturated mutagenesis experiments performed on model organisms, in which the likelihood of each gene mutating is one, a similar relationship between length and the probability of a gene being lethal is observed. We thus suggest an extended model demonstrating that the likelihood of a mutated gene producing a severe phenotype is length-dependent. Using the occurrence of conserved domains, we provide evidence that this dependency results from a correlation between protein length and the number of functions a protein performs. We propose that protein length thus serves as a proxy for protein cardinality in the different networks required for the organism's survival and well-being. We use this example to argue that the collection of Mendelian Disease Genes can, and should, be used to study the rules governing systems vulnerability in living organisms.
Abstract: The performance of a dual maximal-ratio combining
receiver is analyzed for M-ary coherent and non-coherent
modulations over correlated Nakagami-m fading channels with
non-identical and arbitrary fading parameters. The classical
probability density function (PDF) based approach is used for the
analysis. Expressions for the outage probability and the average
symbol error performance of M-ary coherent and non-coherent
modulations are obtained. The results are verified against published
special-case results and found to match. The effects of unequal
fading parameters, branch correlation, and unequal input average
SNR on the receiver performance are studied.
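In the PDF-based approach, the outage probability follows directly from the density p_γ of the combiner output SNR γ as the probability of falling below a threshold γ_th:

```latex
P_{\mathrm{out}} \;=\; \Pr\{\gamma < \gamma_{\mathrm{th}}\}
\;=\; \int_{0}^{\gamma_{\mathrm{th}}} p_{\gamma}(\gamma)\, d\gamma
```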
Abstract: The fault-proneness of a software module is the
probability that the module contains faults. To predict the
fault-proneness of modules, different techniques have been proposed,
including statistical methods, machine learning techniques, neural
network techniques, and clustering techniques. The aim of the
proposed study is to explore whether metrics available in the early
lifecycle (i.e., requirement metrics), metrics available in the late
lifecycle (i.e., code metrics), and the combination of the two can be
used to identify fault-prone modules using a Genetic Algorithm
technique. This approach has been tested on real-time defect datasets
of NASA software projects written in the C programming language.
The results show that the fusion of requirement and code metrics is
the best prediction model for detecting faults, compared with the
commonly used code-based model.
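The sketch below illustrates the general shape of such a GA-based search, with a chromosome as a bit mask over fused metric columns and fitness as the hold-out accuracy of a simple nearest-centroid classifier. The toy data, classifier, and GA operators are illustrative assumptions, not the study's setup.

```python
# GA-based selection over fused requirement + code metric columns.
import random
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 8))        # 4 requirement + 4 code metrics
y = (X[:, 1] + X[:, 5] > 0).astype(int)  # fault-prone iff two metrics high
train, test = slice(0, 150), slice(150, 200)

def fitness(mask):
    """Hold-out accuracy of a nearest-centroid classifier on masked columns."""
    cols = [i for i, b in enumerate(mask) if b]
    if not cols:
        return 0.0
    Xtr, Xte = X[train][:, cols], X[test][:, cols]
    c0, c1 = Xtr[y[train] == 0].mean(0), Xtr[y[train] == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) <
            np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == y[test]).mean()

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(30)]
for _ in range(40):  # elitist selection plus bit-flip mutation
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [[b ^ (random.random() < 0.1) for b in p]
                      for p in pop[:10] for _ in range(2)]
best = max(pop, key=fitness)
print(best, fitness(best))  # selected metric mask and its accuracy
```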
Abstract: Seismic feedback experience in Algeria has shown a high
percentage of damage in non-code-conforming reinforced concrete
(RC) buildings. Furthermore, the vulnerability of these buildings is
aggravated by the presence of many factors (e.g., the weak seismic
capacity of these buildings, short columns, the pounding effect, etc.).
Consequently, seismic risk assessments were carried out on
populations of buildings to identify those most likely to suffer losses
during an earthquake. The results of such studies are important for
the mitigation of losses under future seismic events, as they allow
strengthening interventions and disaster management plans to be
drawn up.
Within this paper, the state of the existing structures is assessed using
the "vulnerability index" method. This method allows the
classification of RC constructions taking into account both structural
and non-structural parameters, considered to be among the main
parameters governing the vulnerability of a structure. Based on
seismic feedback from past earthquakes, damage probability matrices
(DPM) were also developed.
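In the GNDT-type formulation commonly associated with this method (given here as background, since the paper's exact parameters and weights are not reproduced), the index is a weighted sum over the n surveyed parameters, with K_i the class score assigned to parameter i and W_i its weight:

```latex
I_v \;=\; \sum_{i=1}^{n} K_i \, W_i
```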