Abstract: When the failure function is monotone, monotonic reliability methods can be used to greatly simplify and facilitate the reliability computations. However, these methods often work in a transformed iso-probabilistic space, so a monotonic simulator or transformation is needed in order that the transformed failure function remains monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents some conditions, involving copulas and dependence concepts, under which the transformed function is still monotone in the new space. In many engineering applications, Gaussian copulas are used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with a Gaussian copula.
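The transformation discussed above can be illustrated in the bivariate case. The sketch below, which is not the paper's code, maps independent standard normals to a pair with a Gaussian copula of correlation rho via the Cholesky factor of the 2x2 correlation matrix; for fixed U1 the second output is increasing in U2 (and in U1 when rho >= 0), which is the kind of componentwise monotonicity at issue:

```python
import math
import random

def gaussian_copula_pair(rho, n, seed=0):
    """Map independent standard normals (U1, U2) to a dependent pair
    with a Gaussian copula of correlation rho, using
    Z1 = U1 and Z2 = rho*U1 + sqrt(1 - rho^2)*U2."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u1, u2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((u1, rho * u1 + math.sqrt(1 - rho * rho) * u2))
    return out

# With rho = 0.8 the sample correlation of the pairs should be near 0.8.
pairs = gaussian_copula_pair(0.8, 5000)
```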
Abstract: The aim of this paper is to introduce a parametric
distribution model in fatigue life reliability analysis dealing with
variation in material properties. Service loads in terms of a response-time
history signal of Belgian pave were replicated on a multi-axial
spindle coupled road simulator and stress-life method was used to
estimate the fatigue life of automotive stub axle. A PSN curve was
obtained by monotonic tension test and two-parameter Weibull
distribution function was used to acquire the mean life of the
component. A Pearson system was developed to evaluate the fatigue
life reliability by considering stress range intercept and slope of the
PSN curve as random variables. Assuming a normal distribution of fatigue strength, the fatigue life of the stub axle is found to have the highest reliability between 10,000 and 15,000 cycles. Taking
into account the variation of material properties associated with the
size effect, machining and manufacturing conditions, the method
described in this study can be effectively applied in determination of
probability of failure of mass-produced parts.
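The two-parameter Weibull quantities used above can be sketched as follows; the shape and scale values are hypothetical placeholders, not the fitted values from the study:

```python
import math

def weibull_mean_life(beta, eta):
    """Mean life of a two-parameter Weibull distribution:
    E[N] = eta * Gamma(1 + 1/beta), with shape beta and scale eta."""
    return eta * math.gamma(1.0 + 1.0 / beta)

def weibull_reliability(n, beta, eta):
    """Survival probability R(n) = exp(-(n/eta)^beta)."""
    return math.exp(-((n / eta) ** beta))

# Hypothetical shape/scale values for illustration only.
beta, eta = 2.0, 12000.0
print(weibull_mean_life(beta, eta))
print(weibull_reliability(10000, beta, eta))
```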
Abstract: In this paper, we propose a novel algorithm for
delineating the endocardial wall from a human heart ultrasound scan.
We assume that the gray levels in the ultrasound images are
independent and identically distributed random variables with
different Rician Inverse Gaussian (RiIG) distributions. Both synthetic
and real clinical data will be used to test the algorithm. Its performance will be evaluated in two ways: first, by an expert radiologist's assessment of a soft copy of the ultrasound scan during the scanning process, and second, by the doctor's conclusion after reviewing a printed copy of the same scan. Successful implementation of this algorithm should make it possible to differentiate normal from abnormal soft tissue and help identify a disease, determine its stage, and decide how best to treat the patient. We hope that an automated system that uses this
algorithm will be useful in public hospitals especially in Third World
countries where problems such as shortage of skilled radiologists and
shortage of ultrasound machines are common. These public hospitals
are usually the first and last stop for most patients in these countries.
Abstract: A perfect secret-sharing scheme is a method to distribute a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, while the joint share of the participants in any unqualified subset is statistically independent of the secret. The collection of all qualified subsets is called the access structure of the perfect secret-sharing scheme. In a graph-based access structure, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing the access structure based on G is defined as AR = (Σ_{v∈V(G)} H(v)) / (|V(G)| · H(s)), where s is the secret, H(v) denotes the entropy of the share of participant v (the secret and the shares are random variables), and H is the Shannon entropy. The infimum of the average information ratio over all perfect secret-sharing schemes realizing a given access structure is called the optimal average information ratio of that access structure. Most known results about the optimal average information ratio give upper or lower bounds on it. In this paper we present access structures based on bipartite graphs and determine the exact values of the optimal average information ratio for some infinite classes of them.
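The average information ratio formula can be computed directly once the entropies are known. The following sketch uses a toy scheme with hypothetical figures (a 1-bit secret and 1-bit shares), not any scheme from the paper:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum p*log2(p) over outcomes with nonzero probability."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_information_ratio(share_entropies, secret_entropy):
    """AR = (sum of share entropies) / (|V(G)| * H(s))."""
    return sum(share_entropies) / (len(share_entropies) * secret_entropy)

# Toy example: a uniform 1-bit secret shared among 3 participants,
# each share also uniform on 2 values (hypothetical figures).
h_s = shannon_entropy([0.5, 0.5])  # 1 bit
print(average_information_ratio([h_s, h_s, h_s], h_s))  # 1.0 for this toy case
```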
Abstract: We intend to point out the differences which exist
between the classical Gini concentration coefficient and a proposed
bipolarization index defined for an arbitrary random variable which
has finite support.
In fact Gini's index measures only the "poverty degree" for the
individuals from a given population taking into consideration their
wages. The Gini coefficient is not very sensitive to the significant income variations within the "rich people class".
In practice there are multiple interdependent relations between the
pauperization and the socio-economical polarization phenomena. The
presence of a strong pauperization aspect inside the population
often induces a polarization effect in the society. But the
pauperization and the polarization phenomena are not identical. For
this reason it isn't always adequate to use a Gini type coefficient,
based on the Lorenz order, to estimate the bipolarization level of the
individuals from the studied population.
The present paper emphasizes these ideas by considering two
families of random variables which have linear or triangular type
distributions. In addition, the continuous variation, depending on the
parameter "time" of the chosen distributions, could simulate a real
dynamical evolution of the population.
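For reference, the classical Gini coefficient mentioned above can be computed from the mean absolute difference; this is a standard formula, not the paper's bipolarization index:

```python
def gini(values):
    """Gini coefficient via the mean-absolute-difference formula:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(values)
    mean = sum(values) / n
    mad = sum(abs(x - y) for x in values for y in values)
    return mad / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))   # 0.0: perfect equality
print(gini([0, 0, 0, 10]))  # 0.75: strong concentration
```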
Abstract: One of the purposes of the robust method of
estimation is to reduce the influence of outliers in the data, on the
estimates. The outliers arise from gross errors or contamination from
distributions with long tails. The trimmed mean is a robust estimate.
This means that it is not sensitive to violation of distributional
assumptions of the data. It is called an adaptive estimate when the
trimming proportion is determined from the data rather than being
fixed a priori.
The main objective of this study is to find out the robustness
properties of the adaptive trimmed means in terms of efficiency, high
breakdown point and influence function. Specifically, it seeks to find
out the magnitude of the trimming proportion of the adaptive
trimmed mean which will yield efficient and robust estimates of the
parameter for data which follow a modified Weibull distribution with
parameter λ = 1/2 , where the trimming proportion is determined by a
ratio of two trimmed means defined as the tail length. Secondly, the
asymptotic properties of the tail length and the trimmed means are
also investigated. Finally, a comparison is made on the efficiency of
the adaptive trimmed means in terms of the standard deviation for the
trimming proportions and when these were fixed a priori.
The asymptotic tail lengths defined as the ratio of two trimmed
means and the asymptotic variances were computed by using the
formulas derived. The values of the standard deviations for the derived tail lengths, for samples of size 40 simulated from a Weibull distribution, were computed over 100 iterations using a computer program written in the Pascal language.
The findings of the study revealed that the tail lengths of the
Weibull distribution increase in magnitude as the trimming proportions increase; the measure of the tail length and the adaptive trimmed mean are asymptotically independent as the number of observations n approaches infinity; the tail
length is asymptotically distributed as the ratio of two independent
normal random variables, and the asymptotic variances decrease as
the trimming proportions increase. The simulation study revealed
empirically that the standard error of the adaptive trimmed mean
using the ratio of tail lengths is relatively smaller for different values
of trimming proportions than its counterpart when the trimming
proportions were fixed a priori.
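A symmetric trimmed mean and a tail-length ratio of the kind described above can be sketched as follows; the trimming proportions in `tail_length` are illustrative choices, not necessarily those used in the study:

```python
def trimmed_mean(data, alpha):
    """Symmetric alpha-trimmed mean: drop the lowest and highest
    floor(alpha * n) observations, then average the rest."""
    xs = sorted(data)
    k = int(alpha * len(xs))
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return sum(kept) / len(kept)

def tail_length(data, a1=0.05, a2=0.25):
    """Tail-length measure as a ratio of two trimmed means
    (illustrative proportions; the study's exact choices may differ)."""
    return trimmed_mean(data, a1) / trimmed_mean(data, a2)

sample = [0.1, 0.4, 0.5, 0.6, 0.7, 0.9, 1.1, 1.4, 2.0, 9.0]
print(trimmed_mean(sample, 0.1))  # the outlier 9.0 is trimmed away
print(tail_length(sample))        # > 1 for this right-skewed sample
```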
Abstract: Electrical substation components are often subject to degradation due to over-voltage or over-current caused by a short circuit or lightning. Particular interest is given to the circuit breaker, owing to the importance of its function and the danger of its failure. This component degrades gradually with use, and it is also subject to a shock process resulting from the stress of isolating the fault when a short circuit occurs in the system. In this paper, based on failure-mechanism developments, the wear-out of the circuit breaker contacts is modeled. The aim of this work is to evaluate its reliability and consequently its residual lifetime. The shock process is based on two random variables: the arrival times of shocks and their magnitudes. The arrival of shocks was modeled using a homogeneous Poisson process (HPP). By simulation, the dates of short-circuit arrivals were generated together with their magnitudes. The same principle of simulation is applied to the amount of cumulative contact wear. The objective is to derive a formulation of the wear function depending on the number of solicitations of the circuit breaker.
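An HPP shock simulation of the kind described can be sketched as below; the magnitude distribution here is an illustrative assumption (exponential), since the abstract does not specify it:

```python
import random

def simulate_shocks(rate, horizon, seed=0):
    """Simulate a homogeneous Poisson process of short-circuit shocks
    over [0, horizon]: exponential inter-arrival times with the given
    rate, each shock carrying a random magnitude (exponential
    magnitudes are an illustrative assumption)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            break
        arrivals.append((t, rng.expovariate(1.0)))  # (time, magnitude)
    return arrivals

shocks = simulate_shocks(rate=2.0, horizon=10.0)
cumulative_wear = sum(m for _, m in shocks)  # cumulative shock damage
print(len(shocks), round(cumulative_wear, 3))
```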
Abstract: Fuzzy random variables (FRVs) have been introduced as an imprecise concept of numeric values for characterizing imprecise knowledge. Descriptive parameters can be used to describe the primary features of a set of fuzzy random observations. In fuzzy environments, expected values are usually represented as fuzzy-valued, interval-valued or numeric-valued descriptive parameters using various metrics. Instead of the area-metric concept usually adopted in the relevant studies, a numeric expected value based on a distance-metric concept is proposed in this study, built on the two characters (fuzziness and randomness) of FRVs. Compared with existing measures, the results show that the proposed numeric expected value is the same as those using the different metrics when only triangular membership functions are used. However, the proposed approach has the advantages of intuitiveness and computational efficiency when the membership functions are not of triangular type. An example with three datasets is provided to verify the proposed approach.
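By way of analogy only, one common numeric summary of a triangular fuzzy number is its centroid; the sketch below uses this standard defuzzification and hypothetical observations, and is not the paper's distance-metric definition:

```python
def triangular_centroid(a, b, c):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c):
    the center of gravity of the triangle, (a + b + c) / 3. This is one
    common numeric summary; the paper's distance-metric expected value
    is defined differently and only illustrated here by analogy."""
    return (a + b + c) / 3.0

def numeric_expected_value(fuzzy_obs):
    """Average the defuzzified values over fuzzy random observations."""
    return sum(triangular_centroid(*o) for o in fuzzy_obs) / len(fuzzy_obs)

obs = [(1, 2, 3), (2, 3, 5), (0, 1, 2)]  # hypothetical triangular FRVs
print(numeric_expected_value(obs))
```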
Abstract: Mathematical programming has been applied to various
problems. For many practical problems, the assumption that the parameters involved are deterministically known is often unjustified. In
such cases, these data contain uncertainty and are thus represented
as random variables, since they represent information about the
future. Decision-making under uncertainty involves potential risk.
Stochastic programming is a commonly used method for optimization
under uncertainty. A stochastic programming problem with recourse
is referred to as a two-stage stochastic problem. In this study, we
consider a stochastic programming problem with simple integer
recourse in which the value of the recourse variable is restricted to a
multiple of a nonnegative integer. An algorithm based on a dynamic slope scaling procedure is developed for solving this problem, using a property of the expected recourse function. Numerical experiments
demonstrate that the proposed algorithm is quite efficient. The
stochastic programming model defined in this paper is quite useful
for a variety of design and operational problems.
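A toy version of a two-stage problem with simple integer recourse can be sketched as follows, where the shortfall must be covered in nonnegative integer units and the expected recourse cost is estimated by Monte Carlo over sampled scenarios; the grid search here is a naive baseline, not the dynamic slope scaling procedure of the paper:

```python
import math
import random

def expected_total_cost(x, c, q, demands):
    """First-stage cost c*x plus expected simple integer recourse:
    the shortfall (d - x)+ must be covered in nonnegative integer
    units, so the recourse cost is q * ceil(max(0, d - x))."""
    recourse = sum(q * math.ceil(max(0.0, d - x)) for d in demands)
    return c * x + recourse / len(demands)

# Hypothetical problem data: unit first-stage cost 1, recourse cost 3,
# demand uniform on [0, 20], approximated by 2000 sampled scenarios.
rng = random.Random(1)
demands = [rng.uniform(0, 20) for _ in range(2000)]
best_x = min(range(0, 31),
             key=lambda x: expected_total_cost(x, 1.0, 3.0, demands))
print(best_x)
```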
Abstract: This paper proposes a smart design strategy for a sequential detector to reliably detect the primary user's signal, especially in fast-fading environments. We study the computation of the log-likelihood ratio for coping with fast-changing received-signal and noise sample variances, which are considered random variables. First, we analyze the detectability of the conventional generalized log-likelihood ratio (GLLR) scheme when considering fast-changing statistics of unknown parameters caused by fast-fading effects. Secondly, we propose an efficient sensing algorithm for performing the sequential probability ratio test in a robust and efficient manner when the channel statistics are unknown. Finally, the proposed scheme is compared with the conventional method through simulation, in terms of the average number of samples required to reach a detection decision.
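The classical sequential probability ratio test underlying the abstract can be sketched as follows; the Gaussian mean-shift LLR in the example is an illustrative toy statistic, not the paper's GLLR:

```python
import math

def sprt(samples, llr, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate
    log-likelihood ratios until crossing a decision threshold.
    Returns ('H1' or 'H0', number of samples used), or
    ('undecided', n) if the data run out first."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    s = 0.0
    for n, x in enumerate(samples, start=1):
        s += llr(x)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", len(samples)

# Toy Gaussian mean-shift test: H1 mean 1 vs H0 mean 0, unit variance,
# so llr(x) = x - 0.5 (illustrative; not the paper's GLLR statistic).
decision, n = sprt([1.2, 0.9, 1.1, 1.3, 0.8, 1.0, 1.2, 0.7],
                   lambda x: x - 0.5)
print(decision, n)  # → H1 6
```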
Abstract: This paper proposes, implements and evaluates an original discretization method for continuous random variables, in order to estimate the reliability of systems for which stress and strength are defined as complex functions and whose reliability is not derivable through analytic techniques. The method is compared with two other discretization approaches from the literature, in a comparative study involving four engineering applications. The results show that the proposal is very efficient in terms of closeness of the estimates to the true (simulated) reliability. In the study we analyzed both a normal and a non-normal distribution for the random variables: the method is theoretically suitable for any parametric family.
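To illustrate the general idea, the sketch below discretizes two normal variables with one simple point-and-weight scheme (not the paper's method) and estimates the stress-strength reliability P(strength > stress):

```python
import math

def discretize_normal(mu, sigma, n=7, span=3.0):
    """Discretize a normal variable into n equally spaced points over
    mu +/- span*sigma, weighting each point by the probability mass of
    its surrounding interval (one simple scheme; the paper's method
    differs in how points and weights are chosen)."""
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    pts = [mu - span * sigma + i * (2 * span * sigma) / (n - 1)
           for i in range(n)]
    edges = [-math.inf] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [math.inf]
    wts = [cdf(edges[i + 1]) - cdf(edges[i]) for i in range(n)]
    return pts, wts

def reliability(stress, strength):
    """P(strength > stress) from two discretized variables."""
    sp, sw = stress
    tp, tw = strength
    return sum(w1 * w2 for x, w1 in zip(sp, sw)
               for y, w2 in zip(tp, tw) if y > x)

# Hypothetical stress N(100, 10) and strength N(150, 15).
R = reliability(discretize_normal(100, 10), discretize_normal(150, 15))
print(round(R, 4))  # close to the exact value Phi(50 / sqrt(325)) ~ 0.997
```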
Abstract: We study the spatial design of experiments, where we want to select the most informative subset, of prespecified size, from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics, and statistical geology, where observations can be collected at different locations and possibly at different times. In spatial design, when the design region and the set of interest are discrete, the covariance matrix completely describes any objective function, and our goal is to choose a feasible design that minimizes the resulting uncertainty. The problem is recast as maximizing the determinant of the covariance matrix of the chosen subset, which is NP-hard. When these designs are used in computer experiments, the design space is in many cases very large and it is not possible to calculate the exact optimal solution. Heuristic optimization methods can discover efficient experiment designs in situations where traditional designs cannot be applied, exchange methods are ineffective, and an exact solution is not possible. We developed a genetic algorithm (GA) to take advantage of its exploratory power. The successful application of this method is demonstrated in a large design space: we consider a real case of design of experiments in which the design space is very large, and we solve the problem using the proposed GA.
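The determinant-maximization objective can be illustrated with a small mutation-only evolutionary search of the GA family; the operators, the squared-exponential covariance on a 1-D grid, and all parameter values below are illustrative assumptions, not the paper's algorithm or data:

```python
import math
import random

def subset_logdet(cov, idx):
    """log det of the principal submatrix cov[idx, idx] via Cholesky
    (assumes the covariance matrix is positive definite)."""
    sub = [[cov[i][j] for j in idx] for i in idx]
    n = len(idx)
    L = [[0.0] * n for _ in range(n)]
    logdet = 0.0
    for i in range(n):
        for j in range(i + 1):
            s = sub[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(s)
                logdet += 2.0 * math.log(L[i][i])
            else:
                L[i][j] = s / L[j][j]
    return logdet

def ga_max_det(cov, k, pop=30, gens=60, seed=0):
    """Evolutionary search over k-subsets: fitness is the log det of
    the chosen principal submatrix; mutation swaps one chosen point
    for an unchosen one; the best half survives each generation."""
    rng = random.Random(seed)
    m = len(cov)
    popn = [sorted(rng.sample(range(m), k)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda s: -subset_logdet(cov, s))
        survivors = popn[:pop // 2]
        children = []
        for s in survivors:
            child = set(s)
            child.remove(rng.choice(list(child)))  # drop one point
            child.add(rng.choice([i for i in range(m) if i not in child]))
            children.append(sorted(child))
        popn = survivors + children
    return max(popn, key=lambda s: subset_logdet(cov, s))

# Squared-exponential covariance on a 1-D grid of 10 candidate sites.
grid = [i / 9 for i in range(10)]
cov = [[math.exp(-8 * (a - b) ** 2) + (0.01 if a == b else 0.0)
        for b in grid] for a in grid]
best = ga_max_det(cov, 4)
print(best)  # spread-out sites beat clustered ones like [0, 1, 2, 3]
```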
Abstract: A complex statistical analysis of stresses in the concrete slab of a real type of rigid pavement is performed. The computational model of the pavement is designed as a spatial (3D) model based on a nonlinear variant of the finite element method that respects the structural nonlinearity, enables modeling of different arrangements of joints, and allows the entire model to be loaded by thermal load. The interaction of adjacent slabs in joints and the contact between the slab and the subsequent layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, the additional subgrade layers, and soil to a depth of about 3 m are modeled. The thickness of individual layers, the physical and mechanical properties of materials, the characteristics of joints, and the temperature of the upper and lower surface of the slabs are assumed to be random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used for the statistical analysis. As results, the estimates of basic statistics of the principal stresses σ1 and σ3 at 53 points on the upper and lower surface of the slabs are obtained.
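Basic Latin hypercube sampling, the family to which the Updated LHS technique above belongs, can be sketched as follows (a plain LHS on the unit cube; the Updated variant refines how strata are re-used across runs and is not reproduced here):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Latin hypercube sample of n points in [0,1)^dims: each axis is
    split into n equal strata and every stratum is used exactly once."""
    rng = random.Random(seed)
    sample = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)
        sample.append([(s + rng.random()) / n for s in strata])
    return list(zip(*sample))  # n points, each with `dims` coordinates

pts = latin_hypercube(20, 3)
print(len(pts))  # 20 simulations, as in the study
```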
Abstract: In this article we establish a moment inequality for dependent random variables, as well as some theorems on the strong law of large numbers and complete convergence for sequences of dependent random variables. In particular, the Marcinkiewicz law of large numbers for independent and identically distributed random variables is generalized to the case of m0-dependent sequences.
Abstract: In this paper a stochastic scenario-based model predictive control applied to molten salt storage systems in a concentrated solar tower power plant is presented. The main goal of this study is to build a tool to analyze current and expected future resources for evaluating the weekly power to be advertised on the electricity secondary market. This tool will allow the plant operator to maximize profits while hedging against the impact on the system of stochastic variables such as resource or sunlight shortage.
Solving the problem first requires a mixed logical dynamical modeling of the plant. The two stochastic variables, the incoming sunlight energy and the electricity demands from the secondary market, are modeled by least-squares regression. Robustness is achieved by drawing a number of realizations of the random variables and applying the most restrictive one to the system. This scenario-approach control technique provides the plant operator with a confidence interval containing a given percentage of possible stochastic-variable realizations, in such a way that robust control is always achieved within its bounds. The results obtained from many trajectory simulations show the existence of a "reliable" interval, which experimentally confirms the robustness of the algorithm.
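The "most restrictive realization" idea can be sketched in miniature as taking a conservative quantile of sampled scenarios; the Gaussian energy scenarios and the 95% coverage level below are illustrative assumptions, not the plant's data:

```python
import random

def robust_bound(draws, coverage=0.95):
    """Scenario approach in miniature: draw stochastic-variable
    realizations and keep the most restrictive one within the desired
    coverage level (here, a lower quantile of incoming energy)."""
    xs = sorted(draws)
    k = max(0, int((1 - coverage) * len(xs)) - 1)
    return xs[k]  # all but the worst (1 - coverage) share lie above this

rng = random.Random(7)
energy_scenarios = [max(0.0, rng.gauss(100, 20)) for _ in range(500)]
print(round(robust_bound(energy_scenarios), 2))
```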
Abstract: A novel path planning approach is presented to solve for the optimal path in stochastic, time-varying networks given a priori traffic information. Most existing studies make use of dynamic programming to find the optimal path. However, those methods have proved unable to obtain the global optimum; moreover, designing efficient algorithms is another challenge.
This paper employs a decision theoretic framework for defining
optimal path: for a given source S and destination D in urban transit
network, we seek an S - D path of lowest expected travel time
where the link travel times are discrete random variables. To address the deficiencies of dynamic programming methods, such as the curse of dimensionality and violation of the optimality principle, an integer programming model is built to assign the discrete travel time variables to arcs. Simultaneously, pruning techniques are also applied to reduce the computational complexity of the algorithm. The final
experiments show the feasibility of the novel approach.
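As a baseline for the S - D problem above, when minimizing expected travel time alone, Dijkstra's algorithm on the expected link times already suffices; the sketch below uses a tiny hypothetical network with made-up discrete travel-time distributions (the paper's integer programming model targets the harder settings where this reduction does not apply):

```python
import heapq

def min_expected_time_path(adj, source, dest):
    """Dijkstra on expected link travel times: each edge carries a
    discrete random travel time given as (value, probability) pairs,
    and we minimize the sum of edge expectations."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, pmf in adj.get(u, []):
            ev = sum(t * p for t, p in pmf)  # expected edge time
            nd = d + ev
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dest
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[dest]

# Tiny transit network with hypothetical discrete travel-time PMFs.
adj = {
    "S": [("A", [(4, 0.5), (6, 0.5)]), ("B", [(2, 1.0)])],
    "A": [("D", [(1, 1.0)])],
    "B": [("D", [(5, 0.5), (7, 0.5)])],
}
print(min_expected_time_path(adj, "S", "D"))  # → (['S', 'A', 'D'], 6.0)
```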