Abstract: In this article, we consider the estimation of P[Y < X] when the strength X and the stress Y are independent Burr Type XII random variables. The MLE of R is obtained via a simple iterative procedure. Assuming the common parameter is known, the maximum likelihood estimator, the uniformly minimum variance unbiased estimator, and the Bayes estimator of P[Y < X] are discussed. An exact confidence interval for R is also obtained. Monte Carlo simulations are performed to compare the proposed methods.
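As a quick illustration of the stress-strength quantity above: when X and Y are Burr Type XII with a common shape parameter c, R = P[Y < X] has the closed form k2/(k1 + k2), which a crude Monte Carlo check reproduces (all parameter values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rburr(n, c, k, rng):
    """Sample Burr XII via the inverse CDF: F(x) = 1 - (1 + x^c)^(-k)."""
    u = rng.random(n)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

c, k1, k2 = 2.0, 1.5, 2.5          # common shape c; illustrative values
x = rburr(200_000, c, k1, rng)     # strength
y = rburr(200_000, c, k2, rng)     # stress
r_mc = (y < x).mean()              # Monte Carlo estimate of R
r_exact = k2 / (k1 + k2)           # closed form under a common c
```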
Abstract: Recently, Spherical Motion Models (SMMs) have been introduced [1]. These new models were developed for 3D local landmark-based Autonomous Navigation (AN). This paper presents new arguments and experimental results supporting the characteristics of SMMs; the accuracy and robustness in performing a specific task are the main concerns of the new investigations. To analyze the performance of the SMMs, two powerful tools of estimation theory, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), which provide reliable estimates in noisy environments, have been employed. Monte Carlo validation runs have also been employed to test the stability and robustness of the models.
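In the linear scalar case, the filters mentioned above reduce to the classic Kalman recursion; a minimal sketch of that special case (the noise variances are assumptions, not values from [1]):

```python
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.01, 0.25          # process / measurement noise variances (assumed)
x_true, x_est, p = 0.0, 0.0, 1.0

for _ in range(200):
    x_true += rng.normal(0.0, np.sqrt(q))     # true state drifts
    z = x_true + rng.normal(0.0, np.sqrt(r))  # noisy landmark-style observation
    p += q                                    # predict: covariance grows
    k = p / (p + r)                           # Kalman gain
    x_est += k * (z - x_est)                  # update: correct toward measurement
    p *= 1.0 - k                              # update: covariance shrinks

err = abs(x_est - x_true)
```

The EKF and UKF replace this linear update with, respectively, a local linearization and a deterministic sigma-point propagation.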
Abstract: Estimating the reliability of a computer network has been a subject of great interest. It is a well-known fact that this problem is NP-hard. In this paper we present a very efficient combinatorial approach for Monte Carlo reliability estimation of a network with unreliable nodes and unreliable edges. Its core is the computation of certain network combinatorial invariants. These invariants, once computed, directly provide a simple framework for the computation of network reliability. As a specific case of this approach we obtain tight lower and upper bounds for distributed network reliability (the so-called residual connectedness reliability). We also present some simulation results.
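A crude Monte Carlo estimator for residual connectedness reliability, the specific case mentioned above, might look as follows (the paper's combinatorial invariants are not reproduced here; this is plain state sampling):

```python
import random

def residual_reliability(n, edges, p_node, p_edge, trials, seed=1):
    """Fraction of sampled states in which the surviving nodes
    induce a connected subgraph (crude Monte Carlo)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        up = [rng.random() < p_node for _ in range(n)]
        live = [(u, v) for (u, v) in edges
                if up[u] and up[v] and rng.random() < p_edge]
        nodes = [i for i in range(n) if up[i]]
        if not nodes:
            continue
        adj = {i: [] for i in nodes}
        for u, v in live:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {nodes[0]}, [nodes[0]]   # BFS from one surviving node
        while stack:
            cur = stack.pop()
            for nxt in adj[cur]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        if len(seen) == len(nodes):
            ok += 1
    return ok / trials

r = residual_reliability(3, [(0, 1), (1, 2), (0, 2)], 0.9, 0.9, 20_000)
```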
Abstract: This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, not a precise motion model. The observation model we have developed avoids color filtering of the entire image. That, together with the Monte Carlo techniques inside the particle filter, provides real-time performance. Experiments with two real cameras are presented and lessons learned are discussed. The approach scales easily to more than two cameras and to new sensor cues.
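A minimal bootstrap particle filter in one dimension illustrates the predict-weight-resample loop underlying such trackers (the color likelihood is replaced by a simple Gaussian, and all noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5_000
true_x = 0.0
particles = rng.normal(0.0, 1.0, N)            # initial belief about position

for step in range(30):
    true_x += 0.1                               # object drifts (unknown to filter)
    z = true_x + rng.normal(0.0, 0.2)           # noisy camera-like measurement
    particles += rng.normal(0.0, 0.1, N)        # predict: diffuse random-walk motion
    w = np.exp(-0.5 * ((z - particles) / 0.2) ** 2)  # Gaussian observation likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]     # resample by weight

estimate = particles.mean()
```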
Abstract: Missing data poses many analysis challenges. In the case of a complex survey design, in addition to dealing with missing data, researchers need to account for the sampling design to achieve useful inferences. Methods for incorporating sampling weights in neural network imputation were investigated to account for complex survey designs. An estimate of variance that accounts for the imputation uncertainty as well as the sampling design using neural networks is provided. A simulation study was conducted to compare estimation results based on complete case analysis, multiple imputation using Markov Chain Monte Carlo, and neural network imputation. Furthermore, a public-use dataset was used as an example to illustrate neural network imputation under a complex survey design.
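Once multiple imputed datasets exist, their estimates are typically pooled with Rubin's rules, combining within- and between-imputation variance; a sketch with illustrative numbers:

```python
from statistics import mean, variance

def rubin_combine(estimates, variances):
    """Rubin's rules: pool point estimates and variances from m imputations."""
    m = len(estimates)
    qbar = mean(estimates)             # pooled point estimate
    w = mean(variances)                # within-imputation variance
    b = variance(estimates)            # between-imputation variance (n-1 divisor)
    t = w + (1 + 1 / m) * b            # total variance of the pooled estimate
    return qbar, t

# Three hypothetical imputed-data estimates and their variances
qbar, t = rubin_combine([10.2, 9.8, 10.5], [0.4, 0.5, 0.45])
```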
Abstract: To simulate the phenomenon of electronic transport in semiconductors, a numerical method must be adapted; the one most frequently used is the Monte Carlo method. In this work, we apply this method to the ternary alloy semiconductor GaInP in its cubic form. The calculations are made using a non-parabolic effective-mass energy band model. We consider a conduction band with three valleys (Γ, L, X), and the major scattering mechanisms are taken into account in this modeling, namely the interactions with acoustic phonons (elastic collisions) and optical phonons (inelastic collisions). Polar optical phonons cause anisotropic intra-valley collisions, which are very probable in III-V semiconductors; other, non-polar optical phonons allow inter-valley transitions. We first present the full results obtained by Monte Carlo simulation of GaInP in the stationary regime. We then consider the effects of applying an electric field that varies in time, and thus study the transient phenomena that appear in this ternary material.
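The core of one ensemble Monte Carlo transport step is drawing a free-flight time from the total scattering rate and then selecting a mechanism in proportion to its partial rate; a sketch with entirely hypothetical rates:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical partial scattering rates (1/s) for three mechanisms,
# e.g. polar optical, acoustic, and inter-valley phonon scattering.
rates = np.array([2.0e13, 5.0e12, 1.0e12])
gamma = rates.sum()                 # total scattering rate

n = 100_000
# Free-flight duration: exponential with mean 1/gamma
t_free = -np.log1p(-rng.random(n)) / gamma
# Mechanism selection: proportional to the partial rates
mech = rng.choice(len(rates), size=n, p=rates / gamma)
mean_t = t_free.mean()
```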
Abstract: BEAMnrc was used to calculate the spectrum and HVL of an X-ray beam during low-energy X-ray radiation using the tube model SRO 33/100/ROT 350 Philips. The results of the BEAMnrc simulation and of measurements were compared to IPEM report number 78 and to the SpekCalc software. Three tube voltages, 127, 103, and 84 kV, were used. In these simulations, a tungsten anode with a 1.2 mm Be window was used as the source. HVLs were calculated from the BEAMnrc spectrum with the air kerma method for four different filters. One billion original particles were used for all BEAMnrc simulations. The results show that for 127 kV, the difference between BEAMnrc and measurements ranged from a minimum of 0.7% to a maximum of 5.2%; the difference between BEAMnrc and IPEM ranged from 2.3% to 9.1%; and the difference between BEAMnrc and SpekCalc ranged from 2.8% to 3.2%. The results show that BEAMnrc was able to satisfactorily predict the quantities of a low-energy beam as well as high-energy X-ray radiation.
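The air-kerma HVL computation mentioned above amounts to finding the filter thickness that halves the kerma-weighted transmission; a sketch using bisection (the spectrum and attenuation values are placeholders, not BEAMnrc data):

```python
import math

def kerma_fraction(spectrum, t_mm):
    # Fraction of air kerma transmitted through t_mm of filter material;
    # spectrum is a list of (kerma_weight, mu_per_mm) pairs.
    total = sum(w for w, _ in spectrum)
    return sum(w * math.exp(-mu * t_mm) for w, mu in spectrum) / total

def hvl(spectrum, hi=100.0, iters=60):
    # Bisection for the thickness at which transmitted kerma drops to one half.
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if kerma_fraction(spectrum, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Monoenergetic sanity check: HVL must equal ln(2) / mu.
mono = [(1.0, 0.2)]      # one line with mu = 0.2 per mm (placeholder value)
h = hvl(mono)
```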
Abstract: Three-dimensional nanostructured materials have attracted the attention of many researchers because of the possibility of applying them in near-future devices for sensors, catalysis, and energy-related applications. Tin dioxide is the most widely used material for gas sensing because its three-dimensional nanostructures and properties are related to the large surface exposed to gas adsorption. We propose the use of branched SnO2 nanowhiskers in interaction with ethanol; all Sn atoms are symmetric. The total, potential, and kinetic energies were calculated for the interaction between SnO2 and ethanol at different distances and temperatures. The calculations were carried out using Langevin dynamics and Monte Carlo simulation. The total energy increased with the addition of ethanol molecules and with temperature, so the interactions between them are endothermic.
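A minimal overdamped Langevin integrator of the kind used for such energy calculations, here for a single harmonic degree of freedom (all constants are illustrative; the SnO2-ethanol potential is not modeled):

```python
import numpy as np

rng = np.random.default_rng(4)
kT, k_spring, gamma, dt = 1.0, 1.0, 1.0, 0.01   # reduced units (assumed)
n_steps, burn_in = 200_000, 20_000

x = 0.0
samples = np.empty(n_steps)
noise = rng.normal(0.0, 1.0, n_steps)
sigma = np.sqrt(2.0 * kT * dt / gamma)          # fluctuation-dissipation amplitude
for i in range(n_steps):
    # Euler-Maruyama step of overdamped Langevin dynamics in a harmonic well
    x += -(k_spring / gamma) * x * dt + sigma * noise[i]
    samples[i] = x

var = samples[burn_in:].var()   # stationary variance should approach kT / k_spring
```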
Abstract: In this paper, a model for forecasting project cost performance is developed based on past project cost and time performance. This study presents a probabilistic project control concept to assure an acceptable forecast of project cost performance. In this concept, project activities are classified into sub-groups called control accounts. A Stochastic S-Curve (SS-Curve) is then obtained for each sub-group, and the project SS-Curve is obtained by summing the sub-groups' SS-Curves. In this model, project cost uncertainties are represented by Beta distribution functions of the project activity costs required to complete the project at every selected time section of project accomplishment, which are extracted from a variety of sources. Based on this model, after a percentage of project progress, the project performance is measured via Earned Value Management to adjust the initial cost probability distribution functions. The future project cost performance is then predicted using the Monte Carlo simulation method.
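The activity-cost sampling described above can be sketched with PERT-style Beta distributions (the three control accounts and their low/mode/high costs are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def pert_cost(low, mode, high, size, rng):
    # PERT-style Beta on [low, high]; the mean works out to (low + 4*mode + high) / 6.
    a = 1.0 + 4.0 * (mode - low) / (high - low)
    b = 1.0 + 4.0 * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(a, b, size)

n = 100_000
# Three invented control accounts with (low, mode, high) cost estimates
total = (pert_cost(80, 100, 140, n, rng)
         + pert_cost(40, 50, 70, n, rng)
         + pert_cost(20, 30, 35, n, rng))
mean_cost = total.mean()
p80 = np.quantile(total, 0.8)   # budget with roughly 80 % chance of no overrun
```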
Abstract: The object of this work is the probabilistic performance evaluation of safety instrumented systems (SIS), i.e. the average probability of dangerous failure on demand (PFDavg) and the average frequency of dangerous failure (PFH), taking into account the uncertainties related to the different parameters that come into play: failure rate (λ), common cause failure proportion (β), diagnostic coverage (DC), etc. This leads to an accurate and safe assessment of the safety integrity level (SIL) inherent to the safety function performed by such systems. This aim is in keeping with the requirements of the IEC 61508 standard with respect to handling uncertainty. To do this, we propose an approach that combines (1) Monte Carlo simulation and (2) fuzzy sets. Indeed, the former method is appropriate where representative statistical data are available (using pdfs of the related parameters), while the latter applies in cases characterized by vague and subjective information (using membership functions). The proposed approach is fully supported by a suitable computer code.
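A sketch of the Monte Carlo half of such an approach: propagating a lognormal uncertainty on the failure rate through the simple 1oo1 approximation PFDavg ≈ λ_DU·TI/2 (the rate, spread, and test interval are assumed values, and the fuzzy-set part is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)
TI = 8760.0                       # proof-test interval in hours (assumed: 1 year)
# Lognormal uncertainty on the dangerous undetected failure rate (hypothetical)
lam = rng.lognormal(mean=np.log(1.0e-6), sigma=0.5, size=100_000)

pfd = lam * TI / 2.0              # 1oo1 architecture: PFDavg ~ lambda_DU * TI / 2
pfd_mean = pfd.mean()
p90 = np.quantile(pfd, 0.9)       # an upper percentile supports a "safe" SIL claim
```

The resulting PFDavg distribution (here centered in the SIL 2 band, 1e-3 to 1e-2) is what the SIL assessment is read from.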
Abstract: In this paper we propose two new confidence intervals for the normal population mean with known coefficient of variation. This situation commonly occurs in environmental and agricultural experiments, where scientists know the coefficient of variation of their experiments. The two intervals are based on the earlier work of Searls [5] and on a new method proposed here for the first time. We derive analytic expressions for the coverage probability and the expected length of each confidence interval. Monte Carlo simulation is used to assess the performance of these intervals based on their expected lengths.
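Coverage and expected length of a confidence interval are routinely assessed by simulation as described above; a sketch for the ordinary t-interval baseline (not the new intervals themselves):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n, reps = 10.0, 2.0, 10, 20_000
t975 = 2.262                     # 97.5 % Student-t quantile, 9 degrees of freedom

hits = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    half = t975 * x.std(ddof=1) / np.sqrt(n)   # half-length of the t-interval
    if abs(x.mean() - mu) <= half:
        hits += 1
coverage = hits / reps                          # should be close to 0.95
```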
Abstract: The Reverse Monte Carlo (RMC) simulation is applied to the study of the aqueous electrolyte LiCl·6H2O. On the basis of the available experimental neutron scattering data, RMC computes pair radial distribution functions in order to explore the structural features of the system. The results obtained include some unrealistic features. To overcome this problem, we use Hybrid Reverse Monte Carlo (HRMC), which incorporates an energy constraint in addition to the constraints commonly derived from experimental data. Our results show good agreement between experimental and computed partial distribution functions (PDFs) as well as a significant improvement in the pair partial distribution curves. This kind of study can be considered a useful test of a given interaction model for conventional simulation techniques.
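HRMC move acceptance combines the usual χ²-against-experiment criterion with a Metropolis-style energy penalty; a sketch of one plausible form of that test (the relative weighting of the two terms is an assumption, not the paper's exact rule):

```python
import math
import random

def hrmc_accept(chi2_old, chi2_new, e_old, e_new, kT, rng):
    # Accept with probability min(1, exp(-dchi2/2 - dE/kT)):
    # the chi^2 term enforces agreement with the experimental PDFs,
    # the energy term penalizes physically unrealistic configurations.
    arg = -0.5 * (chi2_new - chi2_old) - (e_new - e_old) / kT
    return arg >= 0.0 or rng.random() < math.exp(arg)

rng = random.Random(8)
# A move improving both the experimental fit and the energy is always accepted.
always = hrmc_accept(10.0, 8.0, -5.0, -6.0, 1.0, rng)
```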
Abstract: In this work, the propagation of uncertainty during the calibration process of TRANUS, an integrated land use and transport model (ILUTM), has been investigated. Which input parameters most affect the variation of the outputs has also been examined through a sensitivity analysis. Moreover, a probabilistic verification methodology for the calibration process, which equates the observed and calculated production, has been proposed. The model chosen as an application is that of the city of Grenoble, France. For the sensitivity analysis and uncertainty propagation, the Monte Carlo method was employed, and a statistical hypothesis test was used for verification. The parameters of the induced demand function in TRANUS were assumed to be uncertain in the present case. It was found that, if TRANUS converges during calibration, then with high probability the calibration process is verified. Moreover, a weak correlation was found between the inputs and the outputs of the calibration process. The total effect of the inputs on the outputs was investigated, and the output variation was found to be dictated by only a few input parameters.
Abstract: This paper considers inference under progressive type II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When both parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators in a Monte Carlo simulation study.
Abstract: This paper presents an online method that learns the
corresponding points of an object from un-annotated grayscale images
containing instances of the object. In the first image processed, an ensemble of node points is automatically selected and then matched in the subsequent images. A Bayesian posterior
distribution for the locations of the nodes in the images is formed.
The likelihood is formed from Gabor responses and the prior assumes
the mean shape of the node ensemble to be similar in a translation
and scale free space. An association model is applied for separating
the object nodes and background nodes. The posterior distribution is
sampled with a Sequential Monte Carlo method. The matched object
nodes are inferred to be the corresponding points of the object
instances. The results show that our system matches the object nodes
as accurately as other methods that train the model with annotated
training images.
Abstract: Lattice Monte Carlo methods are an excellent
choice for the simulation of non-linear thermal diffusion
problems. In this paper, and for the first time, Lattice Monte
Carlo analysis is performed on thermal diffusion combined
with convective heat transfer. Laminar flow of water, modeled as an incompressible fluid, inside a copper pipe with a constant surface temperature is considered. For the simulation of thermal conduction, the temperature dependence of the thermal conductivity of water is accounted for. Using the
novel Lattice Monte Carlo approach, temperature distributions
and energy fluxes are obtained.
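Lattice Monte Carlo diffusion rests on random walkers whose mean-square displacement reproduces the diffusion equation; a minimal 2D check (uniform lattice, no convection and no temperature-dependent conductivity):

```python
import numpy as np

rng = np.random.default_rng(9)
n_walkers, n_steps = 5_000, 400
# Each walker moves one lattice site along a random axis direction per time step.
steps = rng.integers(0, 4, size=(n_walkers, n_steps))
dx = np.where(steps == 0, 1, np.where(steps == 1, -1, 0))
dy = np.where(steps == 2, 1, np.where(steps == 3, -1, 0))
x = dx.sum(axis=1)
y = dy.sum(axis=1)
msd = (x.astype(float) ** 2 + y.astype(float) ** 2).mean()
# Unbiased 2D walk: <r^2> = n_steps in lattice units, i.e. D = 1/4 per step.
```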
Abstract: When binary decision diagrams are formed from
uniformly distributed Monte Carlo data for a large number of
variables, the complexity of the decision diagrams exhibits a
predictable relationship to the number of variables and minterms. In
the present work, a neural network model has been used to analyze the
pattern of shortest path length for a larger number of Monte Carlo data
points. The neural model shows a strong descriptive power for the
ISCAS benchmark data with an RMS error of 0.102 for the shortest
path length complexity. Therefore, the model can be considered as a
method for predicting path length complexities; this is expected to lead
to minimum time complexity of very large-scale integrated circuits
and related computer-aided design tools that use binary decision
diagrams.
Abstract: This paper presents a Reliability-Based Topology
Optimization (RBTO) based on Evolutionary Structural Optimization
(ESO). An actual design involves uncertain conditions such as material properties, operational loads, and dimensional variations. Deterministic Topology Optimization (DTO) is obtained without considering these uncertainties. RBTO, however, involves the evaluation of probabilistic constraints, which can be done in two different ways: the reliability index approach (RIA) and the performance measure approach (PMA). The limit state function is approximated using Monte Carlo simulation and Central Composite Design for the reliability analysis. ESO, one of the
topology optimization techniques, is adopted for topology
optimization. Numerical examples are presented to compare the DTO
with RBTO.
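The Monte Carlo part of such a reliability analysis can be sketched on a linear limit state g = R − S with normal variables, where the reliability index has a known closed form (all distribution parameters below are illustrative):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(10)
n = 1_000_000
# Illustrative linear limit state g = R - S with normal resistance and load
R = rng.normal(10.0, 1.0, n)
S = rng.normal(6.0, 1.0, n)
pf = float(np.mean(R - S < 0.0))            # Monte Carlo failure probability
beta = -NormalDist().inv_cdf(pf)            # reliability index from pf
beta_exact = (10.0 - 6.0) / np.sqrt(2.0)    # closed form for this linear case
```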
Abstract: Among various HLM techniques, the Multivariate Hierarchical Linear Model (MHLM) is desirable to use, particularly when multivariate criterion variables are collected and the covariance structure holds information valuable for data analysis. In order to reflect prior information or to obtain stable results when the sample size and the number of groups are not sufficiently large, the Bayes method has often been employed in hierarchical data analysis. Although the Markov Chain Monte Carlo (MCMC) method is a rather powerful tool for parameter estimation in these cases, MCMC procedures have not been formulated for the MHLM. For this reason, this research presents concrete procedures for parameter estimation through the use of Gibbs samplers. Lastly, several future topics for the use of the MCMC approach for HLM are discussed.
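The Gibbs sampler alternates draws from full conditional distributions; the standard bivariate-normal toy case (not the MHLM itself) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(11)
rho, n = 0.8, 50_000
sd = np.sqrt(1.0 - rho ** 2)
x = y = 0.0
xs, ys = np.empty(n), np.empty(n)
for i in range(n):
    # Full conditionals of a standard bivariate normal with correlation rho:
    # x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y | x.
    x = rng.normal(rho * y, sd)
    y = rng.normal(rho * x, sd)
    xs[i], ys[i] = x, y

# After burn-in, the chain's sample correlation should recover rho.
corr = np.corrcoef(xs[1_000:], ys[1_000:])[0, 1]
```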
Abstract: Quantitative trait loci (QTL) experiments have yielded
important biological and biochemical information necessary for
understanding the relationship between genetic markers and
quantitative traits. For many years, most QTL algorithms only
allowed one observation per genotype. Recently, there has been an
increasing demand for QTL algorithms that can accommodate more
than one observation per genotypic distribution. The Bayesian
hierarchical model is very flexible and can easily incorporate this
information into the model. Herein a methodology is presented that
uses a Bayesian hierarchical model to capture the complexity of the
data. Furthermore, the Markov chain Monte Carlo model composition
(MC3) algorithm is used to search and identify important markers. An
extensive simulation study illustrates that the method captures the
true QTL, even under nonnormal noise and up to 6 QTL.