Abstract: Edgeworth approximation, the bootstrap, and Monte Carlo simulation have a considerable impact on obtaining results for many kinds of problems. In this paper, we treat a financial case: the effect that the components of the cash flow statement of one of the most successful businesses in the world (financing activities, operating activities, and investing activities) have on cash and cash equivalents at the end of each three-month period. To obtain a better view of this case, we created a Vector Autoregression (VAR) model and then generated the impulse responses in terms of asymptotic analysis (Edgeworth approximation), Monte Carlo simulation, and the residual bootstrap, based on the standard errors of each series. The generated results show common tendencies across the three methods, which consequently verifies their advantage in the optimization of a model that contains many variables.
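The abstract does not spell out the bootstrap procedure, so the following is a minimal sketch of a residual bootstrap of impulse responses for a VAR(1) in Python/NumPy; the lag order, the simulated data, and the replication count are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def fit_var1(y):
    """OLS fit of a VAR(1): y_t = c + A @ y_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # rows: [c; A.T]
    return B[0], B[1:].T, y[1:] - X @ B             # intercept, A, residuals

def irf(A, horizon):
    """Impulse responses of a VAR(1) are powers of the coefficient matrix."""
    return np.stack([np.linalg.matrix_power(A, h) for h in range(horizon + 1)])

def bootstrap_irf(y, horizon=8, reps=500, seed=0):
    """Residual bootstrap: resample fitted residuals, rebuild the series,
    refit, and collect the impulse responses of each replication."""
    rng = np.random.default_rng(seed)
    c, A, resid = fit_var1(y)
    draws = np.empty((reps, horizon + 1, y.shape[1], y.shape[1]))
    for r in range(reps):
        e = resid[rng.integers(0, len(resid), len(resid))]  # resample residuals
        yb = np.empty_like(y)
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = c + A @ yb[t - 1] + e[t - 1]            # rebuild the series
        draws[r] = irf(fit_var1(yb)[1], horizon)
    return irf(A, horizon), np.percentile(draws, [2.5, 97.5], axis=0)

# Demo on placeholder data (two series, 120 periods), not the paper's cash-flow data:
y = np.cumsum(np.random.default_rng(42).standard_normal((120, 2)), axis=0) * 0.1
point, (lo, hi) = bootstrap_irf(y)
print(point.shape, lo.shape)
```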
Abstract: In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair and correct future prices are probabilistic. This paper examines three different stochastic differential equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. The derived martingale option price valuation equations for the SDE models were then solved numerically with the Monte Carlo method, implemented in MATLAB. Furthermore, numerical examples using published Nigerian Stock Exchange (NSE) All-Share Index data show the effect of an increase in the underlying asset value (stock price) on the value of a European put option for these models. The results show that an increase in the stock price yields a decrease in the European put option price. This guides the option holder in making a sound decision about not exercising the right conferred by the option.
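As a hedged illustration of the pricing step, the sketch below Monte Carlo prices a European put under the CEV model via Euler-Maruyama discretization; the paper reports a MATLAB implementation, so this Python version is an assumed analogue, and all parameter values are illustrative, not NSE data.

```python
import numpy as np

def cev_put_mc(S0, K, r, sigma, beta, T, n_steps=252, n_paths=100_000, seed=1):
    """Monte Carlo price of a European put under the CEV model
    dS = r*S*dt + sigma*S**beta*dW, via Euler-Maruyama discretization."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        S += r * S * dt + sigma * np.maximum(S, 0.0) ** beta * dW
        S = np.maximum(S, 0.0)            # absorb paths at the zero boundary
    payoff = np.maximum(K - S, 0.0)       # European put payoff at maturity
    disc = np.exp(-r * T)
    price = disc * payoff.mean()
    stderr = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

# Illustrative parameters only (not NSE data):
print(cev_put_mc(S0=100, K=100, r=0.05, sigma=0.2, beta=0.9, T=1.0))
```

Repricing with a larger S0 shows the effect the abstract reports: a higher stock price lowers the put value.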
Abstract: With the rapid development of modern industry, people have begun to pay attention to the environmental pollution and harm caused by industrial dust. Against this background, a numerical study of dedusting technology for industrial environments was conducted. Dynamic models of multicomponent particle collision and coagulation, breakage, and deposition were developed, and the interaction of water droplets and aerosol particles in a two-dimensional flow field was studied with the Eulerian-Lagrangian method and the Multi-Monte Carlo method. The effects of droplet size, droplet speed, and flow field structure on scavenging efficiency were analyzed. The results show that, under the conditions considered, droplets of 30 μm have the best scavenging efficiency. At an initial droplet speed of 1 m/s, the droplets and aerosol particles have more time to interact, which yields a better scavenging efficiency for the particles.
Abstract: This paper compares estimators of the mean of a normal distribution obtained by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the sample average, the Bayes estimator is derived from a prior distribution, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. Hypothesis testing is then used to check the robustness of the estimators. Data are simulated from a normal distribution with true mean 2 and variance 4, 9, or 16, with sample sizes of 10, 20, 30, and 50. The results show that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 and the variance is 16. Furthermore, the Bayes estimator, computed from a prior distribution with mean 1 and variance 12, shows a significant difference in the mean for variance 9 at sample sizes 10 and 20.
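A minimal sketch of the MCMC step described above, assuming a semi-conjugate setup: Gibbs sampling of (mu, sigma^2) with a normal prior on the mean (mean 1, variance 12, as in the abstract) and an assumed inverse-gamma prior on the variance; the hyperparameters a0, b0 and the simulated data are illustrative.

```python
import numpy as np

def gibbs_normal_mean(x, mu0=1.0, tau2=12.0, a0=2.0, b0=2.0,
                      n_iter=5000, burn=1000, seed=0):
    """Gibbs sampling for (mu, sigma2) under priors
    mu ~ N(mu0, tau2) and sigma2 ~ InvGamma(a0, b0)."""
    rng = np.random.default_rng(seed)
    n, xbar = len(x), x.mean()
    mu, sigma2 = xbar, x.var()
    draws = []
    for i in range(n_iter):
        # mu | sigma2, x  ~  Normal (conjugate update)
        var = 1.0 / (n / sigma2 + 1.0 / tau2)
        mean = var * (n * xbar / sigma2 + mu0 / tau2)
        mu = rng.normal(mean, np.sqrt(var))
        # sigma2 | mu, x  ~  Inverse-Gamma (sampled as 1/Gamma)
        a = a0 + n / 2.0
        b = b0 + 0.5 * np.sum((x - mu) ** 2)
        sigma2 = 1.0 / rng.gamma(a, 1.0 / b)
        if i >= burn:
            draws.append(mu)
    return np.mean(draws)          # MCMC estimator of the mean

x = np.random.default_rng(7).normal(2.0, 3.0, size=20)   # true mean 2, variance 9
print(gibbs_normal_mean(x))
```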
Abstract: The ionization yield of ion tracks in polymers and bio-molecular systems reaches a maximum, known as the Bragg peak, close to the end of the ion trajectories. Along the path of the ions through the materials, many electrons are generated, which produce a cascade of further ionizations and, consequently, a shower of secondary electrons. Among these, very low energy secondary electrons can produce damage in the biomolecules by dissociative electron attachment. This work deals with the calculation of the energy distribution of electrons produced by protons in a sample of polymethylmethacrylate (PMMA), a material that is used as a phantom for living tissues in hadron therapy. PMMA is also of relevance for microelectronics in CMOS technologies and as a photoresist mask in electron beam lithography. We present a Monte Carlo code that, starting from a realistic description of the energy distribution of the electrons ejected by protons moving through PMMA, simulates the entire cascade of generated secondary electrons. By following in detail the motion of all these electrons, we find the radial distribution of the energy that they deposit in PMMA for several initial proton energies characteristic of the Bragg peak.
Abstract: This paper presents a novel algorithm for modeling photovoltaic-based distributed generators for the purpose of optimal planning of distribution networks. The proposed algorithm utilizes a sequential Monte Carlo method in order to accurately account for the stochastic nature of photovoltaic-based distributed generators. The algorithm is implemented in the MATLAB environment, and the results obtained are presented and discussed.
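The algorithmic details are not given in the abstract; the following is a loosely hedged sketch of the general idea, sampling hourly irradiance from hour-specific Beta distributions (a common choice in the PV-planning literature, not necessarily the paper's) and mapping it to PV output over many simulated years. All parameters are hypothetical.

```python
import numpy as np

def simulate_pv_output(hours=24, years=1000, p_rated=1.0, g_std=1000.0, seed=0):
    """Monte Carlo sketch: sample hourly solar irradiance from hour-specific
    Beta distributions (hypothetical shape parameters, near-zero at night)
    and map it linearly to PV power, accumulating an hourly output
    distribution over many simulated years."""
    rng = np.random.default_rng(seed)
    # Hypothetical Beta(a, b) shape parameters per hour of the day.
    a = np.clip(np.sin(np.pi * (np.arange(hours) - 6) / 12), 0, None) * 5 + 0.1
    b = np.full(hours, 2.0)
    g = rng.beta(a, b, size=(years, hours)) * g_std      # irradiance, W/m^2
    p = p_rated * np.minimum(g / g_std, 1.0)             # simple linear PV model
    return p.mean(axis=0), p.std(axis=0)                 # hourly mean and std

mean_p, std_p = simulate_pv_output()
print(mean_p.round(3))
```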
Abstract: Progress in the integrated circuit industry in recent years has been driven by the continuous miniaturization of transistors. As component dimensions shrink to 0.1 micron and below, new physical effects come into play that standard two-dimensional (2D) simulators do not consider. The third dimension becomes relevant because the transverse and longitudinal dimensions of the components are of the same order of magnitude. To describe the operation of such components with greater fidelity, simulation tools must be refined and adapted to take these phenomena into account. After an analytical study of the static characteristics of the component under its different operating modes, a numerical simulation of a GaInP MESFET field-effect transistor with a submicron gate is performed. The influence of the gate length is studied. The results are used to determine the optimal geometric and physical parameters of the component for specific applications and uses.
Abstract: This paper presents a novel statistical description of the counterpoise effective length under lightning surges, where the (impulse) effective length is obtained by means of regressive formulas applied to transient simulation results. The effective length is described in terms of a statistical distribution function, from which the median, mean, variance, and other parameters of interest can be readily obtained. The influence of lightning current amplitude, lightning front duration, and soil resistivity on the effective length is accounted for by treating these parameters as statistical in nature. A method for determining the optimal counterpoise length, in terms of the statistical impulse effective length, is also presented; it is based on estimating the number of dangerous events associated with lightning strikes. The proposed statistical description and the associated method provide valuable information which can aid the design engineer in optimising the physical lengths of counterpoises for different grounding arrangements and soil resistivities.
Abstract: Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy into a target (i.e., optical fluence). Consequently, the target temperature rises and thermal expansion occurs, which generates a photoacoustic (PA) signal. Most image reconstruction algorithms for PAI assume uniform fluence within the imaged object. However, the optical fluence distribution within the object is known to be non-uniform, which can affect the reconstruction of PA images. In this study, we have investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest. The non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the non-uniform fluence case is 23% wider than in the uniform case, and its frequency spectrum is missing some high-frequency components in comparison with the uniform case. Consequently, the image reconstructed with the non-uniform fluence exhibits a strong smoothing effect.
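A minimal sketch of the Monte Carlo light-propagation step, assuming a standard absorption-weighting photon random walk with isotropic scattering in 2-D; the optical coefficients and geometry are illustrative, not the paper's tissue model.

```python
import numpy as np

def mc_fluence_2d(n_photons=50_000, mu_a=0.1, mu_s=10.0, grid=50, size=1.0, seed=0):
    """Minimal 2-D Monte Carlo photon transport: photons enter at the top
    centre of a square slab, take exponentially distributed steps, deposit
    the absorbed fraction of their weight in each cell, and scatter
    isotropically.  mu_a and mu_s are in 1/cm and purely illustrative."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    fluence = np.zeros((grid, grid))
    h = size / grid
    for _ in range(n_photons):
        x, y, w = size / 2, 0.0, 1.0          # start at top centre, weight 1
        theta = np.pi / 2                     # heading into the slab
        while w > 1e-4:
            s = -np.log(rng.random()) / mu_t  # exponential free path length
            x += s * np.cos(theta); y += s * np.sin(theta)
            if not (0 <= x < size and 0 <= y < size):
                break                         # photon leaves the slab
            i, j = int(y / h), int(x / h)
            fluence[i, j] += w * mu_a / mu_t  # deposit the absorbed fraction
            w *= mu_s / mu_t                  # survival weighting
            theta = rng.uniform(0, 2 * np.pi) # isotropic scattering angle
    return fluence

print(mc_fluence_2d(n_photons=1000).sum())    # total absorbed weight
```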
Abstract: In this paper, a model for forecasting project cost performance is developed based on past project cost and time performance. The study presents a probabilistic project control concept to assure an acceptable forecast of project cost performance. In this concept, project activities are classified into sub-groups called control accounts. A Stochastic S-Curve (SS-Curve) is then obtained for each sub-group, and the project SS-Curve is obtained by summing the sub-group SS-Curves. In this model, project cost uncertainties are represented by Beta distribution functions of the project activity costs required to complete the project, evaluated at selected time sections throughout project accomplishment and extracted from a variety of sources. Based on this model, after a given percentage of project progress, project performance is measured via Earned Value Management to adjust the initial cost probability distribution functions. Future project cost performance is then predicted using the Monte Carlo simulation method.
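A minimal sketch of the Monte Carlo step under the stated Beta-distribution assumption: each control account's cost is drawn from a scaled Beta distribution and the draws are summed to obtain the project cost distribution at one time section. All account parameters are hypothetical.

```python
import numpy as np

def project_cost_distribution(accounts, n_sims=100_000, seed=0):
    """Monte Carlo sketch of the SS-Curve idea: each control account's
    remaining cost is Beta(a, b) scaled to [low, high]; the project cost
    is the sum across accounts.  All parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n_sims)
    for a, b, low, high in accounts:
        total += low + (high - low) * rng.beta(a, b, n_sims)
    return np.percentile(total, [10, 50, 90])   # optimistic/median/pessimistic

# Three hypothetical control accounts: (alpha, beta, min cost, max cost)
accounts = [(2, 5, 100, 180), (3, 3, 250, 400), (4, 2, 50, 120)]
print(project_cost_distribution(accounts))
```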
Abstract: In this work, the propagation of uncertainty during the calibration process of TRANUS, an integrated land use and transport model (ILUTM), has been investigated. It has also been examined, through a sensitivity analysis, which input parameters affect the variation of the outputs the most. Moreover, a probabilistic verification methodology for the calibration process, which equates the observed and calculated production, has been proposed. The model chosen as an application is the model of the city of Grenoble, France. The Monte Carlo method was employed for the sensitivity analysis and uncertainty propagation, and a statistical hypothesis test was used for verification. The parameters of the induced demand function in TRANUS were treated as uncertain in the present case. It was found that, if TRANUS converges during calibration, then with high probability the calibration process is verified. Moreover, a weak correlation was found between the inputs and the outputs of the calibration process. The total effect of the inputs on the outputs was investigated, and the output variation was found to be dictated by only a few input parameters.
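The TRANUS induced demand function is not reproduced here; the sketch below illustrates generic Monte Carlo uncertainty propagation with a correlation-based sensitivity ranking on a hypothetical stand-in model.

```python
import numpy as np

def mc_sensitivity(model, bounds, n=10_000, seed=0):
    """Monte Carlo uncertainty propagation: sample inputs uniformly within
    bounds, evaluate the model, and rank inputs by their correlation with
    the output (a simple stand-in for the paper's sensitivity analysis)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n, len(bounds)))
    y = np.apply_along_axis(model, 1, X)
    corr = [np.corrcoef(X[:, j], y)[0, 1] for j in range(len(bounds))]
    return y.mean(), y.std(), corr

# Hypothetical stand-in model (not TRANUS's induced demand function).
demand = lambda p: p[0] * np.exp(-p[1] * 0.5) + 0.1 * p[2]
print(mc_sensitivity(demand, bounds=[(0.5, 1.5), (0.1, 2.0), (0.0, 1.0)]))
```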
Abstract: This paper presents an online method that learns the
corresponding points of an object from un-annotated grayscale images
containing instances of the object. In the first image being
processed, an ensemble of node points is automatically selected
which is matched in the subsequent images. A Bayesian posterior
distribution for the locations of the nodes in the images is formed.
The likelihood is formed from Gabor responses and the prior assumes
the mean shape of the node ensemble to be similar in a translation
and scale free space. An association model is applied for separating
the object nodes and background nodes. The posterior distribution is sampled with a sequential Monte Carlo method. The matched object
nodes are inferred to be the corresponding points of the object
instances. The results show that our system matches the object nodes
as accurately as other methods that train the model with annotated
training images.
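A minimal sketch of sequential Monte Carlo sampling in the spirit of the abstract, written as a generic bootstrap particle filter; the Gabor-response likelihood is replaced by an assumed Gaussian likelihood on a 1-D toy track.

```python
import numpy as np

def particle_filter(observations, n_particles=500, step_std=1.0,
                    obs_std=2.0, seed=0):
    """Generic sequential Monte Carlo (bootstrap particle filter) sketch:
    propagate particles with a random-walk prior, weight them by a
    Gaussian likelihood (a stand-in for the paper's Gabor-response
    likelihood), and resample.  Returns the posterior mean track."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, n_particles)
    track = []
    for z in observations:
        particles += rng.normal(0, step_std, n_particles)   # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2) # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)     # resample
        particles = particles[idx]
        track.append(particles.mean())
    return np.array(track)

obs = np.cumsum(np.random.default_rng(3).normal(0, 1, 50))  # toy 1-D track
print(particle_filter(obs)[:5].round(2))
```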
Abstract: Lattice Monte Carlo methods are an excellent
choice for the simulation of non-linear thermal diffusion
problems. In this paper, and for the first time, Lattice Monte
Carlo analysis is performed on thermal diffusion combined
with convective heat transfer. Laminar flow of water modeled
as an incompressible fluid inside a copper pipe with a constant
surface temperature is considered. For the simulation of
thermal conduction, the temperature dependence of the
thermal conductivity of the water is accounted for. Using the
novel Lattice Monte Carlo approach, temperature distributions
and energy fluxes are obtained.
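A hedged sketch of the lattice Monte Carlo idea for the simpler steady-state, constant-conductivity case: random walkers released from each lattice node sample the boundary temperatures. The paper's transient case with temperature-dependent conductivity and convection would bias the jump probabilities instead of keeping them equal.

```python
import numpy as np

def lattice_mc_temperature(nx=15, ny=15, walkers_per_node=200, seed=0):
    """Steady-state conduction on a square lattice by random walks: from
    each interior node, release walkers with equal jump probabilities and
    average the fixed temperature of the wall they first reach (hot top
    wall at T=1, other walls at T=0)."""
    rng = np.random.default_rng(seed)
    T = np.zeros((ny, nx))
    T[0, :] = 1.0                                # hot top wall
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            s = 0.0
            for _ in range(walkers_per_node):
                y, x = i, j
                while 0 < y < ny - 1 and 0 < x < nx - 1:
                    dy, dx = moves[rng.integers(4)]
                    y += dy; x += dx
                s += T[y, x]                     # wall temperature hit first
            T[i, j] = s / walkers_per_node
    return T

print(lattice_mc_temperature()[7, 7].round(3))   # centre-node temperature
```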
Abstract: A code has been developed in Mathematica using the Direct Simulation Monte Carlo (DSMC) technique. The code was tested on 2-D air flow around a circular cylinder. The same geometry and flow properties were used in FLUENT 6.2 for comparison. The results obtained from the Mathematica simulation showed close agreement with the FLUENT calculations, providing insight into the particle nature of fluid flows.
Abstract: To increase the precision and reliability of automatic control systems, we have to take into account the random factors affecting the control system. Accordingly, an operational matrix technique is used for the statistical analysis of a first-order plus time delay system with a uniformly distributed random parameter. Examples with deterministic and stochastic disturbances are considered to demonstrate the validity of the method. A comparison with the Monte Carlo method is made to show the computational effectiveness of the proposed method.
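For comparison, a minimal sketch of the Monte Carlo reference method for a first-order plus time delay system with a uniformly distributed parameter; here the time constant is taken as Uniform(0.8, 1.2) purely for illustration.

```python
import numpy as np

def foptd_step(t, K, tau, L):
    """Unit step response of G(s) = K * exp(-L*s) / (tau*s + 1)."""
    return np.where(t >= L, K * (1.0 - np.exp(-(t - L) / tau)), 0.0)

def mc_foptd(t, n=20_000, seed=0):
    """Monte Carlo statistics of the step response when the time constant
    tau is Uniform(0.8, 1.2); gain K and delay L are fixed.  All values
    are illustrative."""
    rng = np.random.default_rng(seed)
    tau = rng.uniform(0.8, 1.2, n)
    y = foptd_step(t[None, :], 1.0, tau[:, None], 0.5)   # (n, len(t)) responses
    return y.mean(axis=0), y.var(axis=0)                 # mean and variance vs t

t = np.linspace(0, 5, 101)
mean_y, var_y = mc_foptd(t)
print(mean_y[-1].round(4), var_y.max().round(6))
```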
Abstract: The purpose of this work is to measure the system presampling modulation transfer function (MTF) of a variable resolution x-ray (VRX) CT scanner. We used the parameters of an actual VRX CT scanner to simulate and study the effect of different focal spot sizes on the system presampling MTF with the Monte Carlo method (GATE simulation software). A focal spot size of 0.6 mm limited the spatial resolution of the system to 5.5 cy/mm at incident angles below 17º for cell#1. With a focal spot size of 0.3 mm, the spatial resolution increased to 11 cy/mm, and the limiting effect of the focal spot size appeared at incident angles below 9º. The 0.3 mm focal spot size could improve the spatial resolution to some extent, but because of magnification non-uniformity there is a 10 cy/mm difference between the spatial resolutions of cell#1 and cell#256. A focal spot size of 0.1 mm acted as an ideal point source for this system: the spatial resolution increased to more than 35 cy/mm and was a function of the incident angle at all incident angles. Moreover, the 0.1 mm focal spot size minimized the effect of magnification non-uniformity.
Abstract: Electronics products that achieve high levels of integrated communications, computing, and entertainment multimedia features in small, stylish, and robust new form factors are winning in the marketplace. Because of the high costs that the industry may incur, and because a high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; yet today's customers demand miniaturization, low costs, high performance, and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed in order to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used which not only predict manufacturing costs but also provide vital information to ease the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors must be taken into consideration: boards, placement, components, the materials from which the components are made, and processes. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that rely on repeated random sampling to compute results. This method is utilized to simulate the placement and assembly processes within a production line.
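A minimal sketch of the placement-yield Monte Carlo idea: sample placement offsets and rotation errors from Gaussians representing machine accuracy and count placements that fall within pad tolerances. All tolerances and sigmas are hypothetical, not measured machine data.

```python
import numpy as np

def placement_yield(n_placements=100_000, sigma_xy=0.02, sigma_theta=0.2,
                    tol_xy=0.05, tol_theta=0.5, seed=0):
    """Monte Carlo sketch of assembly placement yield: sample x/y offsets
    (mm) and rotation error (degrees) from Gaussians representing machine
    accuracy, and count placements within the pad tolerances.  All numbers
    are hypothetical."""
    rng = np.random.default_rng(seed)
    dx = rng.normal(0, sigma_xy, n_placements)
    dy = rng.normal(0, sigma_xy, n_placements)
    dtheta = rng.normal(0, sigma_theta, n_placements)
    ok = (np.hypot(dx, dy) <= tol_xy) & (np.abs(dtheta) <= tol_theta)
    return ok.mean()                      # fraction of good placements

print(f"estimated placement yield: {placement_yield():.4f}")
```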
Abstract: The purpose of this study is to introduce a new interface program to calculate dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. The interface program was developed under MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and results display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimal geometries from Computed Tomography (CT) images for proton beam dose calculations. This technique yields a number of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The method was validated in two ways: first, by comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, by comparison with data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while the region of interest is selected manually, and it plots the dose distribution of the proton beam superimposed onto the CT images.
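A minimal sketch of the quadtree decomposition step, assuming the split criterion is the intensity range within a block (the paper's exact homogeneity criterion is not stated in the abstract):

```python
import numpy as np

def quadtree(img, x=0, y=0, size=None, thresh=10, min_size=2, leaves=None):
    """Recursive quadtree decomposition: split a square block into four
    quadrants until the intensity range within the block (a proxy for
    tissue homogeneity in a CT slice) falls below `thresh` or the block
    reaches `min_size`.  Returns the leaf cells as (x, y, size)."""
    if leaves is None:
        size = img.shape[0]       # assumes a square, power-of-two image
        leaves = []
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= thresh:
        leaves.append((x, y, size))          # homogeneous: keep one cell
    else:
        h = size // 2                        # heterogeneous: recurse
        for qx, qy in [(x, y), (x + h, y), (x, y + h), (x + h, y + h)]:
            quadtree(img, qx, qy, h, thresh, min_size, leaves)
    return leaves

img = np.zeros((64, 64), dtype=int)
img[20:40, 20:40] = 100                      # a dense "tissue" region
print(len(quadtree(img)), "cells instead of", 64 * 64)
```

Large homogeneous regions collapse into single cells while cell density stays high near the material boundary, which is exactly the resolution trade-off the abstract describes.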
Abstract: Wind energy generation has become very important in modern power systems. The electrical energy produced by wind turbines at a site depends on several factors, such as the wind speed and the site profile, and especially on the turbine's cut-in speed, rated speed, and cut-out speed. Moreover, many different types of turbines are available on the market. Therefore, selecting a turbine whose capacity answers the needs of electricity consumers with high efficiency is important and necessary. In this context, calculating the amount of wind power is very important for optimizing overall network and system operation and for determining wind power parameters. In this article, the output of a wind power plant connected to the national grid at the Manjil wind site is calculated, and the best turbine type and a power delivery profile appropriate to the network are selected, using the Monte Carlo method. Minute-by-minute wind speed data recorded over one year at the Manjil site are used. The necessary simulations, based on repeated random number simulation, were carried out using MATLAB and Excel.
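A hedged sketch of the Monte Carlo calculation: sample wind speeds from a Weibull distribution (illustrative parameters, not the Manjil measurements) and average the turbine power curve, with cut-in, rated, and cut-out speeds, over the samples.

```python
import numpy as np

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    """Piecewise turbine power curve (MW): zero below cut-in and above
    cut-out, cubic rise between cut-in and rated, flat at rated power."""
    p = np.zeros_like(v)
    rise = (v >= v_in) & (v < v_rated)
    p[rise] = p_rated * (v[rise]**3 - v_in**3) / (v_rated**3 - v_in**3)
    p[(v >= v_rated) & (v <= v_out)] = p_rated
    return p

def mc_wind_energy(k=2.0, c=8.5, n=1_000_000, seed=0):
    """Monte Carlo estimate of mean output: sample wind speeds from a
    Weibull(k, c) distribution (illustrative parameters, not Manjil
    measurements) and average the power curve over the samples."""
    rng = np.random.default_rng(seed)
    v = c * rng.weibull(k, n)              # Weibull wind-speed samples
    p = turbine_power(v)
    return p.mean(), p.mean() * 8760       # mean MW, MWh per year

print(mc_wind_energy())
```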
Abstract: Data stream analysis is the process of computing
various summaries and derived values from large amounts of data
which are continuously generated at a rapid rate. The nature of a
stream does not allow a revisit on each data element. Furthermore,
data processing must be fast to produce timely analysis results. These
requirements impose constraints on the design of the algorithms to
balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented: the data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is statistically transformed to a smaller size, and its characteristics are computationally approximated. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method
is analyzed by a series of experiments. We apply our algorithm on
clustering and classification tasks to evaluate the utility of our
approach.
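The EMR sampling method itself is not specified in the abstract; the sketch below illustrates the hybrid idea with classic reservoir sampling as a stand-in for the horizontal reduction, followed by a Monte Carlo estimate of a stream statistic from the sample.

```python
import random

def reservoir_sample(stream, k=1000, seed=0):
    """Classic reservoir sampling (Algorithm R): keep a uniform random
    sample of size k from a stream in a single pass, without revisiting
    elements (a stand-in for the paper's EMR sampling)."""
    rng = random.Random(seed)
    sample = []
    for n, item in enumerate(stream):
        if n < k:
            sample.append(item)            # fill the reservoir
        else:
            j = rng.randint(0, n)          # uniform over [0, n]
            if j < k:
                sample[j] = item           # replace with decreasing probability
    return sample

# Monte Carlo approximation step: estimate the stream mean from the sample.
stream = (x * x % 97 for x in range(1_000_000))   # any one-pass iterable
sample = reservoir_sample(stream, k=1000)
print("estimated mean:", sum(sample) / len(sample))
```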