Abstract: Seismic inversion is a long-established technique whose main goal is to estimate and model the physical characteristics of rocks and fluids. It generally combines seismic and well-log data. Seismic inversion can be carried out through different methods; we have conducted and compared post-stack and pre-stack seismic inversion on real data from a field in the Persian Gulf. Pre-stack seismic inversion can transform seismic data into rock-physics parameters such as P-impedance, S-impedance, and density, whereas post-stack seismic inversion can estimate only P-impedance. These parameters can then be used in reservoir identification. Based on the inversion results, a gas reservoir was detected in one of the hydrocarbon fields in the south of Iran (Persian Gulf). Comparing post-stack and pre-stack seismic inversion shows that the pre-stack approach provides more reliable and detailed information for the identification and prediction of hydrocarbon reservoirs.
Abstract: Stochastic modeling concerns the use of probability
to model real-world situations in which uncertainty is present.
Therefore, the purpose of stochastic modeling is to estimate the
probability of outcomes within a forecast, i.e. to be able to predict
what conditions or decisions might happen under different situations.
In the present study, we present a model of a stochastic diffusion
process based on the bi-Weibull distribution function (its trend
is proportional to the bi-Weibull probability density function). In
general, the Weibull distribution has the ability to assume the
characteristics of many different types of distributions. This has
made it very popular among engineers and quality practitioners, who
have considered it the most commonly used distribution for studying
problems such as modeling reliability data, accelerated life testing,
and maintainability modeling and analysis. In this work, we start
by obtaining the probabilistic characteristics of this model: the
explicit expression of the process, its trends, and its distribution,
by transforming the diffusion process into a Wiener process as shown
in Ricciardi's theorem. Then, we develop the statistical inference of
this model using the maximum likelihood methodology. Finally, we
use simulated data to analyse the computational problems associated
with the parameters, an issue of great importance in applications to
real data, using convergence analysis methods. Overall,
the use of a stochastic model reflects only a pragmatic decision on
the part of the modeler. According to the data that is available and
the universe of models known to the modeler, this model represents
the best currently available description of the phenomenon under
consideration.
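As a hedged illustration of the maximum likelihood step described above, the following sketch fits a plain two-parameter Weibull distribution (not the bi-Weibull model of the paper) to simulated data, using the standard fixed-point iteration for the shape parameter; the function name, sample, and parameter values are assumptions made for this example only:

```python
import numpy as np

def weibull_mle(x, iters=100):
    """Fit a two-parameter Weibull(shape k, scale lam) by maximum likelihood.

    Uses the standard fixed-point iteration for the shape k (with mild
    damping for stability), then the closed-form scale lam given k.
    """
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    k = 1.0  # initial guess for the shape parameter
    for _ in range(iters):
        xk = x ** k
        k_new = 1.0 / (np.sum(xk * lx) / np.sum(xk) - lx.mean())
        k = 0.5 * k + 0.5 * k_new  # damped update
    lam = np.mean(x ** k) ** (1.0 / k)
    return k, lam

# Simulated data: true shape 2.0, true scale 3.0.
rng = np.random.default_rng(0)
sample = rng.weibull(2.0, size=5000) * 3.0
k_hat, lam_hat = weibull_mle(sample)
```

With 5000 observations, the estimates land close to the true shape 2.0 and scale 3.0, which is the kind of convergence behaviour the abstract's simulation study examines.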
Abstract: Forecasting electricity load plays a crucial role in
decision making and planning for economic purposes. Moreover, in
the light of the recent privatization and deregulation of the power
industry, forecasting future electricity load has turned out to be a
very challenging problem. Empirical data about electricity load
highlights a clear seasonal behavior (higher load during the winter
season), which is partly due to climatic effects. We also emphasize
the presence of load periodicity on a weekly basis (electricity load is
usually lower on weekends and holidays) and on a daily basis (electricity
load is clearly influenced by the hour). Finally, a long-term trend may
depend on the general economic situation (for example, industrial
production affects electricity load). All these features must be
captured by the model.
The purpose of this paper is then to build an hourly electricity load
model. The deterministic component of the model requires non-linear
regression and Fourier series, while we will investigate the stochastic
component through econometric tools.
The calibration of the model's parameters will be performed using
data from the Italian market over a six-year period (2007-2012).
Then, we will perform a Monte Carlo simulation in order to compare
the simulated data with the real data (both in-sample and
out-of-sample inspection). The reliability of the model will be
assessed through standard tests, which highlight a good fit of the
simulated values.
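The deterministic component described above (a trend plus Fourier terms for the daily and weekly cycles) can be sketched as an ordinary least-squares fit. The synthetic hourly series below is an assumption standing in for the Italian market data, and the harmonic choices are illustrative:

```python
import numpy as np

# Hourly time index over four synthetic weeks.
t = np.arange(24 * 7 * 4, dtype=float)

# Synthetic "load": linear trend + daily cycle + weekly cycle + noise.
rng = np.random.default_rng(1)
load = (100.0 + 0.01 * t
        + 10.0 * np.sin(2 * np.pi * t / 24)
        + 5.0 * np.cos(2 * np.pi * t / (24 * 7))
        + rng.normal(0, 1, t.size))

def fourier_design(t, periods, n_harm=1):
    """Design matrix: intercept, trend, and sin/cos pairs per period."""
    cols = [np.ones_like(t), t]
    for p in periods:
        for h in range(1, n_harm + 1):
            cols.append(np.sin(2 * np.pi * h * t / p))
            cols.append(np.cos(2 * np.pi * h * t / p))
    return np.column_stack(cols)

X = fourier_design(t, periods=[24, 24 * 7])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
fitted = X @ coef
resid_std = np.std(load - fitted)
```

The residual standard deviation shrinks to roughly the noise level once the daily and weekly harmonics are in the design matrix, which is the sense in which the deterministic part "captures" the periodic features; the remaining residual is what the abstract's econometric (stochastic) component would then model.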
Abstract: The aim of this work is to detect geometrically shaped
objects in an image. In this paper, the object is considered to be
circular. Identification requires finding three characteristics: the
number, size, and location of the objects. To achieve this goal, the
paper presents an algorithm that combines several statistical
approaches with image analysis techniques. The algorithm has been
evaluated on simulated data, where it yields good results, and has
then been applied to real data.
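The abstract does not specify which statistical approaches are combined, but one common building block for recovering a circle's size and location from candidate edge points is the algebraic (Kasa) least-squares circle fit. The following hedged sketch, on synthetic noisy points, illustrates that step only:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c as a linear system, then
    recovers the centre (a, b) and radius r = sqrt(c + a^2 + b^2).
    """
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b, c = sol
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Noisy points on a circle with centre (3, -2) and radius 5.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200)
xs = 3 + 5 * np.cos(theta) + rng.normal(0, 0.05, 200)
ys = -2 + 5 * np.sin(theta) + rng.normal(0, 0.05, 200)
cx, cy, r = fit_circle(xs, ys)
```

The linearization makes the fit a one-shot least-squares solve, which is why it is a popular starting point before any iterative geometric refinement.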
Abstract: Knitted fabric suffers deformation in its dimensions due
to transverse stretching and longitudinal tension during processing
on rectilinear knitting machines, so a dry relaxation shrinkage
procedure with thermal prefixing is performed to obtain stable
conditions in the knit. This paper presents a prediction of the dry
relaxation shrinkage of Bordeaux fiber using feed-forward neural
network and linear regression models. Six operational shrinkage
alternatives were predicted. A comparison of the results found that
the neural network models explain more of the variability and give
better predictions. The presence of different relaxation periods is
included. The models were obtained with the neural network toolbox
of Matlab and with Minitab software, using real data from a knitting
company in southern Guanajuato. The results allow the dry relaxation
shrinkage of each operational alternative to be predicted.
Abstract: Research on juice flavor forecasting has become more
important in China. Due to the country's fast economic growth, many
different kinds of juice have been introduced to the market. If a
beverage company understands its customers' preferences well, its
juice can be made more attractive. Thus, this study introduces the
basic theory and computing process of grape juice flavor forecasting
based on support vector regression (SVR). Applying SVR, BPN, and LR
to forecast the flavor of grape juice on real data shows that SVR is
more suitable and effective at prediction.
Abstract: In this paper, we consider the vehicle routing problem
with a mixed fleet of conventional and heterogeneous electric vehicles
and time-dependent charging costs, denoted VRP-HFCC, in which
a set of geographically scattered customers have to be served by a
mixed fleet of vehicles composed of a heterogeneous fleet of Electric
Vehicles (EVs), having different battery capacities and operating
costs, and Conventional Vehicles (CVs). We include the possibility
of charging EVs in the available charging stations during the routes
in order to serve all customers. Each charging station offers charging
service with a known technology of chargers and time dependent
charging costs. Charging stations are also subject to operating time
window constraints. EVs are not necessarily compatible with all
available charging technologies, and partial charging is allowed.
Intermittent charging at the depot is also allowed provided that
constraints related to the electricity grid are satisfied.
The objective is to minimize the number of employed vehicles and
then minimize the total travel and charging costs.
In this study, we present a Mixed Integer Programming Model and
develop a Charging Routing Heuristic and a Local Search Heuristic
based on the Inject-Eject routine with different insertion methods. All
heuristics are tested on real data instances.
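The insertion step at the heart of construction heuristics like those mentioned above can be illustrated in a deliberately simplified form that ignores the charging stations, fleet mix, and time windows of the actual VRP-HFCC; the depot and customer coordinates below are invented for the example:

```python
import math

def cheapest_insertion(depot, customers):
    """Build one route depot -> ... -> depot by cheapest insertion.

    At each step, insert the customer whose insertion between two
    consecutive route points increases the route length the least.
    """
    def d(p, q):
        return math.dist(p, q)

    route = [depot, depot]
    remaining = list(customers)
    while remaining:
        best = None  # (cost increase, customer, insertion position)
        for c in remaining:
            for i in range(len(route) - 1):
                delta = (d(route[i], c) + d(c, route[i + 1])
                         - d(route[i], route[i + 1]))
                if best is None or delta < best[0]:
                    best = (delta, c, i + 1)
        _, c, pos = best
        route.insert(pos, c)
        remaining.remove(c)
    return route

route = cheapest_insertion((0, 0), [(0, 4), (3, 0), (3, 4)])
cost = sum(math.dist(route[i], route[i + 1])
           for i in range(len(route) - 1))
```

In a full heuristic for the problem above, the insertion cost would additionally account for battery feasibility and time-dependent charging prices, and an Inject-Eject style local search would then move customers between routes.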
Abstract: Safety is one of the most important considerations
when buying a new car. While active safety aims at avoiding
accidents, passive safety systems such as airbags and seat belts
protect the occupant in case of an accident. In addition to legal
regulations, organizations like Euro NCAP provide consumers with
an independent assessment of the safety performance of cars and
drive the development of safety systems in the automobile industry.
Those ratings are mainly based on injury assessment reference values
derived from physical parameters measured in dummies during a car
crash test.
The components and sub-systems of a safety system are designed
to achieve the required restraint performance. Sled tests and other
types of tests are then carried out by car makers and their suppliers
to confirm the protection level of the safety system. A Knowledge
Discovery in Databases (KDD) process is proposed in order to
minimize the number of tests. The KDD process is based on the
data emerging from sled tests according to Euro NCAP specifications.
About 30 parameters of the passive safety systems from different data
sources (crash data, dummy protocol) are first analysed together with
experts' opinions. A procedure is proposed to manage missing data
and validated on real data sets. Finally, a procedure is developed to
estimate a set of rough initial parameters of the passive system
before testing, aiming to reduce the number of tests.
Abstract: Estimation of a proportion has many applications in
economics and social studies. A common application is the estimation
of the low income proportion, which gives the proportion of people
classified as poor within a population. In this paper, we present this
poverty indicator and propose to use the logistic regression estimator
for the problem of estimating the low income proportion. Various
sampling designs are presented. Assuming a real data set obtained
from the European Survey on Income and Living Conditions, Monte
Carlo simulation studies are carried out to analyze the empirical
performance of the logistic regression estimator under the various
sampling designs considered in this paper. Results derived from
these studies indicate that the logistic regression estimator can be
more accurate than the customary estimator under all the sampling
designs considered. The stratified sampling design can also provide
more accurate results.
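A minimal sketch of a model-assisted, logistic-regression-based estimate of a proportion might look as follows. This is not the authors' exact estimator or the EU-SILC data: the synthetic population, the Newton-Raphson fitter, and the assumption that the auxiliary variable is known for every population unit are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic population: auxiliary variable x, binary poverty indicator y.
N = 10000
x = rng.normal(0, 1, N)
p_true = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))
y = rng.binomial(1, p_true)

# Simple random sample of n units.
n = 500
s = rng.choice(N, size=n, replace=False)

def fit_logistic(x, y, iters=50):
    """Logistic regression (intercept + slope) via Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        grad = X.T @ (y - p)                # score vector
        H = X.T @ (X * W[:, None])          # observed information
        beta = beta + np.linalg.solve(H, grad)
    return beta

beta = fit_logistic(x[s], y[s])
# Model-assisted estimate: average predicted probability over the
# whole population (x assumed known for every unit).
p_hat = np.mean(1 / (1 + np.exp(-(beta[0] + beta[1] * x))))
p_sample = y[s].mean()
```

When the auxiliary variable is strongly related to the poverty indicator, averaging the fitted probabilities over the population typically has lower variance than the plain sample proportion, which is the intuition behind preferring such an estimator to the customary one.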
Abstract: Accurate Short Term Load Forecasting (STLF) is essential for a variety of decision-making processes. However, forecasting accuracy can drop due to the presence of uncertainty in the operation of energy systems or unexpected behavior of exogenous variables. The Interval Type 2 Fuzzy Logic System (IT2 FLS), with its additional degrees of freedom, provides an excellent tool for handling such uncertainties and improves prediction accuracy. The training data used in this study cover the period from January 1, 2012 to February 1, 2012 for the winter season and from July 1, 2012 to August 1, 2012 for the summer season. The actual load forecasting period runs from January 22 to 28, 2012 for the winter model and from July 22 to 28, 2012 for the summer model. The real data for the Iraqi power system belong to the Ministry of Electricity.
Abstract: The problem of estimating a proportion has important
applications in the field of economics, and in general, in many areas
such as social sciences. A common application in economics is
the estimation of the headcount index. In this paper, we define the
general headcount index as a proportion. Furthermore, we introduce
a new quantitative method for estimating the headcount index. In
particular, we suggest using the logistic regression estimator for the
problem of estimating the headcount index. Assuming a real data set,
results derived from Monte Carlo simulation studies indicate that the
logistic regression estimator can be more accurate than the traditional
estimator of the headcount index.
Abstract: Liquefaction is a phenomenon in which the strength
and stiffness of a soil are reduced by earthquake shaking or other rapid
cyclic loading. Liquefaction and related phenomena have been
responsible for huge amounts of damage in historical earthquakes
around the world.
Modeling of soil behavior is the main step in the soil liquefaction
prediction process. Several constitutive models for sand have been
presented to date; nevertheless, only some of them can capture this
mechanism. One of the most useful models in this regard is the
UBCSAND model. In this research, the capability of this model is
examined using the PLAXIS software. The real data of the 1987
Superstition Hills earthquake in the Imperial Valley were used. The
simulation results show a trend resembling that of the UBC3D-PLM
model.
Abstract: The paper proposes an approach to ranking a set of potential countries in which to invest, taking into account the investor's point of view on the importance of different economic indicators. To this end, a ranking algorithm that contributes to rational decision making is proposed. The described algorithm is based on combinatorial optimization modeling and the repeated solution of multi-criteria tasks. The final result is a list of countries ranked with respect to the investor's preferences about the importance of economic indicators for investment attractiveness. Different scenarios are simulated, conforming to different investors' preferences. A numerical example with a real dataset of indicators is solved. The numerical testing shows the applicability of the described algorithm. The proposed approach can be used with any set of indicators as ranking criteria, reflecting different investors' points of view.
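The paper's combinatorial optimization algorithm is not detailed in the abstract, but a simple weighted-sum ranking over normalized indicators conveys the basic idea of scoring countries against an investor's preferences. The countries, indicator values, and weights below are invented purely for illustration:

```python
import numpy as np

# Hypothetical indicator matrix: rows = countries, columns = indicators
# (e.g. GDP growth %, inflation %, stability score) -- invented values.
countries = ["A", "B", "C", "D"]
X = np.array([
    [3.1, 2.0, 70.0],
    [1.2, 8.5, 55.0],
    [4.0, 3.5, 60.0],
    [2.5, 1.0, 80.0],
])
benefit = np.array([True, False, True])  # inflation is a cost criterion
weights = np.array([0.5, 0.2, 0.3])      # one investor's preferences

# Min-max normalise each column; flip cost criteria so that larger
# normalised values are always better.
lo, hi = X.min(axis=0), X.max(axis=0)
norm = (X - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights
ranking = [countries[i] for i in np.argsort(-scores)]
```

Simulating different investor scenarios then amounts to re-running the scoring with different weight vectors, mirroring the abstract's repeated multi-criteria tasks.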
Abstract: Flash floods are natural disasters that can cause
casualties and the destruction of infrastructure. The problem is
that flash floods, particularly in arid and semi-arid zones, develop
in a very short time. It is therefore important to forecast flash
floods ahead of their occurrence, with a lead time of up to 48 hours,
to give early warning alerts that avoid or minimize disasters. The
flash flood that took place over Wadi Watier, Sinai Peninsula, on
October 24th, 2008, has been simulated, investigated, and analyzed
using a state-of-the-art regional weather model. The Weather Research
and Forecasting (WRF) model, a reliable short-term forecasting tool
for precipitation events, has been utilized over the study area. The
model results have been calibrated against the real rainfall
measurements, for the same date and time, recorded at the Sorah
gauging station. The WRF model forecasted a total rainfall of 11.6 mm,
while the real measured value was 10.8 mm. The calibration shows
significant consistency between the WRF model and the real
measurements.
Abstract: Classification is an important topic in machine learning
and bioinformatics. Many datasets have been introduced for
classification tasks. A dataset contains multiple features, and the quality of features influences the classification accuracy of the dataset.
The classification power of each feature differs. In this study, we
suggest the Classification Influence Index (CII) as an indicator of the
classification power of each feature. CII enables evaluation of the
features in a dataset and improves classification accuracy through
transformation of the dataset. By conducting experiments using CII
and the k-nearest neighbor classifier to analyze real datasets, we confirmed that the proposed index provided meaningful improvement
of the classification accuracy.
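The abstract does not give the definition of CII, so as a hedged stand-in the sketch below computes one simple per-feature classification-power indicator: the leave-one-out 1-NN accuracy obtained when a single feature is used alone. An informative feature scores high, a pure-noise feature scores near chance:

```python
import numpy as np

def loo_1nn_accuracy(feature, labels):
    """Leave-one-out 1-NN accuracy using a single feature."""
    n = len(feature)
    correct = 0
    for i in range(n):
        dist = np.abs(feature - feature[i])
        dist[i] = np.inf  # exclude the point itself
        correct += labels[np.argmin(dist)] == labels[i]
    return correct / n

rng = np.random.default_rng(4)
n = 200
labels = rng.integers(0, 2, n)
informative = labels + rng.normal(0, 0.3, n)  # separates the classes
noise = rng.normal(0, 1, n)                   # carries no class signal

idx_informative = loo_1nn_accuracy(informative, labels)
idx_noise = loo_1nn_accuracy(noise, labels)
```

Ranking features by such an index, and reweighting or dropping low-scoring ones before running the k-NN classifier, is one way an index of this kind can improve the accuracy on the transformed dataset.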
Abstract: In this paper we present a novel error model for
packet loss and subsequent error description. The proposed model
simulates the error performance of a wireless communication link. The
model is designed as two independent Markov chains, where the first
one is used for packet generation and the second one generates
correctly and incorrectly transmitted bits for received packets from
the first chain. Statistical analyses of real communication on the
wireless link are used to determine the model's parameters. Using
the obtained parameters, we implemented the generator and collected
the generated traffic. The results produced by the proposed model are
compared with the real data collection.
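The two-chain structure described above can be sketched as follows. The transition probabilities, packet count, and packet size are invented for the example, not the parameters estimated from the real wireless link:

```python
import random

def run_chain(p_stay_good, p_stay_bad, steps, rng):
    """Two-state Markov chain; returns a list that is True in the
    'good' state and False in the 'bad' state."""
    state = True  # start in the good state
    out = []
    for _ in range(steps):
        out.append(state)
        if state:
            state = rng.random() < p_stay_good   # stay good
        else:
            state = rng.random() >= p_stay_bad   # leave the bad state
    return out

rng = random.Random(5)

# Chain 1: packet level (good = packet delivered, bad = packet lost).
packets = run_chain(0.95, 0.50, 1000, rng)

# Chain 2: bit level inside delivered packets (good = bit correct).
bits_per_packet = 64
bit_chain = run_chain(0.999, 0.7, sum(packets) * bits_per_packet, rng)

packet_loss = 1 - sum(packets) / len(packets)
bit_error_rate = 1 - sum(bit_chain) / len(bit_chain)
```

Because each chain stays in its bad state with some persistence, both packet losses and bit errors come in bursts, which is the property that distinguishes this class of models from a memoryless (Bernoulli) error generator.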
Abstract: Ratio and regression type estimators have been used by previous authors to estimate a population mean for the principal variable from samples in which both auxiliary x and principal y variable data are available. However, missing data are a common problem in statistical analyses with real data. Ratio and regression type estimators have also been used for imputing values of missing y data. In this paper, six new ratio and regression type estimators are proposed for imputing values for any missing y data and estimating a population mean for y from samples with missing x and/or y data. A simulation study has been conducted to compare the six ratio and regression type estimators with a previous estimator of Rueda. Two population sizes N = 1,000 and 5,000 have been considered with sample sizes of 10% and 30% and with correlation coefficients between population variables X and Y of 0.5 and 0.8. In the simulations, 10 and 40 percent of sample y values and 10 and 40 percent of sample x values were randomly designated as missing. The new ratio and regression type estimators give similar mean absolute percentage errors that are smaller than the Rueda estimator for all cases. The new estimators give a large reduction in errors for the case of 40% missing y values and sampling fraction of 30%.
Abstract: In an online context, the design and implementation of
an effective remote laboratories environment is highly challenging on
account of hardware and software needs. This paper presents the
remote laboratory software framework modified from the iLab Shared
Architecture (ISA). The ISA is a framework which enables students to
remotely access and control experimental hardware using internet
infrastructure. The need for remote laboratories came after
experiencing the problems imposed by traditional laboratories: the
high cost of laboratory equipment, scarcity of space, and scarcity of
technical personnel, which, together with restricted university
budgets, create a significant bottleneck in building the required
laboratory experiments. The solution to these problems is to build
web-accessible laboratories. Remote laboratories allow students and
educators to interact with real laboratory equipment located
anywhere in the world at any time. Recently, many universities and
other educational institutions, especially in third-world countries,
rely on simulations because they cannot afford the experimental
equipment their students require. Remote laboratories enable
users to get real data from real-time, hands-on experiments. To
implement many remote laboratories, the system architecture should
be flexible, understandable and easy to implement, so that different
laboratories with different hardware can be deployed easily. The
modifications were made to enable developers to add more equipment
to the ISA framework and to attract new developers to build more
online laboratories.
Abstract: Camera calibration plays an important role in the analysis of sports video. In soccer video, in most cases, the cross-points that can be used for calibration near the center of the soccer field are not sufficient, so this paper introduces a new automatic camera calibration algorithm that focuses on solving this problem by using the properties of the images of the center circle, the halfway line, and a touch line. After a theoretical analysis, a practicable automatic algorithm is proposed. Although very little information is used, the results of experiments with both synthetic and real data show that the algorithm is applicable.
Abstract: Understanding customer behavior in a grocery store has
been a long-standing issue in the retailing industry. The advent of
RFID has made it easier to collect movement data on an individual
shopper's behavior. Most of the previous studies used the traditional
statistical clustering technique to find the major characteristics of
customer behavior, especially the shopping path. However, due to
various spatial constraints in the store, standard clustering methods
are not directly applicable: movement data such as shopping paths must
be adjusted before the analysis, which is time-consuming and causes
data distortion. To alleviate this
problem, we propose a new approach to spatial pattern clustering
based on the longest common subsequence. Experimental results using
real data obtained from a grocery store confirm the good performance
of the proposed method in finding the hot spots, dead spots, and major
path patterns of customer movements.
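The longest-common-subsequence similarity underlying the proposed clustering can be sketched with the classic dynamic program; the zone labels and the normalization by the longer path are illustrative assumptions, since the abstract does not fix a particular similarity formula:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b,
    via the standard O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(a, b):
    """Normalised similarity in [0, 1] between two shopping paths."""
    if not a and not b:
        return 1.0
    return lcs_length(a, b) / max(len(a), len(b))

# Shopping paths as sequences of visited zone labels (hypothetical).
path1 = ["entrance", "produce", "dairy", "bakery", "checkout"]
path2 = ["entrance", "produce", "meat", "bakery", "checkout"]
sim = lcs_similarity(path1, path2)
```

Unlike point-based distances, the LCS score respects the order of visited zones while tolerating detours, which is why it suits paths constrained by store layout; a pairwise similarity matrix built this way can then feed any standard clustering routine.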