Abstract: Cloud computing is poised to transform the structure of business and learning by supplying real-time applications and providing immediate support for small to medium-sized businesses. The ability to run a hypervisor inside a virtual machine, known as nested virtualization, is an important feature of virtualization. In today's growing field of information technology, many virtualization models are available and convenient to implement, but selecting a single model is difficult. This paper examines the applications of operating-system-based virtualization in cloud computing and seeks a suitable model given the models' different specifications and users' requirements. The most popular models, spanning container-based and hypervisor-based virtualization, were selected and compared against a wide range of user requirements, such as the number of CPUs, memory size, nested virtualization support, live migration, and commercial support, and the most suitable virtualization model was identified.
Abstract: We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing an allometric shape parameter. This was achieved by convoluting a hyperbolic sine function with the intrinsic rate of growth in the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was recast as a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the independence of the error term was confirmed using the runs test. The mean function of top height/Dbh over age under the hyperbolic Gompertz growth model tracked the observed values of top height/Dbh more closely than the source model (the classical Gompertz growth model), and the R2, adjusted R2, MSE, and AIC results confirmed the superior predictive power of the hyperbolic Gompertz growth model over its source model.
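For context, a schematic of the classical Gompertz curve that serves as the source model here (α, β, and k are the usual asymptote, displacement, and rate parameters; the exact sinh-modified HGGM form is given in the paper itself):

```latex
% Classical Gompertz growth curve and its equivalent rate form
H(t) = \alpha \exp\!\left(-\beta e^{-kt}\right),
\qquad
\frac{dH}{dt} = k\,H \ln\frac{\alpha}{H}.
% The HGGM perturbs the intrinsic rate k through a hyperbolic sine
% term governed by a shape parameter \theta (schematic only).
```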
Abstract: The objective of this study is to propose a statistical
modeling method which enables simultaneous term structure
estimation of the risk-free interest rate, hazard and loss given default,
incorporating the characteristics of the bond issuing company such as
credit rating and financial information. A reduced form model is used
for this purpose. Statistical techniques such as spline estimation and
Bayesian information criterion are employed for parameter estimation
and model selection. An empirical analysis is conducted using Japanese bond market data, and its results confirm the usefulness of the proposed method.
Abstract: Hydrologic time series data display periodic structure, and the periodic autoregressive (PAR) process receives considerable attention in the modeling of such series. In this communication, a long-term record of the monthly waste flow of the Lyari River is analyzed using the PAR modeling technique. The model parameters are estimated using the Franses & Paap methodology. The study shows that a periodic autoregressive model of order 2 is the most parsimonious model for assessing periodicity in the waste flow of the river. A careful statistical analysis of the residuals of the PAR(2) model is used to establish goodness of fit. Forecasts from the proposed model confirm its significance and effectiveness.
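As an illustration of the PAR idea (not the Franses & Paap estimator itself), a PAR(2) model can be fitted by running a separate least-squares regression for each month; the synthetic series and all names below are illustrative assumptions:

```python
import numpy as np

def fit_par2(x, s=12):
    """Fit a periodic AR(2): x_t = phi1[m]*x_{t-1} + phi2[m]*x_{t-2} + e_t,
    with a separate coefficient pair for each of the s seasons.
    Illustrative per-season least squares, not the Franses & Paap method."""
    phi = np.zeros((s, 2))
    for m in range(s):
        t = np.arange(2, len(x))
        t = t[t % s == m]          # time points falling in season m
        X = np.column_stack([x[t - 1], x[t - 2]])
        phi[m], *_ = np.linalg.lstsq(X, x[t], rcond=None)
    return phi

# synthetic monthly series with a seasonally varying AR(1) coefficient
rng = np.random.default_rng(0)
true1 = 0.4 + 0.3 * np.sin(2 * np.pi * np.arange(12) / 12)
x = np.zeros(600)
for t in range(2, 600):
    x[t] = true1[t % 12] * x[t - 1] + 0.2 * x[t - 2] + rng.normal()

phi = fit_par2(x)
print(phi.shape)  # (12, 2): one (phi1, phi2) pair per month
```

Residual diagnostics (e.g. a runs test, as in the abstract) would then be applied per season to establish goodness of fit.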
Abstract: Despite the widespread application of finite mixture models in segmentation, finite mixture model selection is still an important issue. In fact, the selection of an adequate number of segments is a key step in deriving latent segment structures, and it is desirable that the criteria used for this purpose be effective. We conduct a simulation study to choose among several information criteria that may support the selection of the correct number of segments. In particular, the study is intended to determine which information criteria are most appropriate for mixture model selection when the data sets contain only categorical segmentation base variables. The analysis is based on generated mixtures of multinomial data. As a result, we establish a relationship between the level of measurement of the segmentation variables and the performance of eleven information criteria. The AIC3 criterion shows the best performance (it indicates the correct number of simulated segments most often) for mixtures of multinomial segmentation base variables.
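The criteria compared in such studies are simple functions of the maximized log-likelihood; a minimal sketch (the numbers in the toy comparison are made up):

```python
import numpy as np

def information_criteria(loglik, k, n):
    """Common criteria for choosing the number of mixture segments.
    loglik: maximized log-likelihood; k: free parameters; n: sample size."""
    return {
        "AIC":  -2 * loglik + 2 * k,
        "AIC3": -2 * loglik + 3 * k,       # AIC with penalty factor 3
        "BIC":  -2 * loglik + k * np.log(n),
    }

# toy comparison: hypothetical fits with 2 vs 3 segments on n = 500 cases
for g, ll, k in [(2, -1520.4, 9), (3, -1511.8, 14)]:
    print(g, information_criteria(ll, k, n=500))
```

The model with the smallest criterion value is selected; AIC3 sits between AIC's weak penalty and BIC's sample-size-dependent one.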
Abstract: The prediction of long-term deformations of concrete and reinforced concrete structures has been a field of extensive research, and several different creep models have been developed so far. Most of these models were developed for constant concrete stresses; in the case of varying stresses, a specific superposition principle or time-integration scheme is therefore necessary. Nowadays, when modeling concrete creep, the engineering focus is rather on applying sophisticated time-integration methods than on choosing the more appropriate creep model. For this reason, this paper presents a method to quantify the uncertainties in creep prediction originating from the selection of the creep model or of the time-integration method. By adapting variance-based global sensitivity analysis, a methodology is developed to quantify the influence of creep model selection and of the choice of time-integration method. Applying the developed method, general recommendations on how to model creep behavior under varying stresses are given.
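Variance-based global sensitivity analysis of the kind adapted here can be sketched with a Monte Carlo pick-freeze estimator of first-order Sobol indices; the test function and sampling scheme below are illustrative assumptions, not the paper's creep models:

```python
import numpy as np

def sobol_first_order(f, d, n=20000, rng=None):
    """Monte Carlo first-order Sobol indices via the pick-freeze scheme:
    S_i = V(E[f | x_i]) / V(f). Inputs are i.i.d. standard normal here
    for simplicity; f maps an (n, d) sample array to n outputs."""
    rng = rng or np.random.default_rng(0)
    A, B = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]        # freeze column i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# toy model y = 2*x1 + x2: the true variance shares are 4/5 and 1/5
S = sobol_first_order(lambda X: 2 * X[:, 0] + X[:, 1], d=2)
print(S.round(2))
```

In the paper's setting the "inputs" would instead be discrete choices (creep model, time-integration method), but the variance-decomposition logic is the same.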
Abstract: The Random Coefficient Dynamic Regression (RCDR) model is developed from the Random Coefficient Autoregressive (RCA) model and the Autoregressive (AR) model; it extends the RCA model by adding exogenous variables. In this paper, the Maximum Likelihood (ML) method is used to estimate the parameters of the RCDR(1,1) model, and simulation results are compared using the AIC and BIC criteria to assess the model's performance. The estimates for stationary and weakly stationary data are good when the exogenous variables are weakly stationary; however, model selection indicated that the variables behave as nonstationary data when based on stationary exogenous variables.
Abstract: This paper presents a method of model selection and
identification of Hammerstein systems by hybridization of the genetic
algorithm (GA) and particle swarm optimization (PSO). An unknown
nonlinear static part to be estimated is approximately represented
by an automatic choosing function (ACF) model. The weighting
parameters of the ACF and the system parameters of the linear
dynamic part are estimated by the linear least-squares method. On
the other hand, the adjusting parameters of the ACF model structure
are properly selected by the hybrid algorithm of the GA and PSO,
where the Akaike information criterion is utilized as the evaluation
value function. Simulation results are shown to demonstrate the
effectiveness of the proposed hybrid algorithm.
Abstract: This paper presents a computational methodology
based on matrix operations for a computer based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to rank various non-homogeneous Poisson process software reliability
models proposed during the past 30 years to estimate software
reliability measures such as the number of remaining faults, software
failure rate, and software reliability. Selection of optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with
high level of confidence as they use a limited number of model
selection criteria. A real data set from a middle-sized software project, taken from published papers, has been used to demonstrate the matrix method. The result of this study is a ranking of SRMs based on the permanent value of the criteria matrix formed for each model from the comparison criteria. The software reliability model with the highest permanent value is ranked number 1, and so on.
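The ranking device is the matrix permanent, which sums the same products as the determinant but without sign alternation; a naive O(n!) evaluation is adequate for small criteria matrices (the 2x2 scores below are made-up illustrations, not the paper's seven criteria):

```python
from itertools import permutations

def permanent(M):
    """Permanent of a square matrix: determinant-like expansion with
    every permutation term added (no alternating signs)."""
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        prod = 1.0
        for i, j in enumerate(p):
            prod *= M[i][j]
        total += prod
    return total

# hypothetical normalized criteria scores for two models; the model
# whose criteria matrix has the larger permanent ranks higher
m1 = [[0.9, 0.2], [0.3, 0.8]]
m2 = [[0.6, 0.5], [0.5, 0.6]]
print(permanent(m1), permanent(m2))  # 0.78 0.61
```

For a 7x7 criteria matrix the 7! = 5040 permutation terms are still cheap; larger matrices would need Ryser's formula.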
Abstract: Zero-inflated models are usually used to model count data with excess zeros, where the excess zeros may be structural zeros or zeros that occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences, including sexual health and dental epidemiology. The most popular zero-inflated models used by many researchers are the zero-inflated Poisson and zero-inflated negative binomial models. In addition, zero-inflated generalized Poisson and zero-inflated double Poisson models are also discussed in some of the literature. Recently, the zero-inflated inverse trinomial and zero-inflated strict arcsine models have been advocated and shown to serve as alternative models for overdispersed count data caused by excessive zeros and unobserved heterogeneity. The purpose of this paper is to review the related literature and provide a variety of examples of the application of zero-inflated models across disciplines. Different model selection methods used in model comparison are also discussed.
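The zero-inflated Poisson, the most popular of these models, mixes a point mass at zero with an ordinary Poisson; a minimal sketch (the parameter values are arbitrary):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: with probability pi the count is a
    structural zero; otherwise it is Poisson(lam). Zeros can therefore
    come from either component, producing the excess-zero pattern."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

# P(0) is inflated relative to a plain Poisson with the same rate
print(zip_pmf(0, lam=2.0, pi=0.3))   # 0.3 + 0.7*e^-2 ~ 0.395
print(math.exp(-2.0))                # plain Poisson P(0) ~ 0.135
```

Replacing the Poisson component with a negative binomial, generalized Poisson, etc. yields the other variants the abstract lists.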
Abstract: Feature and model selection are at the center of attention of many research efforts because of their impact on classifier performance. The two selections are usually performed separately, but recent developments suggest using a combined GA-SVM approach to perform them simultaneously. This approach improves the performance of the classifier by identifying the best subset of variables and the optimal parameter values. Although GA-SVM is an effective method, it is computationally expensive, so a cheaper alternative is worth considering. This paper investigates a joint approach of a genetic algorithm and kernel matrix criteria to perform feature and model selection simultaneously for the SVM classification problem. The purpose of this research is to improve the classification performance of SVM through an efficient approach, the Kernel Matrix Genetic Algorithm (KMGA) method.
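One cheap kernel-matrix criterion such a GA could score instead of a full SVM fit is kernel-target alignment (Cristianini et al.); this sketch is an assumption about the kind of criterion meant, not the paper's exact KMGA fitness function:

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Cosine similarity between kernel matrix K and the ideal kernel
    y y^T for labels y in {-1, +1}: a fast proxy for kernel quality
    that needs no SVM training."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

# toy RBF kernel on two well-separated 2-D clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (10, 2)), rng.normal(2, 0.3, (10, 2))])
y = np.array([-1] * 10 + [1] * 10)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
K = np.exp(-0.5 * sq)                                  # RBF kernel matrix
a = kernel_target_alignment(K, y)
print(round(a, 3))  # high: near the 1/sqrt(2) cap for non-negative kernels
```

A GA individual encoding a feature subset and kernel parameters can thus be evaluated from the kernel matrix alone, which is far cheaper than cross-validating an SVM per individual.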
Abstract: This article illustrates a model selection management approach for virtual prototypes in interactive simulations. In these numerical simulations, the virtual prototype and its environment are modelled as a multiagent system, where every entity (prototype, human, etc.) is modelled as an agent. In particular, virtual prototyping agents that provide mathematical models of mechanical behaviour in the form of computational methods are considered. This work argues that the selection of an appropriate model in a changing environment, supported by the models' characteristics, can be managed by determining a priori specific exploitation and performance measures of the virtual prototype models. As different models exist to represent a single phenomenon, it is not always possible to select the best one under all possible circumstances of the environment; instead, the most appropriate one should be selected according to the use case. The proposed approach consists in identifying relevant metrics or indicators for each group of models (e.g. entity models, global model), formulating their qualification, analysing their performance, and applying the qualification criteria. A model can then be selected based on the performance prediction obtained from its qualification. The authors hope that this approach will not only inform engineers and researchers about another approach to selecting virtual prototype models, but also assist virtual prototype engineers in systematic or automatic model selection.