Abstract: High ground-level ozone (GLO) concentrations adversely affect human health, vegetation, and ecosystem activities. In Malaysia, most analyses of GLO concentration are carried out using its average value, which describes the centre of the distribution, to make predictions or estimates. Analyses that focus on high or extreme values of GLO concentration, however, are rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding member (Gumbel, Weibull, or Frechet) of the GEV family. The results show that the Weibull distribution, known as a short-tailed distribution and considered to exhibit less extreme behaviour, is the best-fitting distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, considered a medium-tailed distribution, is the best fit for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the rate of increase depends on the type of tail of the fitted GEV distribution. We use the maximum likelihood estimation (MLE) method to estimate the parameters at the selected stations in Peninsular Malaysia. The fit of the block maxima series to the GEV distribution is then validated using probability plots, quantile plots, and the likelihood ratio test, and profile likelihood confidence intervals are used to verify the type of GEV distribution. These results serve as a guide for early notification of future extreme ozone events.
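The block-maxima GEV fit and return-level computation described in this abstract can be sketched with SciPy; the synthetic "ozone" series and all parameter values below are illustrative assumptions, not the study's station data.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Hypothetical stand-in for a station's ozone record: yearly block
# maxima of a synthetic daily-maximum series (30 "years").
daily = rng.gumbel(loc=50.0, scale=8.0, size=(30, 365))
block_maxima = daily.max(axis=1)

# Fit the GEV by maximum likelihood. SciPy's shape parameter c is the
# negative of the usual GEV xi: c > 0 -> Weibull (short tail),
# c = 0 -> Gumbel (medium tail), c < 0 -> Frechet (heavy tail).
c, loc, scale = genextreme.fit(block_maxima)

def return_level(period_years, c, loc, scale):
    """Level exceeded on average once per `period_years` blocks."""
    p = 1.0 - 1.0 / period_years
    return genextreme.ppf(p, c, loc=loc, scale=scale)

levels = [return_level(T, c, loc, scale) for T in (10, 25, 50, 100)]
```

As in the abstract, the return level grows with the return period, and the growth rate is governed by the fitted tail type.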
Abstract: Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, drawing on a recent econophysics finding about a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs, by introducing maximum likelihood estimation for censored data. A procedure for optimal stock determination that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows that the methods effectively reduce food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss yields substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around a 1% profit loss halves disposal at a proportionality constant of 0.12, the actual value for the processed food items used in this research. The methods thus provide practical and effective solutions for reducing waste while keeping a high profit, especially for large sales numbers.
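A minimal illustration of the Taylor's-law ingredient of this method, assuming the large-sales fluctuation form variance ≈ (0.12·mean)² plus a Poisson-like term; the sales series is synthetic and every constant below is an assumption, not the paper's POS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Taylor's law (fluctuation scaling): across items, the variance of
# daily sales grows roughly as var ≈ a * mean^2 + mean, so the
# log-log slope crosses over from 1 (Poisson-like) toward 2.
means = np.logspace(0, 4, 40)      # item-level mean sales (synthetic)
a_true = 0.12 ** 2                 # assumed Taylor constant of 0.12
n_days = 200
sales = np.array([
    rng.normal(m, np.sqrt(a_true * m**2 + m), size=n_days)
    for m in means
])

m_hat = sales.mean(axis=1)
v_hat = sales.var(axis=1)

# Log-log least squares estimates the effective scaling exponent.
slope, intercept = np.polyfit(np.log(m_hat), np.log(v_hat), 1)
```

For large sales numbers the quadratic term dominates, which is the regime the paper targets.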
Abstract: This paper considers the modelling of a non-stationary
bivariate integer-valued autoregressive moving average of order
one (BINARMA(1,1)) with correlated Poisson innovations. The
BINARMA(1,1) model is specified using the binomial thinning
operator and by assuming that the cross-correlation between the
two series is induced by the innovation terms only. Based on
these assumptions, the non-stationary marginal and joint moments
of the BINARMA(1,1) are derived iteratively by using some initial
stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution
properties. The forecasting equations of the BINARMA(1,1) model
are also derived. In a simulation study, BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through this simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model
is then used to analyse real-life accident data on the motorway in Mauritius, based on some covariates: policemen, daily patrols, speed
cameras, traffic lights and roundabouts. The BINARMA(1,1) model
is applied on the accident data and the CML estimates clearly indicate
a significant impact of the covariates on the number of accidents on
the motorway in Mauritius. The forecasting equations also provide
reliable one-step ahead forecasts.
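The binomial thinning operator underlying the BINARMA(1,1) specification can be sketched on its simplest building block, a univariate INAR(1) with Poisson innovations; the parameter values are illustrative, not estimates from the accident data.

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(x, alpha, rng):
    """Binomial thinning: alpha ∘ x ~ Binomial(x, alpha)."""
    return rng.binomial(x, alpha)

# Minimal univariate INAR(1): X_t = alpha ∘ X_{t-1} + Poisson(lam).
alpha, lam, n = 0.5, 3.0, 20000
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam)
for t in range(1, n):
    x[t] = thin(x[t - 1], alpha, rng) + rng.poisson(lam)

# Stationary mean of this INAR(1) is lam / (1 - alpha) = 6.
emp_mean = x.mean()
```

The bivariate BINARMA(1,1) of the paper adds a moving-average term and correlated Poisson innovations on top of this thinning recursion.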
Abstract: Communication signal modulation recognition
technology is one of the key technologies in the field of modern
information warfare. At present, communication signal automatic
modulation recognition methods are mainly divided into two major
categories. One is the maximum likelihood hypothesis testing method
based on decision theory, the other is a statistical pattern recognition
method based on feature extraction. Now, the most commonly used
is a statistical pattern recognition method, which includes feature
extraction and classifier design. With the increasingly complex
electromagnetic environment of communications, how to effectively
extract the features of various signals at low signal-to-noise ratio
(SNR) is a hot topic for scholars in various countries. To solve this
problem, this paper proposes a feature extraction algorithm for the
communication signal based on the improved Holder cloud feature.
An extreme learning machine (ELM) is then used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment, and uses the improved cloud model to obtain more stable Holder cloud features, thereby improving performance. This addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient feature alone struggles with recognition at low SNR, and it also achieves better recognition accuracy. Simulation results show that the proposed approach still classifies well at low SNR; even at an SNR of -15 dB, the recognition accuracy reaches 76%.
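The ELM classification stage can be sketched as follows; the random features here stand in for the Holder cloud features (which are not reproduced), so this is only a structural illustration of the classifier, not of the paper's feature extractor.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal extreme learning machine (ELM): a random, untrained hidden
# layer, with output weights solved in one shot by least squares.
n, d, hidden = 400, 2, 50
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic separable labels

W = rng.normal(size=(d, hidden))   # random input weights (never trained)
b = rng.normal(size=hidden)        # random biases
H = np.tanh(X @ W + b)             # random nonlinear feature map
beta = np.linalg.pinv(H) @ y       # closed-form output weights

pred = (H @ beta > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The absence of iterative training for the hidden layer is what gives the ELM its real-time appeal in this setting.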
Abstract: The statistical modelling of precipitation data for a
given portion of territory is fundamental for the monitoring of
climatic conditions and for Hydrogeological Management Plans
(HMP). This modelling is rendered particularly complex by the
changes taking place in the frequency and intensity of precipitation,
presumably to be attributed to the global climate change. This paper
applies the Wakeby distribution (with 5 parameters) as a theoretical
reference model. The number and the quality of the parameters
indicate that this distribution may be the appropriate choice for
the interpolations of the hydrological variables and, moreover, the
Wakeby is particularly suitable for describing phenomena producing
heavy tails. The proposed estimation methods for determining the
value of the Wakeby parameters are the same as those used for
density functions with heavy tails. The commonly used procedure is the classic method of probability-weighted moments (PWM), although this has often shown difficulty of convergence, or rather, convergence to an inappropriate parameter configuration. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in the more usual estimation settings. The motivation lies in the sampling and asymptotic properties of maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly valued in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
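Since the Wakeby distribution is defined through its quantile function, a minimal sketch of that function shows the heavy-tail behaviour the abstract refers to; the parameter values below are illustrative, not fitted to any precipitation record.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function x(F); delta > 0 gives a heavy upper tail.

    x(F) = xi + (alpha/beta)(1 - (1-F)^beta) - (gamma/delta)(1 - (1-F)^(-delta))
    """
    u = 1.0 - F
    return (xi
            + (alpha / beta) * (1.0 - u**beta)
            - (gamma / delta) * (1.0 - u**(-delta)))

# Illustrative five-parameter configuration.
params = dict(xi=0.0, alpha=5.0, beta=2.0, gamma=1.0, delta=0.3)
F = np.linspace(0.01, 0.999, 500)
x = wakeby_quantile(F, **params)
```

Because only the quantile function is available in closed form, the likelihood must be evaluated through it, which is exactly the estimation difficulty the paper addresses.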
Abstract: Water lily (Nymphaea L.) is the largest genus of Nymphaeaceae. This family comprises six genera (Nuphar, Ondinea, Euryale, Victoria, Barclaya, Nymphaea), whose members occur nearly worldwide in tropical and temperate regions. The classification of some species in Nymphaea is ambiguous due to high variation in leaf and flower parts, such as the leaf margin and stamen appendage. Therefore, phylogenetic relationships based on 18S rDNA were reconstructed to delimit this genus. DNA from 52 specimens belonging to the water lily family was extracted using a modified conventional method containing cetyltrimethyl ammonium bromide (CTAB). The results showed that the amplified fragment is about 1600 base pairs in size. After analysis, the aligned sequences presented 9.36% variable characters, comprising 2.66% parsimony-informative sites and 6.70% singleton sites. Moreover, there are six insertion/deletion regions of 1-2 bases. Phylogenetic trees based on maximum parsimony and maximum likelihood, with high bootstrap support, indicated that the genus Nymphaea is paraphyletic because it is disrupted by Ondinea, Victoria, and Euryale. Within Nymphaea, subgenus Nymphaea is the basal lineage, grouping with Euryale and Victoria. The other four subgenera, namely Lotos, Hydrocallis, Brachyceras, and Anecphya, fell within the same large clade, in which Ondinea was placed within the Anecphya clade owing to geographical sharing.
Abstract: In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear single-input single-output system can be modeled with a shallow neural network. Gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the learning objective is mathematically equivalent to maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient momentum contribute to faster convergence and enhance model capability. Lastly, experimental results demonstrate the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
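The equivalence noted above, that least squares via the normal equation is maximum likelihood under Gaussian noise, can be sketched on a static linear model; the coefficients and dimensions are illustrative, not taken from any measurement system.

```python
import numpy as np

rng = np.random.default_rng(3)

# For y = X @ theta + Gaussian noise, minimizing squared error is the
# same as maximizing the Gaussian likelihood, and the minimizer has
# the closed-form normal-equation solution.
n, d = 500, 3
X = rng.normal(size=(n, d))
theta_true = np.array([1.5, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.1, size=n)

# Normal equation: theta_hat = (X^T X)^{-1} X^T y,
# solved here without forming the explicit inverse.
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

This closed-form solve is why the normal equation outperforms iterative gradient descent for linear dynamic models in the paper's experiments.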
Abstract: The Wiener and Lévy driven processes are well-known self-standing Gaussian-Markov processes for fitting the non-linear dynamical Vasicek model. In this paper, a common Gaussian density stationarity condition and autocorrelation function for the two driven processes are established. This leads to a conflation of the Wiener and Lévy processes, used to investigate the efficiency of the estimates incorporated into the one-dimensional Vasicek model, which is estimated via the Maximum Likelihood (ML) technique. The conditional laws of drift, diffusion, and stationarity are ascertained for the individual Wiener and Lévy processes, as well as for the combination of the two, for a fixed-effect and autoregressive-like Vasicek model applied to a financial series: the Naira-CFA Franc exchange rate. In addition, the model performance error of the merged driven process is small compared to that of the self-standing Wiener and Lévy driven processes.
Abstract: Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for the identification of specific plants such as cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different successional stages make vegetation classification difficult with traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees and annuals or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixed distribution model approach is applied to classify pure and mixed cannabis parcels using Worldview-2 imagery in the Lakes region of Turkey. Five land-use types, including sunflower, maize, bare soil, and cannabis, were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from spectral signatures of seven bands (Blue, Green, Yellow, Red, Red-edge, NIR1, NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixed distribution model approach proved to be an effective and convenient way to exploit very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall classification accuracies, the Gaussian mixed distribution model was found to be very successful at the image classification task. The approach is sensitive enough to capture illegal cannabis planting areas in the large plain, and it can also be used for monitoring and detecting illegal cannabis planting areas via their spectral reflectance.
Abstract: The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of proposed models that optimize life-cycle analysis into an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties, so a composite structure model (CSM) is taken up as an environmental management system (EMS) for the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how a natural system's input, output, and outcome modes influence the framework measures, and gives a maximum likelihood estimate of how elements are simulated over the composite structure. Traditional modeling knowledge is based on physical dynamic and static patterns of the parameters that influence the environment. The model unifies methods to demonstrate how construction systems ecology is interrelated from a management perspective, in a procedure that reflects the effects of engineering systems on ecology as ultimately unified technologies whose range extends beyond the impact of construction, for example to energy systems. Sustainability broadens the socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can precisely address policy for accomplishing strategic plans. The management and engineering perspective focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions, or aggregates structural systems toward equilibrium as steady, stable conditions. Thereby, construction systems ecology incorporates an engineering and management scheme, as a midpoint between biotic and abiotic components, to predict the impact of construction. The resulting theory of environmental obligation suggests procedures, methods, or techniques for achieving the sustainability impact of construction system ecology (SICSE), ultimately as a relative mitigation measure of deviation control.
Abstract: This work is devoted to the study of modeling
geophysical time series. A stochastic technique with time-varying
parameters is used to forecast the volatility of data arising in
geophysics. In this study, the volatility is defined as a logarithmic
first-order autoregressive process. We observe that including the log-volatility in the time-varying parameter estimation significantly improves the forecasts, which are obtained via maximum likelihood estimation. This allows us to conclude that the estimation algorithm for the corresponding one-step-ahead volatility forecasts (with ±2 standard prediction errors) is very feasible, since it possesses good convergence properties.
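The logarithmic first-order autoregressive volatility described above has the standard stochastic-volatility form; a minimal simulation sketch, with illustrative parameter values rather than values fitted to geophysical data, is:

```python
import numpy as np

rng = np.random.default_rng(5)

# Log-volatility as an AR(1) process:
#   h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t
#   y_t = exp(h_t / 2) * eps_t
mu, phi, sigma_eta, n = -1.0, 0.95, 0.2, 50000
h = np.empty(n)
h[0] = mu
for t in range(1, n):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=n)

# The stationary mean of h is mu, so the simulated log-volatility
# fluctuates around that level.
emp_mean_h = h.mean()
```

In the paper, the parameters (mu, phi, sigma_eta) would be time-varying and estimated by maximum likelihood rather than fixed as here.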
Abstract: In this paper, we propose a method to model the
relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function
depends only on the degradation value. No assumptions are made
about the distribution of the failure times. A simple step-stress test
is used to shorten failure time of products and a tampered failure
rate (TFR) model is proposed to describe the effect of the changing
stress on the intensities. We assume that some of the products that
fail during the test have a cause of failure that is only known to
belong to a certain subset of all possible failures. This case is known
as masking. In the presence of masking, the maximum likelihood
estimates (MLEs) of the model parameters are obtained through an
expectation-maximization (EM) algorithm by treating the causes of
failure as missing values. The effect of incomplete information on the
estimation of parameters is studied through a Monte-Carlo simulation.
Finally, a real example is analyzed to illustrate the application of the
proposed methods.
Abstract: The purpose of this study is to provide an improved mode choice model with parameters that include age groups for prime-aged and older travellers. In this study, 2010 Household Travel Survey data were used, and invalid samples were removed during the analysis. The chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time were taken from the Household Travel Survey. By preprocessing the data, travel time, travel cost, mode, and the ratios of people aged 45 to 55 years, 55 to 65 years, and over 65 years were calculated. After this manipulation, the mode choice model was constructed in LIMDEP by maximum likelihood estimation. A significance test was conducted for nine parameters: three age groups for each of three modes. The test was then conducted again for the mode choice model with the significant parameters, the travel cost variable, and the travel time variable. The model estimation shows that, as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
Abstract: This paper compares the estimation of the mean of a normal distribution by Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the sample mean; the Bayes estimator is derived from the prior distribution; and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After estimation, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variances of 4, 9, and 16, with sample sizes of 10, 20, 30, and 50. The results show that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 and the variance is 16. Furthermore, the Bayes estimator, computed from a prior with mean 1 and variance 12, shows a significant difference in the mean with variance 9 at sample sizes 10 and 20.
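The ML and conjugate-Bayes estimators compared in this abstract can be sketched for the known-variance case; the prior mean of 1 follows the abstract, while the prior variance, data variance, and sample size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(9)

# Normal mean with known variance: the ML estimate is the sample
# mean; with a normal prior N(mu0, tau0^2), the posterior mean is a
# precision-weighted average of the prior mean and the sample mean.
true_mu, sigma2, n = 2.0, 9.0, 20
x = rng.normal(true_mu, np.sqrt(sigma2), size=n)

mle = x.mean()

mu0, tau0_sq = 1.0, 1.0   # prior mean 1 (as in the abstract), illustrative prior variance
post_prec = 1.0 / tau0_sq + n / sigma2
bayes = (mu0 / tau0_sq + n * x.mean() / sigma2) / post_prec
```

The Bayes estimate is shrunk from the sample mean toward the prior mean, which is the source of the systematic differences the abstract reports at small sample sizes.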
Abstract: Stochastic modeling concerns the use of probability
to model real-world situations in which uncertainty is present.
Therefore, the purpose of stochastic modeling is to estimate the
probability of outcomes within a forecast, i.e. to be able to predict
what conditions or decisions might happen under different situations.
In the present study, we present a model of a stochastic diffusion
process based on the bi-Weibull distribution function (its trend
is proportional to the bi-Weibull probability density function). In
general, the Weibull distribution has the ability to assume the
characteristics of many different types of distributions. This has
made it very popular among engineers and quality practitioners, who
have considered it the most commonly used distribution for studying
problems such as modeling reliability data, accelerated life testing,
and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference of this model using maximum likelihood methodology. Finally, using convergence analysis methods, we examine with simulated data the computational problems associated with the parameters, an issue of great importance for applications to real data. Overall,
the use of a stochastic model reflects only a pragmatic decision on
the part of the modeler. According to the data that is available and
the universe of models known to the modeler, this model represents
the best currently available description of the phenomenon under
consideration.
Abstract: The purpose of this article is to find a method
of comparing designs for ordinal regression models using
quantile dispersion graphs in the presence of linear predictor
misspecification. The true relationship between the response variable and the corresponding control variables is usually unknown. The experimenter assumes a certain form of the linear predictor of the ordinal regression model, and the assumed form may not always be correct. Thus, the maximum likelihood estimates (MLE) of the unknown parameters of the model may be biased due to misspecification of the linear predictor. In this article, the uncertainty
in the linear predictor is represented by an unknown function. An
algorithm is provided to estimate the unknown function at the
design points where observations are available. The unknown function
is estimated at all points in the design region using multivariate
parametric kriging. The comparison of the designs is based on a scalar-valued function of the mean squared error of prediction
(MSEP) matrix, which incorporates both variance and bias of the
prediction caused by the misspecification in the linear predictor. The
designs are compared using the quantile dispersion graphs approach. The graphs also visually depict the robustness of the designs to changes in the parameter values. Numerical examples are presented
to illustrate the proposed methodology.
Abstract: In recent decades, rapid and ill-considered changes in land-use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land-use is one of the tools for natural resource management and for assessing changes in ecosystems. The aim of this research is to study land-use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. The quality of the images was first evaluated, and various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. The images of each period were then classified into nine classes using the supervised classification method with the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results show that these changes are an alarm signal for the future status of the Haraz basin: 27% of the area has changed, mainly through conversion of rangeland to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland, and residential areas.
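The supervised maximum likelihood classification step can be sketched with per-class Gaussians fitted to training pixels; the band values below are synthetic stand-ins, not LANDSAT data, and the class count is reduced for brevity.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic "training pixels": 3 classes, 4 spectral bands.
n_per_class, n_bands = 300, 4
means = np.array([[0.2, 0.4, 0.3, 0.6],
                  [0.5, 0.2, 0.6, 0.1],
                  [0.8, 0.7, 0.1, 0.4]])
X = np.vstack([rng.normal(m, 0.05, size=(n_per_class, n_bands)) for m in means])
y = np.repeat(np.arange(3), n_per_class)

def fit_ml(X, y):
    """Fit a mean vector and covariance per class."""
    stats = []
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc, rowvar=False)
        stats.append((Xc.mean(axis=0), np.linalg.inv(cov),
                      np.linalg.slogdet(cov)[1]))
    return stats

def predict_ml(X, stats):
    """Assign each pixel to the class maximizing the Gaussian log-likelihood."""
    scores = []
    for mu, inv_cov, logdet in stats:
        d = X - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv_cov, d) + logdet))
    return np.argmax(np.column_stack(scores), axis=1)

stats = fit_ml(X, y)
accuracy = (predict_ml(X, stats) == y).mean()
```

A real run would fit one Gaussian per land-use class from the training sites of each image and classify every pixel, as in the study.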
Abstract: This research provides a technical account of estimating transition probabilities using a time-homogeneous Markov jump process applied to South African HIV/AIDS data from Statistics South Africa. It employs a Maximum Likelihood Estimator (MLE) model to explore the possible influence of transition probabilities on mortality cases, with the data based on actual Statistics South Africa records. This was conducted via an integrated demographic and epidemiological model of the South African HIV/AIDS epidemic. The model was fitted to age-specific HIV prevalence data and recorded death data using the MLE model. Although previous model results suggest that HIV prevalence and AIDS mortality rates in South Africa declined between 2002 and 2013, our results differ evidently from the generally accepted HIV models (Spectrum/EPP and ASSA2008) in South Africa. However, supplementary research is needed to enhance the demographic parameters in the model and to apply it to each of the nine provinces of South Africa.
Abstract: We introduce a new model called the Marshall-Olkin Rayleigh distribution, which extends the Rayleigh distribution using the Marshall-Olkin transformation and has increasing and decreasing shapes for the hazard rate function. Various structural properties of the new distribution are derived, including explicit expressions for the moments, the generating and quantile functions, some entropy measures, and the order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The potential of the new model is illustrated by means of a simulation study.
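A sketch of the Marshall-Olkin Rayleigh CDF and its closed-form quantile function, assuming the standard Marshall-Olkin survival transformation S(x) = a·S_R(x) / (1 − (1−a)·S_R(x)) applied to the Rayleigh survival function; the parameter values are illustrative.

```python
import numpy as np

def mor_cdf(x, a, sigma):
    """Marshall-Olkin Rayleigh CDF, with S_R(x) = exp(-x^2 / (2 sigma^2))."""
    s = np.exp(-x**2 / (2 * sigma**2))
    return 1.0 - a * s / (1.0 - (1.0 - a) * s)

def mor_quantile(p, a, sigma):
    """Invert the CDF in closed form via the Rayleigh survival value."""
    s = (1.0 - p) / (a + (1.0 - p) * (1.0 - a))
    return sigma * np.sqrt(-2.0 * np.log(s))

a, sigma = 2.0, 1.5          # illustrative tilt and scale parameters
p = np.array([0.1, 0.5, 0.9])
q = mor_quantile(p, a, sigma)
roundtrip = mor_cdf(q, a, sigma)   # recovers p exactly
```

The closed-form quantile function is what makes random variate generation and the order-statistics expressions in the paper tractable.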
Abstract: Multiple-input multiple-output (MIMO) radar has
received increasing attention in recent years. MIMO radar has many
advantages over conventional phased-array radar, such as improved target detection, resolution enhancement, and interference suppression. In this paper, results are presented from a simulation study of MIMO
uniformly-spaced linear array (ULA) antennas. The performance is
investigated under varied parameters, including varied array size,
pseudo random (PN) sequence length, number of snapshots, and
signal-to-noise ratio (SNR). The MIMO results are compared to those of a traditional array antenna.