Abstract: Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
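The core computation behind such a chart can be illustrated compactly. The sketch below (not the authors' implementation) evaluates the beta-binomial posterior predictive pmf and a discrete HPD-style control region using only the standard library; the Beta(13, 189) posterior, batch size n = 50, and 95% coverage are hypothetical values chosen for illustration:

```python
from math import lgamma, exp, comb

def log_beta(a, b):
    # log of the Beta function via log-gamma, for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(y, n, a, b):
    """P(Y = y | n, a, b): beta-binomial posterior predictive probability."""
    return comb(n, y) * exp(log_beta(y + a, n - y + b) - log_beta(a, b))

def hpd_set(n, a, b, coverage=0.95):
    """Smallest set of outcomes (highest pmf first) whose mass >= coverage."""
    ranked = sorted(((beta_binomial_pmf(y, n, a, b), y) for y in range(n + 1)),
                    reverse=True)
    total, kept = 0.0, []
    for p, y in ranked:
        kept.append(y)
        total += p
        if total >= coverage:
            break
    return min(kept), max(kept)

# Hypothetical example: Beta(1, 1) prior updated with 12 events in 200 past
# cases gives a Beta(13, 189) posterior; HPD limits for the next 50 cases:
lo, hi = hpd_set(50, 13.0, 189.0)
```

A batch whose event count falls outside `[lo, hi]` would then signal a possible departure from the underlying CI rate.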
Abstract: In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model to best match a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model so that it is better fitted to a task. Secondly, the number of independent functional units in the capsule network is adjusted to fit it to the training dataset. Thirdly, a method based on Bayesian GAN is proposed to enrich the variance in the current dataset and thereby increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor that determines the capacity of a network model. By adjusting the number of functional units, the capacity of a model can be made to better match the complexity of a dataset.
Abstract: Knee collateral ligaments play a significant role in restraining excessive frontal motion (varus/valgus rotations). In this investigation, a multiscale framework was developed based on the structural hierarchies of the collateral ligaments, from the bottom (the tropocollagen molecule) up to the fibre-reinforced structure. Experimental data from tensile failure tests were the principal driver of the developed model. Because of the large number of unknown parameters, the model was calibrated statistically using Bayesian calibration. The model was then scaled up to fit the real structure of the collateral ligaments and simulated under realistic boundary conditions. Predictions have been successful in describing the observed transient response of the collateral ligaments during tensile testing under pre- and post-damage loading conditions. The maximum stresses and strengths of the collateral ligaments were observed near the femoral insertions, a result that is in good agreement with experimental investigations. Also, for the first time, damage initiation and propagation were documented with this model as a function of the cross-link density between tropocollagen molecules.
Abstract: The exchange rate is a pivotal pricing instrument that simultaneously impacts various components of the economy. Depreciation of the nominal exchange rate promotes exports, which might be a desired export-led growth policy, and is particularly critical to closing the widening current account imbalance. However, negative effects resulting from high dollarization and a high share of imported intermediate inputs can outweigh the positive effect. The aim of this research is to quantify the impact of changes in the nominal exchange rate and to test the contractionary depreciation hypothesis for the Georgian economy using structural and Bayesian vector autoregression. According to the results, appreciation of the nominal exchange rate is expected to decrease inflation, the monetary policy rate, the interest rate on domestic currency loans and economic growth in the medium run; however, the impact on economic growth in the short run is not statistically significant.
Abstract: Many supervised machine learning tasks require
decision making across numerous different classes. Multi-class
classification has several applications, such as face recognition, text
recognition and medical diagnostics. The objective of this article is
to analyze an adapted method of Stacking in multi-class problems,
which combines ensembles within the ensemble itself. For this
purpose, a training similar to Stacking was used, but with three
levels, where the final decision-maker (level 2) performs its training
by combining outputs from pairs of meta-classifiers
(level 1) drawn from the tree-based and Bayesian families. These are in turn trained by pairs
of base classifiers (level 0) of the same family. This strategy seeks to
promote diversity among the ensembles forming the meta-classifier
level 2. Three performance measures were used: (1) accuracy, (2)
area under the ROC curve, and (3) time for three factors: (a)
datasets, (b) experiments and (c) levels. To compare the factors,
a three-way ANOVA test was executed for each performance measure,
considering 5 datasets by 25 experiments by 3 levels. A triple
interaction between factors was observed only for time. The accuracy
and area under the ROC curve presented similar results, showing
a double interaction between level and experiment, as well as for
the dataset factor. It was concluded that level 2 had an average
performance above the other levels and that the proposed method
is especially efficient for multi-class problems when compared to
binary problems.
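The level-on-level construction can be sketched in a few lines: member predictions at one level become the training features of the next. The stdlib-only sketch below is a compressed stand-in, not the paper's setup; the nearest-centroid classifiers, the toy 3-class dataset, and the reuse of one level-1 model as a pair are all illustrative assumptions:

```python
from statistics import mean

def nearest_centroid(X, y):
    """Toy stand-in classifier: per-class centroids, predict the nearest one."""
    cents = {c: [mean(x[i] for x, t in zip(X, y) if t == c)
                 for i in range(len(X[0]))] for c in sorted(set(y))}
    return lambda x: min(cents, key=lambda c: sum((a - b) ** 2
                                                  for a, b in zip(x, cents[c])))

def stack_level(X, y, members):
    """One stacking level: member label outputs become the next level's features."""
    meta_X = [[m(x) for m in members] for x in X]
    meta = nearest_centroid(meta_X, y)
    return lambda x: meta([m(x) for m in members])

# Toy 3-class dataset (hypothetical stand-in for the paper's datasets)
X = [[0, 0], [0, 1], [5, 5], [5, 6], [9, 0], [9, 1]]
y = [0, 0, 1, 1, 2, 2]

def feature_view(f):
    # Level-0 base classifier that only sees feature f (promotes diversity)
    clf = nearest_centroid([[r[f]] for r in X], y)
    return lambda x: clf([x[f]])

base = [feature_view(0), feature_view(1)]        # level 0: pair of base models
level1 = stack_level(X, y, base)                  # level 1: meta-classifier
level2 = stack_level(X, y, [level1, level1])      # level 2: final decision-maker

acc = mean(level2(x) == t for x, t in zip(X, y))
```

Even when one base classifier is weak (the feature-1 view confuses two classes here), the higher levels can recover the correct labels from the combined outputs.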
Abstract: In this paper, a variable multiple dependent state (MDS) sampling plan is developed based on the process capability index using a Bayesian approach. The optimal parameters of the developed sampling plan, subject to constraints on the consumer's and producer's risks, are presented. Two comparison studies have been carried out. First, the double sampling model, the sampling plan for resubmitted lots and the repetitive group sampling (RGS) plan are elaborated, and the average sample numbers of the developed MDS plan and these classical methods are compared. Second, a comparison between the developed MDS plan based on the Bayesian approach and that based on the exact probability distribution is carried out.
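The risk constraints that such plans must satisfy are evaluated from the operating characteristic (OC) curve. As a minimal illustration (a plain single-sampling OC curve, not the MDS plan itself; the sample size n = 80, acceptance number c = 2, and quality levels are hypothetical):

```python
from math import comb

def accept_prob(p, n, c):
    """Binomial OC curve: probability a lot with defect rate p is accepted
    when at most c defectives are allowed in a sample of size n."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plan: sample n = 80 items, accept if at most c = 2 defectives.
# Producer's risk at acceptable quality p = 0.01; consumer's risk at p = 0.08.
producers_risk = 1 - accept_prob(0.01, 80, 2)
consumers_risk = accept_prob(0.08, 80, 2)
```

An optimal plan searches over (n, c) and the plan-specific parameters until both risks fall below their specified bounds while minimizing the average sample number.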
Abstract: Nowadays, websites provide a vast number of resources for users. Recommender systems have been developed as an essential element of these websites to provide a personalized environment for users. They help users retrieve resources of interest from large sets of available resources. Due to the dynamic nature of user preferences, constructing an appropriate model to estimate user preference is the major task of recommender systems. Profile matching and latent factors are two main approaches to identifying user preferences. In this paper, we employed latent factors and profile matching to cluster the user profile and identify user preferences, respectively. The method uses the Distance Dependent Chinese Restaurant Process as a Bayesian nonparametric framework to extract the latent factors from the user profile. These latent factors are mapped to user interests, and a weighted distribution is used to identify user preferences. We evaluate the proposed method using a real-world dataset that contains news tweets of a news agency (BBC). The experimental results and comparisons show the superior recommendation accuracy of the proposed approach relative to existing methods, and its ability to effectively evolve over time.
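The generative step of the distance-dependent Chinese Restaurant Process is simple to state: each item links to another with probability proportional to a decay of their distance, or to itself with probability proportional to a concentration parameter, and clusters are the connected components of the link graph. A minimal sketch (toy distance matrix and parameter values are our assumptions):

```python
import random
from math import exp

def ddcrp_links(distances, alpha=1.0, decay=lambda d: exp(-d)):
    """Sample links: i links to j with prob. proportional to decay(d_ij),
    and to itself with prob. proportional to alpha."""
    n = len(distances)
    links = []
    for i in range(n):
        weights = [decay(distances[i][j]) for j in range(n)]
        weights[i] = alpha
        links.append(random.choices(range(n), weights=weights)[0])
    return links

def clusters(links):
    """Connected components of the link graph give the latent clusters."""
    n = len(links)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in enumerate(links):
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

random.seed(0)
# Toy distance matrix: two well-separated groups of "user profile" items
d = [[0, 1, 9, 9], [1, 0, 9, 9], [9, 9, 0, 1], [9, 9, 1, 0]]
k = clusters(ddcrp_links(d, alpha=0.1))
```

In the full model this prior is combined with a likelihood over profile content, and posterior inference, rather than a single prior draw, determines the latent factors.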
Abstract: Electromyography (EMG) is one of the most important interfaces between humans and robots for rehabilitation. Decoding this signal helps to recognize muscle activation and convert it into smooth motion for the robots. Detecting each muscle's pattern during walking and running is vital for improving the quality of a patient's life. In this study, EMG data from 10 muscles in 10 subjects at 4 different speeds were analyzed. EMG signals are nonlinear with high dimensionality. To deal with this challenge, we extracted features in the time-frequency domain and used manifold learning with the Laplacian Eigenmaps algorithm to find the intrinsic features that represent the data in a low-dimensional space. We then used a Bayesian classifier to identify various patterns of EMG signals for different muscles across a range of running speeds. The best result, obtained for the vastus medialis muscle, was 97.87±0.69% sensitivity and 88.37±0.79% specificity, with 97.07±0.29% accuracy, using the Bayesian classifier. The results of this study provide important insight into human movement and its application to robotics research.
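A Bayesian classifier of this kind picks the class maximizing the posterior, i.e. log prior plus log likelihood of the extracted feature. The one-dimensional Gaussian sketch below uses hypothetical feature values standing in for a time-frequency EMG feature; it is an illustration of the decision rule, not the study's pipeline:

```python
from math import log, pi
from statistics import mean, pstdev

def gaussian_loglik(x, mu, sigma):
    # log density of N(mu, sigma^2) at x
    return -0.5 * log(2 * pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def fit(samples):
    return mean(samples), pstdev(samples)

def bayes_classify(x, params, priors):
    """Pick the class maximizing log prior + Gaussian log-likelihood."""
    return max(params, key=lambda c: log(priors[c]) + gaussian_loglik(x, *params[c]))

# Hypothetical 1-D feature values for two muscle states
active  = [0.9, 1.1, 1.0, 1.2, 0.8]    # muscle active
resting = [0.1, 0.2, 0.15, 0.05, 0.2]  # muscle at rest
params = {"active": fit(active), "resting": fit(resting)}
priors = {"active": 0.5, "resting": 0.5}

# Sensitivity / specificity over toy held-out samples
tp = sum(bayes_classify(x, params, priors) == "active" for x in [1.05, 0.95])
tn = sum(bayes_classify(x, params, priors) == "resting" for x in [0.12, 0.18])
sensitivity = tp / 2
specificity = tn / 2
```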
Abstract: Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an "Interaction Map," a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps combined with statistical data from accidents and the HAZOP classifications of events can be converted into a Bayesian network. Bayesian networks allow designers to explore "what if" scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
Abstract: We explore the relationship between internal migration
and poverty in Tunisia. We present a methodology combining a
potential outcomes approach with multiple imputation to highlight the
effect of internal migration on poverty status. We find that the probability
of being poor decreases when leaving the poorest regions (the western
areas) for the richer regions (greater Tunis and the eastern regions).
Abstract: In this paper, we present a model and an algorithm for
the calculation of the optimal control limit, average cost, sample size,
and the sampling interval for an optimal Bayesian chart to control
the proportion of defective items produced using a semi-Markov
decision process approach. The traditional p-chart has been widely
used for controlling the proportion of defectives in various kinds
of production processes for many years. It is well known that
traditional non-Bayesian charts are not optimal, but very few optimal
Bayesian control charts have been developed in the literature, mostly
considering a finite horizon. The objective of this paper is to develop
a fast computational algorithm to obtain the optimal parameters of a
Bayesian p-chart. The decision problem is formulated in the partially
observable framework and the developed algorithm is illustrated by
a numerical example.
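In the partially observable formulation, the chart's state is a belief about the unknown defect proportion, updated after each sample. A minimal conjugate sketch of that belief update (the Beta(1, 19) prior, sample results, and 5% belief-based limit below are hypothetical, and the real optimization over control limit, sample size and interval is not shown):

```python
def update_belief(a, b, defectives, n):
    """Conjugate Beta(a, b) update after observing `defectives` in a sample of n."""
    return a + defectives, b + n - defectives

def posterior_mean(a, b):
    return a / (a + b)

# Hypothetical monitoring run: prior Beta(1, 19), i.e. a 5% mean defect rate
a, b = 1.0, 19.0
for defectives, n in [(2, 50), (1, 50), (4, 50)]:
    a, b = update_belief(a, b, defectives, n)

signal = posterior_mean(a, b) > 0.05  # hypothetical control limit on the belief
```

The semi-Markov decision process then chooses the limit, sample size and sampling interval that minimize the long-run average cost of operating this belief-state chart.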
Abstract: In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of hidden variables and model parameters for the proposed model from training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method performs human action recognition more effectively than existing methods.
Abstract: Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained from the early projections, during which time the liver accumulation dominates (the 0.5-2.5 minute SPECT image minus the 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (the 5-10 minute SPECT image minus the liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and the visualization of the inferior myocardium was improved. In past reports, apparent high accumulation in the inferior myocardium caused by overlap with the liver was un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
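The arithmetic of the time-subtraction step can be sketched with toy voxel values (the 2-voxel "images", intensities, and clamping at zero are our assumptions, not the reconstruction pipeline):

```python
def subtract(a, b, scale=1.0):
    """Voxel-wise a - scale*b, clamped at zero (negative counts are unphysical)."""
    return [[max(x - scale * y, 0.0) for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Toy 1x2 "images": first voxel = liver region, second = inferior myocardium
early = [[10.0, 2.0]]   # 0.5-2.5 min frame: liver accumulation dominates
late  = [[6.0, 5.0]]    # 5-10 min frame: liver plus myocardial uptake

liver_only = subtract(early, late)       # approximates the liver signal alone
corrected  = subtract(late, liver_only)  # myocardium with liver suppressed
```

In the toy example the liver voxel drops from 6.0 to 2.0 while the myocardial voxel is preserved, mirroring how the method suppresses liver overlap without losing myocardial counts.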
Abstract: In this work, we present a Bayesian non-parametric
approach to model the motion control of ATVs. The motion control
model is based on a Dirichlet Process-Gaussian Process (DP-GP)
mixture model. The DP-GP mixture model provides a flexible
representation of patterns of control manoeuvres along trajectories
of different lengths and discretizations. The model also estimates the
number of patterns, sufficient for modeling the dynamics of the ATV.
Abstract: With the increasing complexity of cyberspace security, cyber-attack attribution has become an important challenge for security protection systems. The main difficulties of cyber-attack attribution lie in handling huge volumes of data and in coping with missing key data. In view of this situation, this paper presents a reasoning method for cyber-attack attribution based on threat intelligence. The method utilizes the intrusion kill chain model and a Bayesian network to build the attack chain and the evidence chain of a cyber-attack on a threat intelligence platform through data calculation, analysis and reasoning. We then used a number of cyber-attack events that we have observed and analyzed to test the reasoning method and a demo system; the test results indicate that the reasoning method can provide effective help in cyber-attack attribution.
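At its core, attribution reasoning of this kind scores candidate actors by Bayes' rule over the observed evidence. A minimal naive sketch (the actor names, priors, and evidence likelihoods are entirely hypothetical, and a real Bayesian network would model dependencies between evidence items rather than assuming independence):

```python
def posterior(priors, likelihoods, evidence):
    """Bayes' rule with naive independence: P(actor | evidence) is
    proportional to P(actor) * product of P(e | actor) over evidence e."""
    scores = {}
    for actor, prior in priors.items():
        p = prior
        for e in evidence:
            p *= likelihoods[actor].get(e, 1e-6)  # small floor for unseen evidence
        scores[actor] = p
    total = sum(scores.values())
    return {actor: s / total for actor, s in scores.items()}

# Hypothetical threat-intelligence priors and per-actor evidence likelihoods
priors = {"GroupA": 0.5, "GroupB": 0.5}
likelihoods = {
    "GroupA": {"spearphish": 0.8, "custom_rat": 0.6},
    "GroupB": {"spearphish": 0.3, "custom_rat": 0.1},
}
post = posterior(priors, likelihoods, ["spearphish", "custom_rat"])
```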
Abstract: In this paper, we discuss a Bayesian approach to
quantile autoregressive (QAR) time series model estimation and
forecasting. Together with a combining forecasts technique, we then
predict USD to GBP currency exchange rates. Combined forecasts
contain all the information captured by the fitted QAR models
at different quantile levels and are therefore better than those
obtained from individual models. Our results show that an unequally
weighted combining method performs better than other forecasting
methodologies. We found that a median AR model can perform well in
point forecasting when the predictive density functions are symmetric.
However, in practice, using the median AR model alone may involve
the loss of information about the data captured by other QAR models.
We recommend that combined forecasts should be used whenever
possible.
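The combining step itself is a weighted average of the individual QAR point forecasts. A minimal sketch (the quantile levels, forecast values, and unequal weights below are hypothetical, not the paper's fitted values):

```python
def combine_forecasts(forecasts, weights):
    """Unequally weighted linear combination of point forecasts produced by
    QAR models fitted at different quantile levels."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to one
    return sum(weights[q] * f for q, f in forecasts.items())

# Hypothetical one-step-ahead USD/GBP forecasts from QAR models at three quantiles
forecasts = {0.25: 0.790, 0.50: 0.802, 0.75: 0.815}
weights   = {0.25: 0.2,  0.50: 0.6,  0.75: 0.2}  # median model weighted most
combined = combine_forecasts(forecasts, weights)
```

In practice the weights would be chosen from past forecast performance, which is what makes the unequal weighting outperform a simple average.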
Abstract: The piecewise polynomial regression model is a very flexible model for modeling data. When a piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for the piecewise polynomial regression model. The method used to estimate the parameters of the piecewise polynomial regression model is the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the limiting distribution, namely the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.
Abstract: This paper presents a classifier ensemble approach for
predicting the survivability of the breast cancer patients using the
latest database version of the Surveillance, Epidemiology, and End
Results (SEER) Program of the National Cancer Institute. The system
consists of two main components: feature selection and classifier
ensemble components. The features selection component divides the
features in the SEER database into four groups. It then tries to find
the most important features among the four groups that maximizes the
weighted average F-score of a certain classification algorithm. The
ensemble component uses three different classifiers, each of which
models a different set of features from SEER through the features
selection module. On top of them, another classifier is used to give
the final decision based on the output decisions and confidence
scores from each of the underlying classifiers. Different classification
algorithms have been examined; the best setup found is by using the
decision tree, Bayesian network, and Naïve Bayes algorithms for the
underlying classifiers and Naïve Bayes for the classifier ensemble
step. The system outperforms all published systems to date when
evaluated against the exact same data of SEER (period of 1973-2002).
It gives 87.39% weighted average F-score compared to 85.82% and
81.34% of the other published systems. By increasing the data size to
cover the whole database (period of 1973-2014), the overall weighted
average F-score jumps to 92.4% on the held out unseen test set.
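The evaluation metric used throughout, the weighted average F-score, can be computed as each class's F1 weighted by its support. A short sketch (the per-class precision, recall and support values are hypothetical, not the SEER results):

```python
def f_score(precision, recall):
    # Harmonic mean of precision and recall (F1); zero if both are zero
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def weighted_average_f(per_class):
    """Weighted average F-score: each class's F1 weighted by its support."""
    total = sum(support for _, _, support in per_class.values())
    return sum(f_score(p, r) * s for p, r, s in per_class.values()) / total

# Hypothetical (precision, recall, support) per survivability class
per_class = {"survived": (0.95, 0.90, 800), "not_survived": (0.70, 0.80, 200)}
wf = weighted_average_f(per_class)
```

Weighting by support keeps the score honest on imbalanced classes such as survivability outcomes, where the majority class would otherwise dominate an unweighted average.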
Abstract: The emergence of Cloud data centers has revolutionized
the IT industry. Private Clouds in particular provide Cloud services
for certain group of customers/businesses. In a real-time private
Cloud each task that is given to the system has a deadline that
desirably should not be violated. Scheduling tasks in a real-time
private Cloud determines the way available resources in the system
are shared among incoming tasks. The aim of the scheduling policy is
to optimize the system outcome which for a real-time private Cloud
can include: energy consumption, deadline violation, execution time
and the number of host switches. Different scheduling policies can be
used for scheduling; each leads to a sub-optimal outcome in certain
settings of the system. A Bayesian Scheduling strategy is proposed
for scheduling to further improve the system outcome. The Bayesian
strategy was shown to outperform all selected policies. It is also
flexible in dealing with complex patterns of incoming tasks and has
the ability to adapt.
Abstract: This paper reviews a number of theoretical aspects
for implementing an explicit spatial perspective in econometrics
for modelling non-continuous data, in general, and count data, in
particular. It provides an overview of the several spatial econometric
approaches that are available to model data that are collected with
reference to location in space, from the classical spatial econometrics
approaches to the recent developments on spatial econometrics to
model count data, in a Bayesian hierarchical setting. Considerable
attention is paid to the inferential framework, necessary for
structurally consistent spatial econometric count models, incorporating
spatial lag autocorrelation, to the corresponding estimation and
testing procedures for different assumptions, and to the constraints and
implications embedded in the various specifications in the literature. This review combines insights from the classical spatial
econometrics literature as well as from hierarchical modeling and
analysis of spatial data, in order to look for new possible directions
on the processing of count data, in a spatial hierarchical Bayesian
econometric context.