Abstract: Detecting changes in multiple images of the same
scene has recently seen increased interest due to the many
contemporary applications including smart security systems, smart
homes, remote sensing, surveillance, medical diagnosis, weather
forecasting, speed and distance measurement, post-disaster forensics
and much more. These applications differ in the scale, nature, and
speed of change. This paper presents an application of image
processing techniques to implement a real-time change detection
system. Change is identified by comparing the RGB representation of
two consecutive frames captured in real time. The detection
threshold can be adjusted to account for various luminance levels.
The comparison result is passed through a filter before decision
making to reduce false positives, especially in low-luminance
conditions. The
system is implemented as a MATLAB Graphical User Interface (GUI)
with several controls to manage its operation and performance.
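The frame-differencing core of such a system can be sketched in a few lines. The paper's implementation is MATLAB-based, so this NumPy version is only an illustration; the 3x3 neighbor-count filter stands in for the unspecified noise filter, and `threshold` and `min_neighbors` are assumed tuning parameters.

```python
import numpy as np

def detect_change(frame_a, frame_b, threshold=30, min_neighbors=4):
    """Flag pixels whose RGB difference exceeds `threshold`, then keep
    only pixels supported by enough changed neighbors (a simple noise
    filter standing in for the paper's unspecified filter)."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int)).max(axis=2)
    mask = diff > threshold
    # 3x3 neighbor count via shifted sums (no SciPy dependency)
    padded = np.pad(mask, 1).astype(int)
    neighbors = sum(padded[i:i + mask.shape[0], j:j + mask.shape[1]]
                    for i in range(3) for j in range(3)) - mask
    return mask & (neighbors >= min_neighbors)

# toy frames: one genuine 4x4 changed block plus one isolated noisy pixel
a = np.zeros((8, 8, 3), dtype=np.uint8)
b = a.copy()
b[2:6, 2:6] = 200          # real change
b[0, 7] = 255              # single-pixel noise
mask = detect_change(a, b)
```

Raising `threshold` trades sensitivity for robustness at low luminance, which mirrors the user-controlled threshold the abstract describes.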
Abstract: Automated Teller Machines (ATMs) can be
considered one of the most important service facilities in the
banking industry. The investment in ATMs and the impact on the
banking industry is growing steadily in every part of the world. The
banks take into consideration many factors like safety, convenience,
visibility, and cost in order to determine the optimum locations of
ATMs. Today, ATMs are not only available in bank branches but
also at retail locations. Another important factor is the cash
management in ATMs. A cash demand model for every ATM is
needed in order to have an efficient cash management system. This
forecasting model is based on historical cash demand data, which is
highly related to the ATM's location, so the location and cash
management problems should be considered together. This paper
provides a general review of studies, efforts and developments in
the ATM location and cash management problem.
Abstract: In this talk, we introduce a newly developed quantile
function model that can be used for estimating conditional
distributions of financial returns and for obtaining multi-step ahead
out-of-sample predictive distributions of financial returns. Since we
forecast the whole conditional distributions, any predictive quantity
of interest about the future financial returns can be obtained simply
as a by-product of the method. We also show an application of the
model to the daily closing prices of the Dow Jones Industrial
Average (DJIA) series over the period from 2 January 2004 to 8
October 2010.
We obtained the predictive distributions up to 15 days ahead for
the DJIA returns, which were further compared with the actually
observed returns and those predicted from an AR-GARCH model.
The results show that the new model can capture the main features
of financial returns and provide a better fitted model together with
improved mean forecasts compared with conventional methods. We
hope this talk will help the audience see that this new model has
the potential to be very useful in practice.
Abstract: In this paper, temperature extremes are forecast by
employing the block maxima method of the Generalized Extreme
Value (GEV) distribution to analyse temperature data from the
Cameroon Development Corporation (C.D.C). Considering two data
sets (raw data and simulated data) and two GEV models (stationary
and non-stationary), a return level analysis is carried out. It is
found that in the stationary model the return levels are constant
over time for the raw data, while for the simulated data they show
an increasing trend with an upper bound. In the non-stationary
model, the return levels of both the raw and simulated data show an
increasing trend with an upper bound. This shows that, even though
temperatures in the tropics show signs of increasing in the future,
there is a maximum temperature that is not exceeded. The results of
this paper are vital to agricultural and environmental research.
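The return level computation behind a stationary GEV analysis follows directly from the GEV quantile formula; for a negative shape parameter the levels rise with the return period but stay below the distribution's upper endpoint, matching the bounded trend described above. The parameter values below are illustrative, not the C.D.C. estimates.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Return level z_T for the stationary GEV(mu, sigma, xi): the
    level expected to be exceeded once every T blocks (e.g. years).
    For xi < 0 the distribution is bounded above by mu - sigma/xi."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                      # Gumbel limit
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# illustrative (assumed) parameters with a negative shape
mu, sigma, xi = 32.0, 1.5, -0.2
levels = [gev_return_level(mu, sigma, xi, T) for T in (10, 50, 100)]
upper_bound = mu - sigma / xi   # finite upper endpoint for xi < 0
```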
Abstract: The current tools for real-time management of sewer
systems are based on two software components: weather forecasting
software and hydraulic simulation software. The former is an
important source of imprecision and uncertainty, while the latter
imposes long decision time steps because of its computation time.
As a result, the obtained results generally differ from those
expected. The main idea of this project is to change the basic
paradigm by approaching the problem from the automatic-control side
rather than the hydrology side. The objective is to make it
possible to run a large number of simulations in a very short time
(a few seconds), replacing weather forecasts by directly using
real-time rainfall measurements. The aim is to reach a system where
decision-making is based on reliable data and where errors are
corrected continuously. A first model of control laws was built and
tested with rainfalls of different return periods. The gains
obtained in discharged volume range from 19 to 100%. A new
algorithm was then developed to optimize computation time and thus
overcome the combinatorial problem of the first approach. Finally,
this new algorithm was tested on a 16-year rainfall series. The
obtained gains are 40% in total volume discharged to the natural
environment and 65% in the number of discharge events.
Abstract: The current paper presents an extensive bottom-up
framework for assessing the vulnerability of a specific building
sector to climate change: energy supply and demand. The research
focuses on
the application of downscaled seasonal models for estimating energy
performance of buildings in Greece. The ARW-WRF model has
been set-up and suitably parameterized to produce downscaled
climatological fields for Greece, forced by the output of the CFSv2
model. The outer domain, D01/Europe, included 345 × 345 cells at a
horizontal resolution of 20 × 20 km², and the inner domain,
D02/Greece, comprised 180 × 180 cells at a 5 × 5 km² horizontal
resolution. The model run has been set up for a period with a
forecast horizon of 6 months, storing outputs on a six-hourly basis.
Abstract: Accurate forecasting of fresh produce demand is one of
the challenges faced by Small and Medium Enterprise (SME)
wholesalers. This paper attempts to understand the causes of the
high variability in the demand faced by SME wholesalers, such as
weather and holidays; understanding the significance of these
unidentified factors may improve forecasting accuracy. This
paper presents the current literature on the factors used to predict
demand and the existing forecasting techniques of short shelf life
products. It then investigates a variety of possible internal and
external factors, some of which are not used by other researchers
in the demand prediction process. The results presented in this
paper are further analysed using a number of techniques to minimize
noise in the data. For the analysis, past sales data (January 2009
to May 2014) from a UK-based SME wholesaler is used, and the
results presented are limited to the product ‘Milk’, focused on
cafés in Derby. Correlation analysis is performed to check the
dependence of the variability factors on the actual demand.
Principal Component Analysis (PCA) is then applied to assess the
significance of the factors identified through correlation. The PCA
results suggest that cloud cover, weather summary and temperature
are the most significant factors for forecasting demand. The
correlation of these three factors increases at the monthly level
and becomes more stable compared with weekly and daily demand.
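The correlation screening step can be sketched on synthetic data; the factors, coefficients, and demand series below are invented for illustration and are not the Derby milk data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # e.g. 120 daily observations (synthetic)
temperature = rng.normal(15, 5, n)
cloud_cover = rng.uniform(0, 1, n)
holiday = rng.integers(0, 2, n)
# synthetic demand driven mostly by temperature, a bit by cloud cover
demand = (100 + 3 * temperature - 20 * cloud_cover + 2 * holiday
          + rng.normal(0, 5, n))

factors = {"temperature": temperature,
           "cloud_cover": cloud_cover,
           "holiday": holiday}
# Pearson correlation of each candidate factor with demand
correlations = {name: float(np.corrcoef(x, demand)[0, 1])
                for name, x in factors.items()}
ranked = sorted(correlations, key=lambda k: abs(correlations[k]),
                reverse=True)
```

Ranking factors by absolute correlation, as here, is the screening device; PCA would then be applied to the retained factors.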
Abstract: In this paper, we introduce an NLG application for the automatic creation of ready-to-publish texts from big data. The resulting fully automatically generated news stories closely resemble the style in which a human writer would draw up such a story. Topics include soccer games, stock exchange market reports, and weather forecasts. Each generated text is unique. Ready-to-publish stories written by a computer application can help humans quickly grasp the outcomes of big data analyses, save time-consuming pre-formulations for journalists, and cater to rather small audiences by offering stories that would otherwise not exist.
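At its simplest, data-to-text generation of this kind fills hand-written templates from structured data. The sketch below is a minimal illustration of that idea; the template wording and function are invented, not the system's actual templates.

```python
def soccer_report(home, away, home_goals, away_goals):
    """Fill a hand-written template from structured match data
    (a toy stand-in for a template-based NLG pipeline)."""
    if home_goals > away_goals:
        outcome = f"{home} beat {away} {home_goals}-{away_goals}"
    elif home_goals < away_goals:
        outcome = f"{away} won {away_goals}-{home_goals} away at {home}"
    else:
        outcome = f"{home} and {away} drew {home_goals}-{away_goals}"
    return outcome + "."

story = soccer_report("Rovers", "United", 2, 1)
```

A production system would add content selection, aggregation, and stylistic variation on top of such templates so that each generated text is unique.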
Abstract: This paper addresses a cutting edge method of
business demand forecasting, based on an empirical probability
function when the historical behavior of the data is random.
Additionally, it presents error determination based on the numerical
method technique ‘propagation of errors’. The methodology first
characterizes and diagnoses the demand-planning process as part of
production management; new ways to predict demand through
probability techniques, and to calculate their error, are then
investigated using numerical methods, all based on the behavior of
the data. This analysis was carried out considering
the specific business circumstances of a company in the sector of
communications, located in the city of Bogota, Colombia. In
conclusion, using this application it was possible to obtain the
adequate stock of the products required by the company to provide its
services, helping the company reduce its service time, increase the
client satisfaction rate, reduce stock which has not been in rotation
for a long time, code its inventory, and plan reorder points for the
replenishment of stock.
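The empirical-probability forecast and the propagation-of-errors step can be sketched as follows. The demand history is invented, and the per-observation uncertainty `sigma_x` is an assumed input; the error of a mean of n equally uncertain observations propagates as sigma_x / sqrt(n).

```python
import math
from collections import Counter

history = [12, 15, 15, 18, 20, 15, 12, 18, 15, 20, 18, 15]  # invented

# empirical probability function of observed demand values
n = len(history)
pmf = {value: c / n for value, c in Counter(history).items()}

# point forecast: expected value under the empirical distribution
forecast = sum(v * p for v, p in pmf.items())

# propagation of errors: if each observation carries the same
# measurement uncertainty sigma_x, the uncertainty of the mean of
# n observations is sigma_x / sqrt(n)
sigma_x = 1.0
sigma_forecast = sigma_x / math.sqrt(n)
```

The forecast plus a multiple of `sigma_forecast` would then set a safety stock or reorder point.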
Abstract: The purpose of this paper is to estimate the market
potential of small wind turbines in the US and to forecast their
sales. The forecasting method is based on the application of the
Bass model and the generalized Bass model of innovation
diffusion under replacement purchases. In this work an exponential
distribution is used to model replacement purchases; its single
parameter is determined by the average lifetime of small wind
turbines. The model parameters are identified by nonlinear
regression on the annual sales statistics published by the American
Wind Energy Association (AWEA) from 2001 to 2012. The estimated US
average market potential of small wind turbines (for adoption
purchases), without accounting for price changes, is 57080
(confidence interval 49294 to 64866 at P = 0.95) for an average
wind turbine lifetime of 15 years, and 62402 (confidence interval
54154 to 70648 at P = 0.95) for an average lifetime of 20 years. In
the first case the explained variance is 90.7%, in the second
91.8%. The effect of wind turbine price changes on sales was
estimated using the generalized Bass model. This required a price
forecast, for which a polynomial regression function based on the
Berkeley Lab statistics was used. The estimated US average market
potential of small wind turbines (for adoption purchases) in that
case is 42542 (confidence interval 32863 to 52221 at P = 0.95) for
an average lifetime of 15 years, and 47426 (confidence interval
36092 to 58760 at P = 0.95) for an average lifetime of 20 years. In
the first case the explained variance is 95.3%, and in the second
95.3% as well.
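The plain Bass model underlying the method has a closed-form cumulative adoption curve, F(t) = m(1 - e^-(p+q)t) / (1 + (q/p)e^-(p+q)t). The sketch below uses the paper's 57080 market-potential estimate only for scale; the innovation coefficient p and imitation coefficient q are assumed, not the AWEA-fitted values, and replacement purchases are omitted.

```python
import math

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions at time t under the Bass diffusion model
    with market potential m, innovation coefficient p and imitation
    coefficient q."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

m, p, q = 57080, 0.003, 0.4   # m echoes the paper's scale; p, q assumed
# yearly sales as differences of the cumulative curve
annual_sales = [bass_cumulative(t, m, p, q) - bass_cumulative(t - 1, m, p, q)
                for t in range(1, 31)]
peak_year = max(range(30), key=lambda i: annual_sales[i]) + 1
```

Fitting m, p, and q to observed annual sales by nonlinear regression, as the paper does, recovers the market potential as the saturation level of this curve.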
Abstract: In this paper the problem of the application of
temporal reasoning and case-based reasoning in intelligent decision
support systems is considered. The method of case-based reasoning
with temporal dependences for the solution of problems of real-time
diagnostics and forecasting in intelligent decision support systems is
described. This paper demonstrates how a temporal case-based
reasoning system can be used in intelligent decision support
systems for car access control. This work was supported by RFBR.
Abstract: The research of juice flavor forecasting has become
more important in China. Due to the fast economic growth in China,
many different kinds of juice have been introduced to the market.
If a beverage company understands its customers’ preferences well,
its juice can be made more attractive. Thus, this study introduces
the basic theory and computing process of grape juice flavor
forecasting based on support vector regression (SVR). Applying SVR,
back-propagation networks (BPN), and linear regression (LR) to
forecast grape juice flavor on real data shows that SVR is the most
suitable and effective in predictive performance.
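As a minimal sketch of the SVR step, scikit-learn's `SVR` can be fit on invented "flavor score" data; the features, coefficients, and hyperparameters below are assumptions for illustration, not the study's data or tuning.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# synthetic data: (sweetness, acidity) -> taste-panel flavor score
X = rng.uniform(0, 1, (80, 2))
y = 5 + 3 * X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 80)

# RBF-kernel SVR; C and epsilon are assumed, not tuned values
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
pred = model.predict(X[:5])
```

In practice the comparison against BPN and LR would use held-out data and a common error metric rather than training fit.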
Abstract: Taiwan is a hyperendemic area for the Hepatitis B
virus (HBV): the estimated total number of HBsAg carriers in the
general population over 20 years old exceeds 3 million. Therefore,
a case record review is conducted from January
2003 to June 2007 for all patients with a diagnosis of acute hepatitis
who were admitted to the Emergency Department (ED) of a
well-known teaching hospital. The cost for the use of medical
resources is defined as the total medical fee. In this study,
principal component analysis (PCA) is first employed to reduce the
number of dimensions. Support vector regression (SVR) and
artificial neural network (ANN) models are then used to develop the
forecasting model. A total of 117 patients meet the inclusion
criteria; 61% of the patients involved in this study are hepatitis
B related. The computational results show that the proposed
PCA-SVR model outperforms the other algorithms compared. In
conclusion, the Child-Pugh score and
echogram can both be used to predict the cost of medical resources for
patients with acute hepatitis in the ED.
Abstract: A model was constructed to predict the amount of
solar radiation that will reach the earth's surface at a given
location one hour into the future. This project was supported
by the Southern Company to determine at what specific times during
a given day of the year solar panels could be relied upon to produce
energy in sufficient quantities. Due to their ability to act as
universal function approximators, artificial neural networks were
used to estimate the nonlinear pattern of solar radiation, utilizing
measurements of weather conditions collected at the Griffin, Georgia
weather station as inputs. A number of network configurations and
training strategies were utilized, though a multilayer perceptron with
a variety of hidden nodes trained with the resilient propagation
algorithm consistently yielded the most accurate predictions. In
addition, a modeled direct normal irradiance field and adjacent
weather station data were used to bolster prediction accuracy. In later
trials, the solar radiation field was preprocessed with a discrete
wavelet transform with the aim of removing noise from the
measurements. The current model provides predictions of solar
radiation with a mean square error of 0.0042, though ongoing efforts
are being made to further improve the model’s accuracy.
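The wavelet preprocessing stage can be illustrated with a single-level Haar transform and hard thresholding; the paper does not specify its wavelet or threshold, so both are assumptions here, and the signal is synthetic rather than the Griffin radiation field.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar DWT denoising: split the signal into
    approximation and detail coefficients, zero out small details
    (treated as noise), and reconstruct exactly."""
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    d = np.where(np.abs(d) < threshold, 0.0, d)  # hard thresholding
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

t = np.linspace(0, 1, 64)
clean = np.sin(2 * np.pi * t)
rng = np.random.default_rng(2)
noisy = clean + rng.normal(0, 0.3, 64)
denoised = haar_denoise(noisy, threshold=0.6)
```

With `threshold=0` the transform reconstructs the input exactly; a positive threshold discards detail coefficients dominated by noise while the smooth trend survives in the approximation band.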
Abstract: Traditional document representation for classification
follows the Bag of Words (BoW) approach to represent term weights.
The conventional method uses the Vector Space Model (VSM) to
exploit the statistical information of terms in the documents, but
such methods fail to address the semantic information and the order
of the terms
present in the documents. The phrase-based approach, although it
follows the order of the terms in the documents, still misses the
semantics behind the words. Therefore, a semantic-concept-based
approach is used in this paper to enhance the semantics by
incorporating ontology information. In this paper a novel method
is proposed to forecast the intraday stock market price directional
movement based on the sentiments from Twitter and money control
news articles. Stock market forecasting is a very difficult and
highly complicated task because it is affected by many factors such
as economic conditions, political events and investors’ sentiment.
The stock market series are generally dynamic, nonparametric, noisy
and chaotic by nature. The sentiment analysis along with wisdom of
crowds can automatically compute the collective intelligence of
future performance in many areas like stock market, box office sales
and election outcomes. The proposed method utilizes collective
sentiments for stock market to predict the stock price directional
movements. Using the Granger causality test, the collective
sentiments in the above social media are shown to have strong
predictive power for up/down stock price directional movements.
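A Granger-style test compares a restricted autoregression of returns on their own lags with an unrestricted one that adds lagged sentiment, via an F statistic on the residual sums of squares. The sketch below computes this by hand on synthetic data; the sentiment series and coefficients are invented, not the Twitter or news data.

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

rng = np.random.default_rng(3)
n = 300
sentiment = rng.normal(size=n)
returns = np.empty(n)
returns[0] = 0.0
for t in range(1, n):
    # synthetic series: today's return depends on yesterday's sentiment
    returns[t] = 0.5 * sentiment[t - 1] + 0.1 * rng.normal()

y = returns[1:]
ones = np.ones(n - 1)
X_restricted = np.column_stack([ones, returns[:-1]])
X_unrestricted = np.column_stack([ones, returns[:-1], sentiment[:-1]])
rss_r, rss_u = rss(X_restricted, y), rss(X_unrestricted, y)
# F statistic for one restriction; large values indicate that
# sentiment "Granger-causes" returns
f_stat = (rss_r - rss_u) / (rss_u / (len(y) - X_unrestricted.shape[1]))
```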
Abstract: This paper examines the effect of the volatility of oil
prices on food price in South Africa using monthly data covering the
period 2002:01 to 2014:09. Food price is measured by the South
African consumer price index for food, while the oil price is
proxied by Brent crude oil. The study employs the GARCH-in-mean VAR
model, which allows the investigation of the effect of a negative and
positive shock in oil price volatility on food price. The model also
allows the oil price uncertainty to be measured as the conditional
standard deviation of a one-step-ahead forecast error of the change in
oil price. The results show that oil price uncertainty has a positive
and significant effect on food price in South Africa. The responses
of food price to positive and negative oil price shocks are
asymmetric.
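The uncertainty measure described (the conditional standard deviation of the one-step-ahead forecast error) can be illustrated with a plain GARCH(1,1) filter; the paper's GARCH-in-mean VAR is considerably richer, and the parameters and series below are assumed, not estimated from the South African data.

```python
import numpy as np

def garch11_volatility(returns, omega, alpha, beta):
    """Filter a (mean-zero) return series through GARCH(1,1):
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}.
    sqrt(h_t) is the one-step-ahead forecast-error standard
    deviation used as the uncertainty proxy."""
    h = np.empty(len(returns))
    h[0] = np.var(returns)          # initialize at sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return np.sqrt(h)

rng = np.random.default_rng(4)
oil_changes = rng.normal(0, 1, 200)   # synthetic oil price changes
vol = garch11_volatility(oil_changes, omega=0.05, alpha=0.1, beta=0.85)
```

In the GARCH-in-mean setup this volatility series enters the mean equation, which is how the uncertainty effect on food price is identified.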
Abstract: Modeling and forecasting dynamics of rainfall
occurrences constitute one of the major topics, which have been
largely treated by statisticians, hydrologists, climatologists and many
other groups of scientists. In this context, we propose, in the
present paper, a new hybrid method which combines Extreme Value
and fractal theories. We illustrate the use of our methodology on a
transformed Emberger Index series, constructed based on data
recorded in Oujda (Morocco).
The index is first treated by the Peaks Over Threshold (POT)
approach to identify excess observations over an optimal threshold u.
In the second step, we consider the resulting excesses as a fractal
object embedded in the one-dimensional space of time, and identify
the fractal dimension by box counting. We discuss the prospective
description of rainfall data sets under the Generalized Pareto
Distribution, as assured by Extreme Value Theory (EVT). We show
that, despite the appropriateness of the return periods given by
the POT approach, the introduction of the fractal dimension
provides more accurate interpretation results, which can improve
the understanding of rainfall occurrences.
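The box-counting step can be sketched as follows: count how many boxes of size s contain at least one event time, and fit the slope of log N(s) against log s. The series below are synthetic sanity checks (a set filling the interval should have dimension ≈ 1, a single point dimension ≈ 0), not the Oujda excess data.

```python
import numpy as np

def box_counting_dimension(times, box_sizes):
    """Estimate the fractal dimension of a set of event times in
    [0, 1) by box counting: count occupied boxes N(s) at each scale
    s and fit log N(s) = -D log s + c."""
    counts = [len({int(t / s) for t in times}) for s in box_sizes]
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

sizes = [1 / 4, 1 / 8, 1 / 16, 1 / 32, 1 / 64]
dense = np.linspace(0, 0.999, 1000)   # interval-filling events
d_dense = box_counting_dimension(dense, sizes)
d_point = box_counting_dimension([0.5], sizes)  # isolated event
```

Applied to the POT excess times, a dimension between 0 and 1 quantifies how the extreme-rainfall occurrences cluster in time.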
Abstract: The effects of hypertension are often lethal; thus its
early detection and prevention are very important for everybody. In
this paper, a neural network (NN) model was developed and trained
based on a dataset of hypertension causative parameters in order to
forecast the likelihood of occurrence of hypertension in patients. Our
research goal was to analyze the potential of the presented NN to
predict, for a period of time, the risk of hypertension or the risk
of developing this disease for patients who are or are not
currently hypertensive. The results of the analysis for a given
patient can
support doctors in taking pro-active measures for averting the
occurrence of hypertension such as recommendations regarding the
patient’s behavior in order to lower their hypertension risk.
Moreover, the paper envisages three example scenarios: determining
the age at which the patient becomes hypertensive, i.e. the
hypertensive-age threshold; analyzing what happens when the
hypertensive-age threshold is set to a certain age while the
patient’s weight is varied; and setting the ideal weight for the
patient and analyzing what happens to the hypertensive-age
threshold.
Abstract: Previous studies on financial distress prediction adopt
the conventional failing/non-failing dichotomy; however, the extent
of distress differs substantially among financial distress events.
To address this, the categories “non-distressed”, “slightly
distressed” and “reorganization and bankruptcy” are used in our
article to approximate the continuum of corporate financial health.
This paper
explains different financial distress events using the two-stage method.
First, this investigation adopts firm-specific financial ratios, corporate
governance and market factors to measure the probability of various
financial distress events based on multinomial logit models.
Specifically, a bootstrapping simulation is performed to examine
differences in the estimated misclassification cost (EMC). Second,
this work
further applies macroeconomic factors to establish the credit cycle
index and determines the distressed cut-off indicator of the two-stage
models using such index. Two different models, one-stage and
two-stage prediction models are developed to forecast financial
distress, and the results acquired from different models are compared
with each other and with the collected data. The findings show that
the one-stage model has a lower misclassification error rate than
the two-stage model and is therefore the more accurate of the two.
Abstract: The aim of this paper is to select the most accurate
forecasting method for predicting the future values of the
unemployment rate in selected European countries. In order to do so,
several forecasting techniques adequate for time series with a
trend component were selected, namely double exponential smoothing
(also known as Holt's method) and the Holt-Winters method, which
accounts for both trend and seasonality. The results of the
empirical analysis showed that the optimal model for forecasting
the unemployment rate in Greece was the Holt-Winters additive
method. In the case of Spain, according to MAPE, the optimal model
was the double exponential smoothing model. Furthermore, for
Croatia and Italy the best forecasting model for the unemployment
rate was the Holt-Winters multiplicative model, whereas in the case
of Portugal the best model was double exponential smoothing. Our
findings are in line with European Commission unemployment rate
estimates.
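Double exponential smoothing as used above can be written in a few lines: a level and a trend are updated recursively, and the h-step-ahead forecast is level + h × trend. The series below is a synthetic linear trend used only as a sanity check (Holt's recursions extrapolate it exactly), not the unemployment data, and the smoothing constants are assumed.

```python
def holt_forecast(series, alpha, beta, horizon):
    """Double exponential smoothing (Holt's linear method): update
    level and trend recursively, then forecast level + h * trend."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# a clean linear trend is extrapolated exactly
series = [10 + 0.5 * t for t in range(20)]
forecasts = holt_forecast(series, alpha=0.5, beta=0.3, horizon=3)
```

The Holt-Winters variants add a third recursion for an additive or multiplicative seasonal component on top of this level-and-trend pair.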