Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, utilizing the findings of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the Particle Filter is developed which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock level that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, a profit loss of around 1% realizes a halving of disposal when the proportionality constant is 0.12, which is the value actually observed for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially for large sales numbers.
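
As a concrete illustration of the fluctuation scaling that the method exploits, the following minimal Python sketch estimates a Taylor's-law exponent and prefactor from per-item sales counts by regressing log-variance on log-mean. The data layout, the power-law form var = c * mean**alpha and the synthetic Poisson counts are illustrative assumptions, not the paper's estimator or data.

# Hedged sketch: estimating Taylor's law (fluctuation scaling) from per-item
# daily sales counts; the power-law form and the toy data are assumptions.
import numpy as np

def taylors_law_fit(sales_by_item):
    """sales_by_item: list of 1-D arrays of daily sales counts, one per item."""
    means = np.array([s.mean() for s in sales_by_item])
    varis = np.array([s.var(ddof=1) for s in sales_by_item])
    keep = (means > 0) & (varis > 0)
    # Fit log(var) = log(c) + alpha * log(mean) by least squares.
    alpha, log_c = np.polyfit(np.log(means[keep]), np.log(varis[keep]), 1)
    return alpha, np.exp(log_c)

# Toy usage with synthetic Poisson counts (alpha should come out near 1).
rng = np.random.default_rng(0)
items = [rng.poisson(lam, size=365) for lam in (2, 5, 20, 100, 500)]
alpha, c = taylors_law_fit(items)
print(f"Taylor's law exponent ~ {alpha:.2f}, prefactor ~ {c:.2f}")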

The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) model are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R routine for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through this simulation experiment, and the mean estimates of the model parameters are all efficient based on their standard errors. The proposed model is then used to analyse a real-life accident dataset from the motorway in Mauritius, based on several covariates: policemen, daily patrols, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
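
To make the model structure tangible, the sketch below simulates a pair of BINARMA(1,1)-type count series using the binomial thinning operator and correlated Poisson innovations built by a common-shock construction. The recursion, the parameter names (alpha, beta, lam1, lam2, lam12) and their values are assumptions for illustration and may differ in detail from the paper's specification, which is estimated in R.

# Hedged sketch: BINARMA(1,1)-style simulation with binomial thinning and
# correlated Poisson innovations (common-shock construction). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def thin(x, p):
    """Binomial thinning: each of the x counts survives independently with probability p."""
    return rng.binomial(x, p)

def simulate_binarma(T, alpha, beta, lam1, lam2, lam12):
    # Correlated Poisson innovations via a common shock: R1 = Y1 + Y12, R2 = Y2 + Y12.
    Y1, Y2, Y12 = rng.poisson(lam1, T), rng.poisson(lam2, T), rng.poisson(lam12, T)
    R = np.column_stack([Y1 + Y12, Y2 + Y12])
    X = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        for i in range(2):
            # INARMA(1,1)-type recursion with binomial thinning.
            X[t, i] = thin(X[t - 1, i], alpha[i]) + R[t, i] + thin(R[t - 1, i], beta[i])
    return X

X = simulate_binarma(T=500, alpha=(0.4, 0.3), beta=(0.2, 0.25),
                     lam1=1.0, lam2=1.5, lam12=0.5)
print("sample cross-correlation of the two series:",
      round(np.corrcoef(X[:, 0], X[:, 1])[0, 1], 3))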

Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach

This paper presents the prediction performance of the feedforward Multilayer Perceptron (MLP) and the Echo State Network (ESN) trained with the extended Kalman filter. Feedforward neural networks and ESNs are powerful models that can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large error. In this study we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as network initialization and improvement of the prediction performance, are tackled. The influence of the activation-function coefficient in the hidden layer and other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential learning and to regulate the learning rate of the feedforward neural networks. This training approach has vital features for training the network when signals have a chaotic or non-stationary sequential pattern. Examination of the results shows minimization of the variance at each step of the computation and hence smoother tracking, indicating satisfactory tracking characteristics under certain conditions. In addition, the simulation results confirm satisfactory performance of both neural networks with modified parameterization in tracking nonlinear signals.
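
The sketch below shows one way such a setup can look in code: a leaky echo state network whose linear readout is updated sequentially with a Kalman-type recursion (for a linear readout the extended Kalman filter reduces to this form). The reservoir size, leaking rate, spectral radius, noise variances and the toy sine-mixture target are illustrative assumptions, not the paper's configuration.

# Hedged sketch: leaky ESN with a sequentially (Kalman/EKF-style) trained
# linear readout for one-step-ahead prediction. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_res, leak, rho = 200, 0.3, 0.9        # reservoir size, leaking rate, spectral radius
q, r = 1e-6, 1e-2                       # process / observation noise variances

w_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to the desired spectral radius

# Toy quasi-periodic target (a stand-in for a chaotic benchmark series).
t = np.arange(3000)
u = np.sin(0.2 * t) + 0.5 * np.sin(0.311 * t)

x = np.zeros(n_res)          # reservoir state
w_out = np.zeros(n_res)      # linear readout weights, treated as the Kalman state
P = np.eye(n_res)            # readout error covariance

preds = np.zeros(len(u) - 1)
for k in range(len(u) - 1):
    x = (1 - leak) * x + leak * np.tanh(w_in * u[k] + W @ x)   # leaky reservoir update
    preds[k] = w_out @ x                                       # predict the next sample
    P += q * np.eye(n_res)                                     # state prediction step
    Px = P @ x
    g = Px / (x @ Px + r)                                      # Kalman gain
    w_out += g * (u[k + 1] - preds[k])                         # readout correction
    P -= np.outer(g, Px)                                       # covariance update

print("RMSE over the last 500 steps:",
      round(float(np.sqrt(np.mean((preds[-500:] - u[-500:]) ** 2))), 4))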

Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for the identification of diseases when such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections is useful for understanding the brain’s representative geometry and complements the empirical model where the latter falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find the relationship of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives of the entire cortical surface. A variance-covariance framework over all ROIs is then built from these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size, an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module within other mathematical models.
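
A minimal sketch of the hub-based construction described above: each ROI time series is regressed on a few hub time series, and the full variance-covariance (and hence correlation) matrix is rebuilt from the hub covariance plus idiosyncratic residual variances, in the spirit of a financial factor model. The hub choice, data shapes and toy data are illustrative assumptions.

# Hedged sketch: rebuild the ROI correlation matrix from regressions on hubs.
import numpy as np

def hub_model_correlation(ts, hub_idx):
    """ts: (T, n_roi) array of ROI time series; hub_idx: indices of the hub ROIs."""
    T, n = ts.shape
    H = ts[:, hub_idx]                                  # (T, k) hub signals
    X = np.column_stack([np.ones(T), H])                # add an intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)       # (k+1, n) regression coefficients
    B = beta[1:, :].T                                   # (n, k) loadings on the hubs
    resid = ts - X @ beta
    sigma_h = np.cov(H, rowvar=False)                   # (k, k) hub covariance
    cov_model = B @ sigma_h @ B.T + np.diag(resid.var(axis=0, ddof=X.shape[1]))
    d = np.sqrt(np.diag(cov_model))
    return cov_model / np.outer(d, d)                   # model correlation matrix

# Toy usage: 200 ROIs driven by 6 latent drivers plus noise; the first 6 ROIs
# act as a hypothetical hub set.
rng = np.random.default_rng(3)
drivers = rng.standard_normal((500, 6))
loadings = rng.standard_normal((200, 6))
data = drivers @ loadings.T + 0.8 * rng.standard_normal((500, 200))
corr_model = hub_model_correlation(data, hub_idx=np.arange(6))
corr_emp = np.corrcoef(data, rowvar=False)
print("mean |model - empirical| correlation:",
      round(float(np.abs(corr_model - corr_emp).mean()), 3))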

Non-Stationary Stochastic Optimization of an Oscillating Water Column

A non-stationary stochastic optimization methodology is applied to an oscillating water column (OWC) to find the design that maximizes wave energy extraction. Different temporal cycles are considered in the optimization problem to represent the long-term variability of the wave climate at the site. The results of the non-stationary stochastic optimization problem are compared against those obtained from a stationary stochastic optimization problem. The comparative analysis reveals that the proposed non-stationary optimization provides designs with a better fit to reality. However, the stationarity assumption can be adequate when looking at the averaged system response.

Optimizing Approach for Sifting Process to Solve a Common Type of Empirical Mode Decomposition Mode Mixing

Empirical mode decomposition (EMD), a data-driven method of time-series decomposition, has the advantage of not assuming that a time series is linear or stationary, unlike Fourier decomposition in which such assumptions are implicit. However, EMD suffers from the mode mixing problem in some cases. The aim of this paper is to present a solution for a common type of signal causing EMD mode mixing, namely a signal containing an intermittency. On an artificial example, the solution shows superior performance in coping with the EMD mode mixing problem compared with the conventional EMD and the Ensemble Empirical Mode Decomposition (EEMD). Furthermore, the over-sifting problem is also completely avoided, and the computational load is reduced roughly six-fold compared with EEMD with an ensemble number of 50.
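
For orientation, the sketch below implements a single sifting pass of EMD (subtracting the mean of the cubic-spline envelopes through the local maxima and minima) on a toy signal containing an intermittent high-frequency burst, the classic source of mode mixing. End-effect handling, stopping criteria and the paper's specific remedy are omitted.

# Hedged sketch: one sifting pass of EMD on a toy intermittent signal.
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_once(x):
    t = np.arange(len(x))
    pk, _ = find_peaks(x)          # indices of local maxima
    tr, _ = find_peaks(-x)         # indices of local minima
    if len(pk) < 2 or len(tr) < 2:
        return x                   # too few extrema to build envelopes
    upper = CubicSpline(pk, x[pk])(t)
    lower = CubicSpline(tr, x[tr])(t)
    return x - 0.5 * (upper + lower)   # candidate IMF after one sifting pass

# Toy signal: a slow sinusoid with an intermittent high-frequency burst.
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t)
x[300:360] += 0.3 * np.sin(2 * np.pi * 80 * t[300:360])
h = sift_once(x)
print("variance of the mean envelope removed by one sift:", round(float(np.var(x - h)), 4))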

Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas is the subject of intensive research, and timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. The input of the methodology is a sequence of satellite images I1, I2, …, In acquired at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images (e.g. radiometric, atmospheric and geometric corrections) is applied. The systematic study of global urban expansion in our methodology can be approached in two ways. The first considers the urban area as a single object, as opposed to non-urban areas (e.g. vegetation, bare soil and water); the objective is to extract the urban mask. The second aims to obtain more detailed knowledge of the urban area by distinguishing different types of tissue within it. In order to validate the approach, we used a database of Tres Cantos, Madrid, Spain, derived from Landsat over the period January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of the method.

Poincaré Plot for Heart Rate Variability

The heart is the most important organ in the body of living organisms; it affects, and is affected by, every factor in the body and is therefore a good detector of the body's condition. The heart signal is non-stationary, so it is of utmost importance to study its variability. Heart Rate Variability (HRV) has attracted considerable attention in psychology and medicine and has become an important dependent measure in psychophysiology and behavioral medicine. The most commonly used standards of measurement, physiological interpretation and clinical use of HRV have been described in many research papers; however, complex issues remain that are fraught with pitfalls. This paper presents one of the nonlinear techniques for analyzing HRV. It discusses what the Poincaré plot is, how it works, and its merits, especially for HRV. It also discusses the limitation of the Poincaré plot arising from the standard deviation descriptors SD1 and SD2, and how to overcome this limitation by using the complex correlation measure (CCM). The CCM is more sensitive to changes in the temporal structure of the Poincaré plot than SD1 and SD2.
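
For reference, the standard Poincaré descriptors SD1 and SD2 can be computed from an R-R interval series as below; the CCM requires a windowed temporal computation and is not reproduced here. The toy R-R values are illustrative.

# Hedged sketch: SD1/SD2 of the Poincaré plot built from successive R-R intervals.
import numpy as np

def poincare_sd(rr):
    """rr: 1-D array of R-R intervals (ms). Returns (SD1, SD2)."""
    x, y = rr[:-1], rr[1:]                        # (RR_n, RR_{n+1}) pairs
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)    # dispersion perpendicular to the identity line
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)    # dispersion along the identity line
    return sd1, sd2

rr = np.array([812, 800, 795, 810, 830, 822, 805, 798, 815, 824], dtype=float)
sd1, sd2 = poincare_sd(rr)
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms, SD1/SD2 = {sd1 / sd2:.2f}")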

Extreme Temperature Forecast in Mbonge, Cameroon through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution

In this paper, temperature extremes are forecast by employing the block maxima method of the Generalized Extreme Value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (CDC). Two sets of data (raw data and simulated data) and two models of the GEV distribution (stationary and non-stationary) are considered, and a return level analysis is carried out. It is found that in the stationary model the return levels are constant over time for the raw data, while for the simulated data the return levels show an increasing trend but with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend, again with an upper bound. This shows that even though temperatures in the tropics show signs of increasing in the future, there is a maximum temperature that is not exceeded. The results of this paper are vital for agricultural and environmental research.
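
A minimal sketch of a stationary return level analysis with SciPy is given below: a GEV is fitted to block maxima and the m-block return level is read off as the quantile with exceedance probability 1/m. The synthetic maxima stand in for the CDC temperature series, and SciPy's shape convention (c = -xi) is noted in the comments.

# Hedged sketch: GEV fit to block maxima and return levels as upper quantiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
block_maxima = 30 + 2.5 * rng.gumbel(size=60)          # toy block maxima (degrees C)

c, loc, scale = stats.genextreme.fit(block_maxima)     # note: SciPy's shape c = -xi
for m in (10, 25, 50, 100):
    z_m = stats.genextreme.isf(1.0 / m, c, loc=loc, scale=scale)
    print(f"{m:>3}-block return level: {z_m:.2f} degrees C")
# A negative xi (positive SciPy c) gives a bounded upper tail, consistent with
# the abstract's finding of an upper bound on future temperatures.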

Data-driven Multiscale Tsallis Complexity: Application to EEG Analysis

This work proposes data-driven, multiscale quantitative measures to reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Because a real EEG recording is nonlinear and non-stationary over different frequencies or scales, an approach more suitable than conventional single-scale tools is needed for analyzing EEG data. Here, we present a new framework of complexity measures that considers changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by the empirical mode decomposition (EMD) of the EEG. To quantify EEG recordings of a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n=9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the use of the multiscale Tsallis entropy leads to better discrimination of injury levels and improved correlation with the neurological deficit evaluation 72 hours after cardiac arrest, suggesting an effective metric for use as a prognostic tool.
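
The core quantity, the Tsallis entropy of the amplitude distribution of each oscillatory component, can be sketched as below; the components are assumed to be supplied by an EMD implementation, and the histogram binning and entropic index q are illustrative choices rather than the paper's exact estimator.

# Hedged sketch: multiscale Tsallis entropy over a set of scale components (e.g. IMFs).
import numpy as np

def tsallis_entropy(x, q=2.0, bins=64):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()                      # empirical probabilities
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))           # Shannon limit as q -> 1
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multiscale_tsallis(components, q=2.0):
    """components: iterable of 1-D arrays, one per scale (e.g. EMD intrinsic mode functions)."""
    return np.array([tsallis_entropy(c, q) for c in components])

# Toy usage with surrogate scale components standing in for the IMFs of an EEG epoch.
rng = np.random.default_rng(5)
t = np.linspace(0, 4, 4000)
comps = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size) for f in (40, 12, 4)]
print("multiscale Tsallis entropies:", np.round(multiscale_tsallis(comps, q=2.0), 3))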

Study of the Behavior of an Organic Coating Applied on Algerian Oil Tanker in Seawater

Paints are the most widely used method of protecting metals against atmospheric corrosion. The aim of this work was to determine the protective performance of an epoxy coating against seawater before and after damage. Investigations were conducted using stationary and non-stationary electrochemical tools; in particular, electrochemical impedance spectroscopy (EIS) allowed us to characterize the protective qualities of these films. The application of EIS to the in-situ damaged paint shows the existence of several capacitive loops, which is an indicator of the failure of the tested paint. Microscopic analysis (micrographs) provided essential elements for understanding the degradation of the paint and the corrosion products formed during immersion.

Large Eddy Simulation of Flow Separation Control over a NACA2415 Airfoil

This study involves a numerical simulation of the flow around a NACA2415 airfoil at a 15° angle of attack, with flow separation control using a rod. The approach consists of placing a cylindrical rod upstream of the leading edge in order to accelerate the transition of the boundary layer through interaction between the rod wake and the boundary layer. The viscous, non-stationary flow is simulated using ANSYS FLUENT 13. Our results show a substantial modification of the flow behavior and a maximum drag reduction of 51%.

The Study on the Stationarity of Housing Price-to-Rent and Housing Price-to-Income Ratios in China

This paper aims to examine whether a bubble is present in the housing market of China. We use the housing price-to-income ratios and housing price-to-rent ratios of 35 cities from 1998 to 2010, together with the panel KSS unit root test with a Fourier function and the SPSM procedure. The panel KSS unit root test with a Fourier function accounts for non-linearity and structural changes, and the SPSM procedure prevents stationary time series in the panel from dominating the results and biasing them. Through a rigorous empirical study, we determine that the housing price-to-income ratios are stationary in 34 of the 35 cities in China; only Xining is non-stationary. The housing price-to-rent ratios are stationary in 32 of the 35 cities; Chengdu, Fuzhou, and Zhengzhou are non-stationary. Overall, housing bubbles were not a serious problem in China during the period studied.
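
For a single series, a KSS-type nonlinear unit root statistic with a Fourier term in the detrending step can be sketched as follows; the panel setting and the SPSM selection procedure are not reproduced, the cubic regression form and the single Fourier frequency k are assumptions for illustration, and inference requires the appropriate simulated critical values rather than normal ones.

# Hedged sketch: Fourier-detrended KSS-type nonlinear unit root statistic.
import numpy as np

def kss_fourier_stat(y, k=1):
    T = len(y)
    t = np.arange(1, T + 1)
    # Step 1: remove a constant plus one Fourier frequency.
    D = np.column_stack([np.ones(T),
                         np.sin(2 * np.pi * k * t / T),
                         np.cos(2 * np.pi * k * t / T)])
    v = y - D @ np.linalg.lstsq(D, y, rcond=None)[0]
    # Step 2: KSS regression on the detrended series, dv_t = delta * v_{t-1}^3 + e_t.
    dv, v3 = np.diff(v), v[:-1] ** 3
    delta = (v3 @ dv) / (v3 @ v3)
    resid = dv - delta * v3
    se = np.sqrt(resid @ resid / (len(dv) - 1) / (v3 @ v3))
    return delta / se                 # t-ratio; compare against simulated critical values

rng = np.random.default_rng(6)
ratio = np.cumsum(rng.standard_normal(52))      # toy price-to-rent-style series (random walk)
print("Fourier-KSS t statistic:", round(float(kss_fourier_stat(ratio)), 2))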

A New Technique for Multi Resolution Characterization of Epileptic Spikes in EEG

A technique is proposed for the automatic detection of spikes in electroencephalograms (EEG). A multi-resolution approach and a non-linear energy operator are exploited. The signal on each EEG channel is decomposed into three sub-bands using a non-decimated wavelet transform (WT); the WT is a powerful tool for multi-resolution analysis of non-stationary signals as well as for signal compression, recognition and restoration. Each sub-band is analyzed using a non-linear energy operator in order to detect spikes. A decision rule then detects the presence of spikes in the EEG, relying upon the energy of the three sub-bands. The effectiveness of the proposed technique was confirmed by analyzing both test signals and EEG layouts.
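
A minimal sketch of the detection idea follows: the non-linear energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1] applied to sub-bands from a non-decimated (stationary) wavelet transform, here taken from PyWavelets. The wavelet, decomposition level and threshold rule are illustrative assumptions, not the paper's settings.

# Hedged sketch: wavelet sub-bands plus non-linear energy operator for spike detection.
import numpy as np
import pywt

def neo(x):
    """Non-linear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(eeg, wavelet="db4", level=3, k=8.0):
    coeffs = pywt.swt(eeg, wavelet, level=level)       # non-decimated (stationary) WT
    detections = np.zeros(len(eeg), dtype=bool)
    for _, detail in coeffs:                           # (approximation, detail) per level
        e = neo(detail)
        detections |= e > k * np.mean(np.abs(e))       # simple adaptive threshold
    return np.flatnonzero(detections)

# Toy EEG: background noise plus two sharp transients (length divisible by 2**level).
rng = np.random.default_rng(7)
eeg = rng.standard_normal(1024)
eeg[300] += 8.0
eeg[700] += 8.0
print("candidate spike samples:", detect_spikes(eeg)[:10])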

Performance Analysis of a Series of Adaptive Filters in Non-Stationary Environment for Noise Cancelling Setup

Noise cancellation is an essential component of many DSP applications. Changes in real-time signals are rapid and swift. In noise cancellation, a reference signal that approximates the noise corrupting the original information signal is obtained and then subtracted from the noise-bearing signal to obtain a noise-free signal. This approximation of the noise signal is obtained through adaptive filters, which are self-adjusting. As the changes in real-time signals are abrupt, an adaptive algorithm that converges fast and is stable is needed. Least mean square (LMS) and normalized LMS (NLMS) are two widely used algorithms because of their simplicity of calculation and implementation, but their convergence rates are low. Adaptive averaging (AFA) filters are also used because of their fast convergence, but they are less stable. This paper provides a comparative study of the LMS, NLMS, AFA and the new enhanced average adaptive (Average NLMS, ANLMS) filters for a noise-cancelling application using speech signals.
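
The LMS and NLMS updates at the heart of such a comparison can be sketched as below for a noise-cancelling setup with a reference noise input; the filter length, step sizes and toy signals are illustrative, and the AFA and ANLMS variants studied in the paper are not reproduced.

# Hedged sketch: LMS/NLMS adaptive noise cancellation with a reference noise input.
import numpy as np

def nlms_cancel(d, x, M=16, mu=0.5, eps=1e-6, normalized=True):
    """d: primary input (signal + filtered noise), x: noise reference. Returns the error signal."""
    w = np.zeros(M)
    e = np.zeros(len(d))
    for n in range(M, len(d)):
        u = x[n - M + 1:n + 1][::-1]        # most recent M reference samples, newest first
        y = w @ u                           # estimate of the noise component in d[n]
        e[n] = d[n] - y                     # error = cleaned output sample
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e[n] * u                # NLMS update (plain LMS if normalized=False)
    return e

# Toy usage: a slow sinusoid corrupted by causally filtered white noise.
rng = np.random.default_rng(8)
k = np.arange(4000)
s = np.sin(2 * np.pi * 0.01 * k)
noise = rng.standard_normal(len(k))
d = s + np.convolve(noise, [0.6, -0.3, 0.1])[:len(s)]
cleaned = nlms_cancel(d, noise)
print("residual error power after convergence:",
      round(float(np.mean((cleaned[1000:] - s[1000:]) ** 2)), 4))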

Modelling Extreme Temperature in Malaysia Using Generalized Extreme Value Distribution

Extreme temperatures at several stations in Malaysia are modelled by fitting the monthly maxima to the Generalized Extreme Value (GEV) distribution. The Mann-Kendall (MK) test suggests a non-stationary model. Two models are considered for stations with a trend, and the likelihood ratio test is used to determine the best-fitting model. Results show that half of the stations favour a model with a linear trend in the location parameter. The return level, i.e. the level of the event (maximum temperature) that is expected to be exceeded once, on average, in a given number of years, is then obtained.
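
A sketch of the model comparison step is given below: a stationary GEV fit is compared with a fit whose location parameter is linear in time via a likelihood ratio test. The synthetic monthly maxima, the use of SciPy's genextreme parameterization (shape c = -xi) and the optimizer settings are illustrative assumptions.

# Hedged sketch: likelihood ratio test of stationary GEV versus trend-in-location GEV.
import numpy as np
from scipy import stats, optimize

def nll_trend(params, x, t):
    mu0, mu1, log_sigma, c = params
    return -np.sum(stats.genextreme.logpdf(x, c, loc=mu0 + mu1 * t, scale=np.exp(log_sigma)))

rng = np.random.default_rng(9)
t = np.arange(240)                                     # 20 years of monthly maxima
x = 33 + 0.004 * t + 1.2 * rng.gumbel(size=t.size)     # toy maxima with a weak trend

# Stationary fit via SciPy, trend fit via direct likelihood maximization.
c0, loc0, scale0 = stats.genextreme.fit(x)
ll0 = np.sum(stats.genextreme.logpdf(x, c0, loc=loc0, scale=scale0))
res = optimize.minimize(nll_trend, x0=[loc0, 0.0, np.log(scale0), c0],
                        args=(x, t), method="Nelder-Mead")
ll1 = -res.fun

D = 2 * (ll1 - ll0)                                    # LR statistic, ~ chi2(1) under H0
p_value = stats.chi2.sf(D, df=1)
print(f"LR statistic = {D:.2f}, p = {p_value:.3f} (trend model retained if p is small)")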

Noise Estimation for Speech Enhancement in Non-Stationary Environments-A New Method

This paper presents a new method for estimating the non-stationary noise power spectral density given a noisy signal. The method is based on averaging the noisy speech power spectrum using time- and frequency-dependent smoothing factors. These factors are adjusted based on the signal-presence probability in individual frequency bins. Signal presence is determined by computing the ratio of the noisy speech power spectrum to its local minimum, which is updated continuously by averaging past values of the noisy speech power spectra with a look-ahead factor. The method adapts very quickly to highly non-stationary noise environments. The proposed method achieves significant improvements over a system that uses a voice activity detector (VAD) for noise estimation.
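
The recursion described above can be sketched roughly as follows, in the spirit of minima-controlled noise estimators: smooth the noisy power spectrum, track a continuously updated local minimum, derive a crude speech-presence decision from the power-to-minimum ratio, and let that decision set a time-frequency dependent smoothing factor for the noise update. All constants and the simplified minimum tracking are illustrative, not the paper's exact rules.

# Hedged sketch: minima-controlled recursive noise PSD estimation.
import numpy as np

def estimate_noise_psd(power_frames, alpha_s=0.8, alpha_d=0.95, gamma=0.998, delta=5.0):
    """power_frames: (n_frames, n_bins) array of |STFT|^2 values of the noisy speech."""
    P = power_frames[0].copy()        # smoothed noisy power spectrum
    Pmin = P.copy()                   # continuously tracked local minimum
    noise = P.copy()                  # noise PSD estimate
    track = np.empty_like(power_frames)
    for t in range(power_frames.shape[0]):
        P = alpha_s * P + (1 - alpha_s) * power_frames[t]
        # Simplified continuous minimum tracking: fall instantly, rise only slowly.
        Pmin = np.where(P < Pmin, P, gamma * Pmin + (1 - gamma) * P)
        presence = (P / np.maximum(Pmin, 1e-12)) > delta   # crude speech-presence decision
        alpha = alpha_d + (1 - alpha_d) * presence         # freeze the update where speech is present
        noise = alpha * noise + (1 - alpha) * power_frames[t]
        track[t] = noise
    return track

# Toy usage: a stationary noise floor with a louder "speech" burst in the middle.
rng = np.random.default_rng(10)
frames = rng.exponential(1.0, size=(300, 129))
frames[100:160] *= 25.0
est = estimate_noise_psd(frames)
print("noise-estimate drift across the burst:", round(float(est[159].mean() / est[99].mean()), 2))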

Comparison of Detrending Methods in Spectral Analysis of Heart Rate Variability

A non-stationary trend in the R-R interval series is considered a main factor that can strongly influence the evaluation of spectral analysis, and it is suggested that trends be removed in order to obtain reliable results. In this study, three detrending methods, the smoothness priors approach, the wavelet method and empirical mode decomposition, were compared on artificial R-R interval series with four types of simulated trends. The Lomb-Scargle periodogram was used for spectral analysis of the R-R interval series. Results indicated that the wavelet method showed better overall performance than the other two methods and was also more time-saving; it was therefore selected for spectral analysis of the real R-R interval series of thirty-seven healthy subjects. Significant decreases (19.94±5.87% in the low frequency band and 18.97±5.78% in the ratio (p
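
For completeness, a smoothness priors detrending step followed by a Lomb-Scargle periodogram can be sketched as below; the regularization parameter lambda, the frequency grid and the toy R-R series are illustrative choices.

# Hedged sketch: smoothness priors detrending of an R-R series and Lomb-Scargle spectrum.
import numpy as np
from scipy.signal import lombscargle

def smoothness_priors_detrend(rr, lam=500.0):
    n = len(rr)
    # Second-difference operator D2, shape (n-2, n).
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam ** 2 * (D2.T @ D2), rr)
    return rr - trend

rng = np.random.default_rng(11)
n = 300
t = np.cumsum(0.8 + 0.05 * rng.standard_normal(n))                       # beat times (s)
rr = 0.8 + 0.02 * np.sin(2 * np.pi * 0.25 * t) + 0.0005 * np.arange(n)   # HF component + slow trend
detrended = smoothness_priors_detrend(rr)

freqs = 2 * np.pi * np.linspace(0.01, 0.5, 200)                          # angular frequencies (rad/s)
psd = lombscargle(t, detrended - detrended.mean(), freqs)
print("peak frequency (Hz):", round(float(freqs[np.argmax(psd)] / (2 * np.pi)), 3))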

Improved Estimation of Evolutionary Spectrum based on Short Time Fourier Transforms and Modified Magnitude Group Delay by Signal Decomposition

A new estimator of the evolutionary spectrum (ES) based on the short time Fourier transform (STFT) and the modified group delay function (MGDF) with signal decomposition (SD) is proposed. The STFT, due to its built-in averaging, suppresses the cross terms, and the MGDF preserves the frequency resolution of the rectangular window while reducing the Gibbs ripple. The present work uses SD to overcome the magnitude distortion observed in multi-component non-stationary signals with STFT and MGDF estimation of the ES. The SD is achieved either through a discrete cosine transform based harmonic wavelet transform (DCTHWT) or perfect reconstruction filter banks (PRFB). The MGDF also improves the signal-to-noise ratio by removing associated noise. The performance of the present method is illustrated for cross chirp and frequency shift keying (FSK) signals, and it is better than that of STFT-MGDF (STFT-GD) alone; further, its noise immunity is better than that of the STFT. The SD based methods, however, cannot bring out the frequency transition path from band to band clearly, as there is a gap in the contour plot at the transition. The PRFB based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.

A New Time-Frequency Speech Analysis Approach Based On Adaptive Fourier Decomposition

In this paper, a new adaptive Fourier decomposition (AFD) based time-frequency speech analysis approach is proposed. Given that the fundamental frequency of speech signals often fluctuates, the classical short-time Fourier transform (STFT) based spectrogram analysis suffers from the difficulty of window size selection. AFD is a newly developed signal decomposition theory designed to deal with time-varying non-stationary signals. Its outstanding characteristic is that it provides an instantaneous frequency for each decomposed component, so time-frequency analysis becomes easier. Experiments are conducted on a sample sentence from the TIMIT Acoustic-Phonetic Continuous Speech Corpus. The results show that the AFD based time-frequency distribution outperforms the STFT based one.