Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

The main objective of this study is to find a suitable approach for monitoring land infrastructure growth over time using multispectral satellite images. Bi-temporal change detection cannot capture the continuous change that occurs over a long period. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period, assuming no considerable change during that period, and then compares it with a multispectral image acquired at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region: the generalized likelihood ratio test (GLRT) decides whether a pixel has changed relative to the probabilistic model estimated for that pixel. The images are assumed to have been co-registered prior to estimation; to minimize residual co-registration error, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 acquired between 2015 and 2018 are used for this purpose. The method faces several challenges. The first is obtaining a sufficiently large number of images for multivariate distribution modelling, since many images must be discarded because of cloud cover. Imperfect modelling also leads to a high probability of false alarm. Overall, the probabilistic method described in this paper has given promising results that merit further investigation.
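A minimal sketch of the pixel-wise GLRT decision described above, assuming a multivariate Gaussian model per pixel (the paper's exact distribution and thresholding are not specified here); the arrays, band count and threshold value are illustrative.

```python
import numpy as np

def glrt_change_score(history, new_vec, eps=1e-6):
    """GLRT-style score for one pixel.

    history : (T, B) array of B-band reflectances over T "no-change" dates.
    new_vec : (B,) spectral vector observed at the later date.
    Returns a scalar score; larger means more likely changed.
    """
    mu = history.mean(axis=0)
    cov = np.cov(history, rowvar=False) + eps * np.eye(history.shape[1])
    d = new_vec - mu
    # The squared Mahalanobis distance is a monotone function of the Gaussian
    # likelihood ratio between the "changed" and "no change" hypotheses.
    return float(d @ np.linalg.inv(cov) @ d)

def detect_changes(stack, new_image, threshold=20.0):
    """stack: (T, H, W, B) no-change series; new_image: (H, W, B)."""
    T, H, W, B = stack.shape
    change_map = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            # Pool the 8-neighbourhood to soften residual co-registration error.
            i0, i1 = max(i - 1, 0), min(i + 2, H)
            j0, j1 = max(j - 1, 0), min(j + 2, W)
            hist = stack[:, i0:i1, j0:j1, :].reshape(-1, B)
            change_map[i, j] = glrt_change_score(hist, new_image[i, j]) > threshold
    return change_map
```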

Non-Linear Control Based on State Estimation for the Convoy of Autonomous Vehicles

In this paper, a longitudinal and lateral control approach based on a nonlinear observer is proposed for a convoy of autonomous vehicles following a desired trajectory. To the authors' best knowledge, this topic has not yet been sufficiently addressed in the literature on the control of multiple vehicles. The modeling of the vehicle convoy is revisited using a robotics approach for simulation purposes and control design. With these models, a sliding mode observer is proposed to estimate the states of each vehicle in the convoy from the available sensors; a sliding mode controller based on this observer is then used to control the longitudinal and lateral motion. Validation and performance evaluation are carried out using the well-known driving simulator Scanner-Studio. Results are presented for different maneuvers of a five-vehicle convoy.
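A minimal sliding mode control sketch for a single longitudinal degree of freedom, assuming a simple double-integrator spacing-error model; the gains, sliding surface and vehicle model below are illustrative and not taken from the paper.

```python
import numpy as np

def simulate_sliding_mode_spacing(t_end=10.0, dt=0.01, lam=1.5, k=4.0):
    """Drive the spacing error e and its rate de to zero with u = -k*sat(s),
    where s = de + lam*e is the sliding surface (double-integrator model)."""
    n = int(t_end / dt)
    e, de = 5.0, 0.0                      # initial spacing error [m] and its rate [m/s]
    history = np.zeros((n, 2))
    for i in range(n):
        s = de + lam * e                  # sliding surface
        u = -k * np.tanh(s / 0.1)         # smoothed sign() to limit chattering
        de += u * dt                      # error acceleration = control input
        e += de * dt
        history[i] = (e, de)
    return history

traj = simulate_sliding_mode_spacing()
print("final spacing error:", round(traj[-1, 0], 3), "m")
```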

The Estimation of Bird Diversity Loss and Gain as an Impact of Oil Palm Plantation: Study Case in KJNP Estate Riau Province

The rapid growth of the oil palm industry in Indonesia has drawn many negative accusations from various parties, who argue that oil palm plantations damage the environment and biodiversity, including birds. Since research on the impact of oil palm plantations on bird diversity is still limited, this study was developed to gain further understanding. Data on bird diversity were collected in March 2018 in the KJNP Estate, Riau Province, using the strip transect method on five land cover types (young, intermediate and old oil palm plantation, a high conservation value area, and crop fields as the baseline). The observations were conducted simultaneously, with three repetitions. The results show that the baseline has 19 bird species, while the land covers established after oil palm planting have 39 species. The HCV (high conservation value) area shows the largest increase in diversity value. The oil palm plantation has changed the composition of bird species. The highest similarity index, 0.65, is shown by the young oil palm land cover, while the lowest, 0.43, is shown by the HCV area. Overall, the presence of the oil palm plantation had a positive impact by increasing bird species diversity, with 23 species gained and 3 species lost in total.

Urban Areas Management in Developing Countries: Analysis of the Urban Areas Crossed with Risk of Storm Water Drains, Aswan-Egypt

One of the riskiest areas in Aswan is Abouelreesh, which suffers from flood disasters: heavy deluges inundate urban areas, causing considerable damage to buildings and infrastructure. Moreover, the main problem is urban sprawl towards this risky area. This paper aims to identify the urban areas located in zones prone to flash floods. Analyzing this phenomenon requires a large amount of data to ensure satisfactory results; however, official and field data were limited in this case, so free sources of satellite data were used. ArcGIS tools were used to derive the storm water drainage network by analyzing DEM files. Additionally, historical imagery in Google Earth was studied to determine the age of each building. The final step was to overlay the urban area layer and the storm water drainage layer to identify the vulnerable areas. The results of this study should help urban planners and government officials estimate disaster risk and develop preliminary plans to rehabilitate the risky area, especially the urban areas located in torrent paths.
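A minimal sketch of deriving a storm water drainage network from a DEM with ArcGIS Spatial Analyst tools, assuming the extension is licensed; the file names, workspace and accumulation threshold are placeholders, not the paper's actual data.

```python
# Requires ArcGIS with the Spatial Analyst extension.
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Con

arcpy.env.workspace = r"C:\data\aswan"          # placeholder workspace
arcpy.CheckOutExtension("Spatial")

filled = Fill("dem.tif")                         # remove sinks from the DEM
flow_dir = FlowDirection(filled)                 # D8 flow direction
flow_acc = FlowAccumulation(flow_dir)            # upstream contributing cells

# Keep only cells draining a large enough area; the threshold is illustrative.
streams = Con(flow_acc > 1000, 1)
streams.save("storm_drains.tif")

# The resulting raster can then be converted to polylines and overlaid with the
# digitized urban-area layer to flag buildings located inside torrent paths.
```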

A Decision Tree Approach to Estimate Permanent Residents Using Remote Sensing Data in Lebanese Municipalities

Population estimation using Geographic Information Systems (GIS) and remote sensing faces many obstacles, such as determining permanent residents. A permanent resident is an individual who stays and works in his or her village during all four seasons; those who move to other cities or villages are therefore excluded from this category. The aim of this study is to identify the factors affecting the percentage of permanent residents in a village and to determine the weight attributed to each factor. To do so, six factors were chosen (slope, precipitation, temperature, number of services, time to the Central Business District (CBD) and proximity to conflict zones), and each factor was evaluated using one of the following data sources: a 50 m contour line map, a precipitation map, four temperature maps and data collected through surveys. The weighting was performed using the decision tree method. As a result, temperature (50.8%) and precipitation (46.5%) are the most influential factors.
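A minimal sketch of how factor weights can be read from a fitted decision tree, assuming scikit-learn and a tabular dataset with the six factors as columns; the synthetic data and target below are purely illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Illustrative synthetic data: one row per village, six candidate factors.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "slope": rng.uniform(0, 30, 200),
    "precipitation": rng.uniform(400, 1200, 200),
    "temperature": rng.uniform(10, 28, 200),
    "services": rng.integers(0, 20, 200),
    "time_to_cbd": rng.uniform(5, 120, 200),
    "conflict_proximity": rng.uniform(0, 50, 200),
})
# Placeholder target: percentage of permanent residents per village.
y = rng.uniform(10, 95, 200)

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(df, y)

# Impurity-based importances sum to 1 and can be read as factor weights.
weights = pd.Series(tree.feature_importances_, index=df.columns).sort_values(ascending=False)
print(weights.round(3))
```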

Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry

Flammability analysis of extruded polystyrene (XPS) has become crucial due to its use as an insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose (KAS) and Flynn-Wall-Ozawa (FWO) methods, the degradation kinetics of two pure XPS products from the local market, one red and one grey, were obtained from thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed under the same heating rates. The experiments showed that the red XPS released more heat than the grey XPS, and both materials exhibited two mass loss stages. Consequently, the kinetic parameters of the red XPS were higher than those of the grey XPS. A comparative evaluation of the activation energies from MCC and TG showed an insignificant degree of deviation, signifying an equivalent apparent activation energy from both methods. However, when the dependence of the activation energy on the extent of conversion was compared between TG and MCC, different activation energy profiles appeared as a result of the different chemical pathways.
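A minimal sketch of the Kissinger-Akahira-Sunose (KAS) evaluation at a fixed conversion, assuming several heating rates and the temperatures at which that conversion is reached; the numbers below are illustrative, not the paper's data. KAS uses ln(beta/T^2) = const - E_a/(R*T), so the slope of ln(beta/T^2) versus 1/T gives -E_a/R.

```python
import numpy as np

R = 8.314  # J/(mol*K), universal gas constant

# Illustrative data: heating rates [K/min] and temperatures [K] at which a
# fixed extent of conversion (e.g. alpha = 0.5) is reached in each TG run.
beta = np.array([5.0, 10.0, 20.0, 40.0])
T = np.array([650.0, 661.0, 673.0, 686.0])

# KAS: ln(beta / T^2) is linear in 1/T with slope -Ea/R.
y = np.log(beta / T**2)
x = 1.0 / T
slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * R  # apparent activation energy [J/mol]

print(f"apparent activation energy ~ {Ea / 1000:.1f} kJ/mol")
```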

Estimation of Uncertainty of Thermal Conductivity Measurement with Single Laboratory Validation Approach

The thermal conductivity of thermal insulation materials is measured with a Heat Flow Meter (HFM) apparatus. The components of uncertainty are complex and difficult to quantify in routine measurement with a modelling approach. In this study, the uncertainty of thermal conductivity measurement was estimated with a single laboratory validation approach. The within-laboratory reproducibility was 1.1%. The standard uncertainty of method and laboratory bias, evaluated using the SRM1453 expanded polystyrene board, was dominant at 1.4%; however, no significant bias was found. For sample measurement, the sources of uncertainty were repeatability, sample density and the thermal conductivity resolution of the HFM. From these components, the combined uncertainty was calculated. In summary, the thermal conductivity of the sample, a polystyrene foam, was reported as 0.03367 W/m·K ± 3.5% (k = 2) at a mean temperature of 23.5 °C. The single laboratory validation approach is a simple way for a routine testing laboratory to estimate the uncertainty of thermal conductivity measurement with an HFM, in accordance with ISO/IEC 17025:2017 requirements. This is meaningful for improving laboratory competence, product quality control and conformity assessment.
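A minimal sketch of combining uncertainty components by root sum of squares and expanding with a coverage factor k = 2; only the 1.1% reproducibility and 1.4% method/laboratory bias come from the abstract, while the remaining component values are illustrative placeholders.

```python
import math

# Relative standard uncertainty components (as fractions).
u_reproducibility = 0.011   # within-laboratory reproducibility (from the abstract)
u_method_bias     = 0.014   # method and laboratory bias via SRM1453 (from the abstract)
u_repeatability   = 0.006   # illustrative placeholder
u_density         = 0.005   # illustrative placeholder
u_resolution      = 0.003   # illustrative placeholder

components = [u_reproducibility, u_method_bias, u_repeatability, u_density, u_resolution]

# Combined standard uncertainty: root sum of squares of independent components.
u_combined = math.sqrt(sum(u**2 for u in components))
U_expanded = 2.0 * u_combined   # coverage factor k = 2 (~95 % confidence)

lam = 0.03367  # measured thermal conductivity [W/(m*K)]
print(f"combined: {u_combined*100:.1f} %, expanded (k=2): {U_expanded*100:.1f} %")
print(f"lambda = {lam} W/(m*K) ± {U_expanded*100:.1f} % (k = 2)")
```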

Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Intelligibility is an essential characteristic of a speech signal, as it determines how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and it reduces the computational burden. The algorithm selects a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility measure (STOI), and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
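A minimal sketch of frame-wise wavelet denoising with a per-frame decomposition level, assuming PyWavelets; the variance-based level-selection rule and the soft threshold are illustrative stand-ins for the paper's signal-dominant/noise-dominant criteria.

```python
import numpy as np
import pywt

def denoise_frame(frame, wavelet="db4"):
    """Denoise one speech frame with a variance-driven decomposition level."""
    max_level = pywt.dwt_max_level(len(frame), pywt.Wavelet(wavelet).dec_len)
    # Illustrative rule: noisier (higher-variance) frames get a deeper decomposition.
    level = min(max_level, 2 if np.var(frame) < 0.01 else 4)

    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest detail
    thr = sigma * np.sqrt(2 * np.log(len(frame)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]

# Usage on a noisy toy signal split into 256-sample frames.
rng = np.random.default_rng(1)
noisy = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 2048)) + 0.3 * rng.standard_normal(2048)
clean = np.concatenate([denoise_frame(noisy[i:i + 256]) for i in range(0, 2048, 256)])
```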

Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error

This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe the process over a whole time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to provide an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, which reduces the additive error. In order to deal with the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times so as to restrict the estimator to the domain where the spectral density is non-zero. We show that the proposed estimator is asymptotically unbiased and consistent. We thus obtain an estimate that addresses both difficulties: the choice of observation instants for a continuous-time process and observations affected by a constant error.
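A minimal sketch of smoothing a periodogram computed from discretely sampled observations, assuming a generic smoothing window (the paper's polynomial Jackson kernel and its specific sampling scheme are not reproduced here); the signal and window length are illustrative.

```python
import numpy as np

def smoothed_periodogram(x, dt, window_len=11):
    """Raw periodogram of a discretely sampled signal, smoothed across frequencies."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Raw periodogram: squared magnitude of the DFT, normalized by record length.
    pgram = (np.abs(np.fft.rfft(x)) ** 2) * dt / n
    # Frequency-domain smoothing with a simple triangular window
    # (a stand-in for a kernel such as the Jackson kernel).
    win = np.bartlett(window_len)
    win /= win.sum()
    return freqs, np.convolve(pgram, win, mode="same")

# Illustrative usage: heavy-tailed signal sampled at dt = 0.01 s.
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_t(df=3, size=4096)) * 0.01
freqs, spec = smoothed_periodogram(x, dt=0.01)
```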

Effect of Density on the Shear Modulus and Damping Ratio of Saturated Sand in Small Strain

The dynamic properties of soil at small strains are important to geotechnical engineers for describing soil behavior and estimating the deformations of earth structures, especially significant structures. This paper presents the effect of density on the shear modulus and damping ratio of saturated clean sand at various isotropic confining pressures. For this purpose, specimens were compared at two relative densities: loose (Dr = 30%) and dense (Dr = 70%). The dynamic parameters were obtained from a series of consolidated undrained fixed-free torsional resonant column tests at small strain. Sand No. 161 was selected for this study. The experiments show that as sand density and confining pressure increase, the shear modulus increases and the damping ratio decreases.

Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients

Synchronizing and damping torque coefficients of a synchronous machine can give a clear picture of machine behavior during transients. These coefficients are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain. The problem is formulated as a dynamic estimation problem, and an objective function that minimizes the squared error in the estimated coefficients is designed. The method is tested on a practical system with different study cases. Results are reported and a thorough discussion is presented. The study illustrates that the proposed method can estimate the stability coefficients in critical stable cases where other methods may fail. The tests show that the proposed tool is accurate and reliable for estimating the machine coefficients for power system stability assessment.
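A minimal sketch of the underlying estimation idea, assuming the classical decomposition of the electrical torque deviation, dTe = Ks*d(delta) + Kd*d(omega), and using ordinary least squares in place of the paper's crow search optimizer; the simulated machine response below is purely illustrative.

```python
import numpy as np

# Illustrative "true" coefficients and a simulated small-signal response.
Ks_true, Kd_true = 1.2, 0.8           # synchronizing / damping torque coefficients (p.u.)
t = np.linspace(0.0, 5.0, 2000)
delta = 0.1 * np.exp(-0.4 * t) * np.cos(8.0 * t)   # rotor angle deviation (rad)
omega = np.gradient(delta, t)                       # speed deviation (illustrative)
rng = np.random.default_rng(3)
Te = Ks_true * delta + Kd_true * omega + 0.002 * rng.standard_normal(t.size)

# Least-squares fit of Te = Ks*delta + Kd*omega (minimizes the squared error).
A = np.column_stack([delta, omega])
(Ks_est, Kd_est), *_ = np.linalg.lstsq(A, Te, rcond=None)
print(f"Ks ~ {Ks_est:.3f}, Kd ~ {Kd_est:.3f}")
```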

Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, building on a recent econophysics study of a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs by introducing maximum likelihood estimation for censored data. A way to determine the optimal stock level while pricing the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows that the methods effectively reduce food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss yields substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, a profit loss of around 1% halves disposal when this constant is 0.12, which is the value actually observed for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially for large sales numbers.
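A minimal sketch of checking Taylor's law (fluctuation scaling) in item-level sales counts, assuming a common power-law relation between the standard deviation and the mean of daily sales fitted on log-log axes; the synthetic data and the parameter names c and beta are illustrative, not the paper's notation.

```python
import numpy as np

# Illustrative synthetic daily sales counts for many items (rows: items, cols: days).
rng = np.random.default_rng(4)
means = rng.uniform(1, 500, size=300)                       # item-level mean sales
sales = rng.poisson(means[:, None], size=(300, 90)).astype(float)

mu = sales.mean(axis=1)
sigma = sales.std(axis=1)

# Taylor's law (fluctuation scaling): sigma ~ c * mu**beta, fitted on log-log axes.
beta, log_c = np.polyfit(np.log(mu), np.log(sigma), 1)
print(f"exponent beta ~ {beta:.2f}, proportionality constant c ~ {np.exp(log_c):.2f}")
```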

Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network

Human pose estimation and tracking aim to accurately identify and locate the positions of human joints in video. This is a computer vision task of great significance for human motion recognition, behavior understanding and scene analysis. There has been remarkable progress on human pose estimation in recent years; however, more research is needed on human pose tracking, especially online tracking. In this paper, a framework called PoseSRPN is proposed for online single-person pose estimation and tracking. We use a Siamese network with an attached pose estimation branch to incorporate single-person pose tracking (SPT) and visual object tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling-and-convolution structure with deconvolution. By augmenting the loss of the fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in a single stage. Once trained, PoseSRPN relies only on a single bounding-box initialization and produces human joint locations. The experimental results show that, while maintaining good pose estimation accuracy on the COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frames/s, which is superior to other pose tracking frameworks.
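A minimal sketch of the two ingredients described above, assuming PyTorch: a Siamese cross-correlation between template and search features, and a pose head that upsamples features with deconvolution (ConvTranspose2d) to joint heatmaps. The channel sizes, backbone and number of joints are illustrative, not PoseSRPN's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySiamesePose(nn.Module):
    def __init__(self, channels=64, num_joints=17):
        super().__init__()
        # Shared (Siamese) backbone applied to both template and search crops.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pose branch: deconvolution upsamples search features to joint heatmaps.
        self.pose_head = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, num_joints, 4, stride=2, padding=1),
        )

    def forward(self, template, search):
        zf = self.backbone(template)            # (1, C, hz, wz)
        xf = self.backbone(search)              # (1, C, hx, wx)
        # Cross-correlation: template features act as the convolution kernel.
        response = F.conv2d(xf, zf)             # similarity map used for tracking
        heatmaps = self.pose_head(xf)           # per-joint heatmaps used for pose
        return response, heatmaps

model = TinySiamesePose()
resp, maps = model(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 128, 128))
print(resp.shape, maps.shape)
```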

Absorbed Dose Estimation of 68Ga-EDTMP in Human Organs

Bone metastases are observed in a wide range of cancers and lead to intolerable pain. Since early detection can help physicians decide on the type of treatment, various phosphonate-based radiopharmaceuticals such as 68Ga-EDTMP have been developed. In this work, given the importance of absorbed dose, the human absorbed dose of this new agent was calculated for the first time, based on biodistribution data in wild-type rats. 68Ga was obtained from a 68Ge/68Ga generator with radionuclidic and radiochemical purities higher than 99%. The radiolabeled complex was prepared under optimized conditions. Its radiochemical purity was checked by instant thin layer chromatography (ITLC) using Whatman No. 2 paper and saline, and was found to be higher than 99%. The radiolabeled complex was injected into the wild-type rats and its biodistribution was studied for up to 120 min. As expected, the major accumulation was observed in the bone. The absorbed dose of each human organ was calculated from the rat biodistribution using the RADAR method. Bone surface and bone marrow received the highest absorbed doses, at 0.112 and 0.053 mSv/MBq, respectively. According to these results, the radiolabeled complex is a suitable and safe option for PET bone imaging.

Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms that could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, which opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours are troublesome because they can hinder a learner's knowledge development. Behaviours that hinder knowledge development, specifically those related to gaming the system, also raise ambiguity about a learner's knowledge mastery. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing instead of going over the learning material. These behaviours were detected in learners who repeated the test at least four times before passing the course. These findings suggest that gauging a learner's mastery from multiple-choice test scores alone is a naive approach. Thus, next steps will consider incorporating additional data points, knowledge estimation models to model a learner's knowledge mastery more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be used to make changes to a course. For example, course content may require modification (certain sections of learning material may be shown not to help many learners master the intended learning outcomes), or course design may need changes (such as the type and duration of feedback).
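A minimal sketch of detecting the repeated-test behaviour from xAPI statements, assuming each statement is a dictionary with the standard actor/verb/object/result fields; the verb IRIs, test identifier and minimum-attempt threshold illustrate the rule and are not the study's exact implementation.

```python
from collections import defaultdict

# Illustrative xAPI statements (normally retrieved from an LRS via its REST API).
statements = [
    {"actor": {"mbox": "mailto:a@corp.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/failed"},
     "object": {"id": "https://lms.example/compliance/final-test"},
     "result": {"success": False}},
    # ... more statements ...
    {"actor": {"mbox": "mailto:a@corp.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/passed"},
     "object": {"id": "https://lms.example/compliance/final-test"},
     "result": {"success": True}},
]

def repeated_test_learners(statements, test_id, min_attempts=4):
    """Return learners who attempted `test_id` at least `min_attempts` times before passing."""
    attempts = defaultdict(int)
    flagged = set()
    for st in statements:                       # assumed ordered by timestamp
        if st["object"]["id"] != test_id:
            continue
        learner = st["actor"]["mbox"]
        attempts[learner] += 1
        if st["result"].get("success") and attempts[learner] >= min_attempts:
            flagged.add(learner)
    return flagged

print(repeated_test_learners(statements, "https://lms.example/compliance/final-test"))
```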

The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through this simulation experiment, and the mean estimates of the model parameters are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on several covariates: policemen, daily patrols, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
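A minimal sketch of simulating a bivariate INARMA(1,1)-type count series with binomial thinning and correlated Poisson innovations, written in Python rather than the paper's R implementation; the thinning parameters and the way cross-correlation is induced (a shared Poisson component in the innovations) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(5)

def thin(alpha, count):
    """Binomial thinning operator: alpha o X ~ Binomial(X, alpha)."""
    return rng.binomial(count, alpha)

def simulate_binarma11(n=500, a=(0.4, 0.3), b=(0.2, 0.25), lam=(2.0, 3.0), lam_common=1.0):
    """Two INARMA(1,1) series whose innovations share a common Poisson component."""
    x = np.zeros((n, 2), dtype=int)
    r_prev = np.zeros(2, dtype=int)
    for t in range(1, n):
        common = rng.poisson(lam_common)                  # induces cross-correlation
        r = np.array([rng.poisson(lam[0]) + common,
                      rng.poisson(lam[1]) + common])
        for k in range(2):
            # X_t = a o X_{t-1} + R_t + b o R_{t-1}
            x[t, k] = thin(a[k], x[t - 1, k]) + r[k] + thin(b[k], r_prev[k])
        r_prev = r
    return x

series = simulate_binarma11()
print("sample cross-correlation:", round(np.corrcoef(series.T)[0, 1], 3))
```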

A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the use of data mining techniques to support setting up the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for visualising risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
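A minimal sketch of the naive Bayes recommendation step, assuming scikit-learn, risk factor values encoded as categorical integers, and three endangerment groups; the feature names, encodings and training rows are illustrative placeholders for the survey-derived knowledge base.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Illustrative knowledge base: each row is an institutional risk profile with
# encoded risk factors (0 = low, 1 = medium, 2 = high).
# Columns: format obsolescence, software support, community adoption, documentation.
X = np.array([
    [2, 2, 1, 2],
    [0, 0, 0, 1],
    [1, 2, 1, 1],
    [0, 1, 0, 0],
    [2, 1, 2, 2],
    [1, 0, 1, 0],
])
y = np.array(["high", "low", "medium", "low", "high", "medium"])  # endangerment groups

clf = CategoricalNB().fit(X, y)

# Recommend the matching endangerment group for a newly defined profile.
new_profile = np.array([[1, 2, 2, 1]])
print("recommended group:", clf.predict(new_profile)[0])
print("class probabilities:", dict(zip(clf.classes_, clf.predict_proba(new_profile)[0].round(2))))
```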

Adsorption and Electrochemical Regeneration for Industrial Wastewater Treatment

Graphite intercalation compound (GIC) has been demonstrated to be a useful, low-capacity and rapid adsorbent for the removal of organic micropollutants from water. The high electrical conductivity and low capacity of the material lend themselves to electrochemical regeneration. Following electrochemical regeneration, equilibrium loading under similar conditions is reported to exceed that achieved by the fresh adsorbent; this behavior is reported in terms of the regeneration efficiency being greater than 100%. In this work, surface analysis techniques are employed to investigate the material in three states: 'Fresh', 'Loaded' and 'Regenerated'. 'Fresh' GIC is shown to exhibit a hydrogen- and oxygen-rich surface layer approximately 150 nm thick. 'Loaded' GIC shows a similar but slightly thicker surface layer (approximately 370 nm) and a significant enhancement in hydrogen and oxygen abundance extending beyond 600 nm from the surface. 'Regenerated' GIC shows an oxygen-rich layer, slightly thicker than in the fresh case at approximately 220 nm, while showing much lower hydrogen enrichment at the surface. The results demonstrate that while the electrochemical regeneration effectively removes the phenol model pollutant, it also oxidizes the exposed carbon surface. These results may have a significant impact on the estimation of adsorbent life.

Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks

The estimation of leak flow rates through narrow cracks in structures is important for nuclear reactor safety, since leak flow can be detected before a loss-of-coolant accident occurs. The two-phase critical leak flow rates are calculated using a system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, as well as the leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature, but are strongly affected by the roughness of the crack surface.

Aliasing Free and Additive Error in Spectra for Alpha Stable Signals

This work focuses on the continuous-time symmetric alpha-stable process, frequently used to model signals with indefinitely growing variance that are often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations with the Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We study the convergence rate of the estimator and show that it improves when the spectral density is zero at the origin. We thus construct an estimator of the additive error that can be subtracted to approach the original, error-free signal.