Identification of an LTI Autonomous All-Pole System Using an Eigenvector Algorithm

This paper presents a method for identifying a linear time-invariant (LTI) autonomous all-pole system using singular value decomposition. The novelty of this paper is twofold: first, a MUSIC algorithm for estimating complex frequencies from real measurements is proposed; second, using the proposed algorithm, the coefficients of the differential equation that governs the LTI system can be identified by switching off the input signal. For this purpose, we need only switch off the input, apply our complex MUSIC algorithm, and determine the coefficients as symmetric polynomials in the complex frequencies. This method can be applied to unstable systems and offers higher resolution than time-series solutions when noisy data are used. The classical performance bound, the Cramér-Rao bound (CRB), is used as a basis for comparing the performance of the proposed method for estimating multiple poles in a noisy exponential signal.
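A minimal, hedged sketch of the idea, not the authors' exact algorithm: the code below estimates the complex frequencies of a sum of damped exponentials via an SVD-based signal subspace (using a shift-invariance step rather than a MUSIC spectral search), then recovers the ODE coefficients as the characteristic polynomial, i.e. the elementary symmetric polynomials, of the estimated poles. All signal parameters are illustrative.

```python
import numpy as np

def estimate_poles(y, p, dt):
    """Estimate p complex poles from uniformly sampled data y."""
    n = len(y)
    m = n // 2
    # Hankel data matrix; its row space approximates the signal subspace.
    H = np.array([y[i:i + m] for i in range(n - m + 1)])
    _, _, Vh = np.linalg.svd(H)
    S = Vh[:p].conj().T                      # signal subspace basis, m x p
    # Shift invariance gives eigenvalues z_k = exp(s_k * dt).
    z = np.linalg.eigvals(np.linalg.pinv(S[:-1]) @ S[1:])
    return np.log(z) / dt                    # complex frequencies s_k

# Example: y'' + 2y' + 5y = 0 has poles -1 +/- 2j.
dt = 0.01
t = np.arange(400) * dt
y = np.exp(-t) * np.cos(2 * t)
s = estimate_poles(y, 2, dt)
print(np.round(np.poly(s).real, 2))          # ODE coefficients ~ [1, 2, 5]
```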

Evaluation of Classification Algorithms for Road Environment Detection

Accurate road environment information is needed for applications such as road maintenance and virtual 3D city modeling. Mobile laser scanning (MLS) efficiently produces dense point clouds over large areas, from which the road and its environment can be modeled in detail. Objects such as buildings, cars, and trees are an important part of road environments. Different methods have been developed for detecting such objects, but accuracy still suffers from problems of illumination, environmental changes, and multiple objects with the same features. In this work, different classifiers, namely multiclass SVM, kNN, and multiclass LDA, are compared for road environment detection. kNN with LBP features achieved the best classification accuracy, 93.3%, outperforming the other classifiers.
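A minimal sketch of the best-performing combination named above (kNN on LBP texture features), assuming pre-extracted grayscale patches; the dataset handling shown in comments is a hypothetical placeholder.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray_patch, P=8, R=1.0):
    """Uniform LBP histogram as a texture descriptor for one patch."""
    lbp = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical data handling: train_patches / test_patches would hold
# grayscale object patches extracted from the MLS-derived imagery.
# feats = np.array([lbp_histogram(p) for p in train_patches])
# clf = KNeighborsClassifier(n_neighbors=5).fit(feats, train_labels)
# acc = clf.score([lbp_histogram(p) for p in test_patches], test_labels)
```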

Heavy Metals Estimation in Coastal Areas Using Remote Sensing, Field Sampling and Classical and Robust Statistics

Sediments are an important accumulation site for toxic contaminants within the aquatic environment. Bioassays are a powerful tool for studying the toxicity of sediments, but they can be expensive. This article presents a methodology for estimating a key property of intertidal sediments in coastal zones: heavy metal concentration. This study, which was developed in the Bay of Santander (Spain), applies classical and robust statistics to CASI-2 hyperspectral images to estimate heavy metal presence and ecotoxicity (TOC). Simultaneous fieldwork (radiometric and chemical sampling) allowed an appropriate atmospheric correction of the CASI-2 images.

Forecasting the Volatility of Geophysical Time Series with Stochastic Volatility Models

This work is devoted to modeling geophysical time series. A stochastic technique with time-varying parameters is used to forecast the volatility of data arising in geophysics. In this study, the volatility is defined as a logarithmic first-order autoregressive process. We observe that including log-volatility in the time-varying parameter estimation, which is carried out via maximum likelihood, significantly improves forecasting. This allows us to conclude that the estimation algorithm for the corresponding one-step-ahead volatility forecasts (with ±2 standard prediction errors) is practical, since it possesses good convergence properties.
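A hedged sketch of this model class, assuming the common linearized quasi-likelihood route (a scalar Kalman filter on log-squared observations) rather than the study's exact estimator: the latent log-volatility follows the stated first-order autoregression, and the parameters are fit by maximum likelihood. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate the stated model: h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,
# observed series y_t = exp(h_t / 2) * eps_t.
rng = np.random.default_rng(0)
T, mu, phi, sigma = 1000, -1.0, 0.95, 0.2
h = np.zeros(T)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

z = np.log(y**2 + 1e-12)            # linearized observation: z_t = h_t + noise
c, r = -1.27, np.pi**2 / 2          # mean and variance of log chi-square(1)

def negloglik(theta):
    mu_, phi_, q = theta[0], np.tanh(theta[1]), np.exp(theta[2])
    a, P, ll = mu_, q / (1 - phi_**2), 0.0
    for zt in z:                    # scalar Kalman filter on z_t
        F = P + r
        v = zt - (a + c)
        ll -= 0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        a, P = a + P / F * v, P - P**2 / F              # measurement update
        a, P = mu_ + phi_ * (a - mu_), phi_**2 * P + q  # time update
    return -ll

res = minimize(negloglik, np.array([-1.0, 2.0, np.log(0.04)]),
               method="Nelder-Mead")
print(res.x[0], np.tanh(res.x[1]), np.exp(res.x[2]))    # ~ mu, phi, sigma^2
```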

Scour Depth Prediction around Bridge Piers Using Neuro-Fuzzy and Neural Network Approaches

Scour depth estimation around bridge piers is frequently considered in river engineering and is one of the key aspects of efficient and optimal bridge structure design. In this study, scour depth around bridge piers is estimated using two methods, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN). The effective parameters in scour depth prediction are first determined via dimensional analysis, and scour depth is then predicted using the ANN and ANFIS methods. The methods' performance is compared with the nonlinear regression (NLR) method. The results show that both methods presented in this study outperform existing methods. Moreover, using the ratio of pier length to flow depth, the ratio of median particle diameter to flow depth, the ratio of pier width to flow depth, the Froude number, and the standard deviation of bed grain size as parameters leads to optimal performance in scour depth estimation.
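A hedged sketch of the ANN branch only (the ANFIS model is omitted): an MLP regressing relative scour depth on the dimensionless groups named above. The data are synthetic placeholders with a toy target shape; a real study would load laboratory or field measurements instead.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
# Columns: [L/y, d50/y, b/y, Fr, sigma_g]; toy target loosely shaped like
# common pier-scour formulas, for pipeline illustration only.
X = rng.uniform(0.1, 2.0, size=(n, 5))
ds = 1.3 * X[:, 2]**0.65 * X[:, 3]**0.4 + 0.05 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ds, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", ann.score(X_te, y_te))
```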

Analysis of Translational Ship Oscillations in a Realistic Environment

To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is used on board to measure the ship's oscillating motions. The three-axis accelerations and three-axis rotational rates provided by the sensor are used as observations. The mathematical model for processing the observation data includes determining the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method suppresses sensor noise and other unwanted errors. The results include not only roll and pitch but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated. These agree well with the measurements after processing with the suggested method.
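A hedged sketch of the kinematic step implied above: translating the sensed acceleration to the center of gravity given the offset vector r (position of the sensor relative to the CG) and the body rates. The EKF that fuses these quantities is omitted, and all numeric values are illustrative.

```python
import numpy as np

def acc_at_cg(a_sensor, omega, omega_dot, r):
    """Rigid-body lever arm: a_CG = a_sensor - omega_dot x r - omega x (omega x r)."""
    return a_sensor - np.cross(omega_dot, r) - np.cross(omega, np.cross(omega, r))

a_s = np.array([0.20, -0.10, 9.90])      # sensed specific force, m/s^2
omega = np.array([0.02, 0.05, 0.00])     # body rates, rad/s
omega_dot = np.array([0.001, 0.0, 0.0])  # angular accelerations, rad/s^2
r = np.array([5.0, 0.0, -1.5])           # sensor position relative to CG, m
print(acc_at_cg(a_s, omega, omega_dot, r))
```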

Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency

In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. The first property is joint sparsity, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers present in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is low-rank. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
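A hedged sketch of the two proximal operators that such a variable-splitting/augmented-Lagrangian scheme alternates between, using the common l2,1 special case of the l2,p penalty; the surrounding ADMM loop and the unmixing data model are omitted.

```python
import numpy as np

def prox_l21(X, tau):
    """Row-wise shrinkage: promotes joint sparsity across pixels."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)

def prox_nuclear(X, tau):
    """Singular value thresholding: promotes a low-rank abundance matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt
```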

A Study of Mode Choice Model Improvement Considering Age Grouping

The purpose of this study is to provide an improved mode choice model that considers age grouping into prime-aged and older travelers. In this study, 2010 Household Travel Survey data were used, and improper samples were removed during the analysis. The chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time were taken from the Household Travel Survey. Through preprocessing, travel time, travel cost, mode, and the shares of people aged 45 to 55 years, 55 to 65 years, and over 65 years were calculated. After this preparation, the mode choice model was constructed in LIMDEP by maximum likelihood estimation. A significance test was conducted for nine parameters (three age groups for each of three modes); the model was then re-estimated with the significant parameters together with the travel cost and travel time variables. The model estimation shows that as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
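A hedged sketch of a discrete-choice fit of the kind estimated in LIMDEP above, here via statsmodels maximum likelihood; the synthetic data stand in for the preprocessed Household Travel Survey variables, and all coefficients are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "travel_time": rng.uniform(10, 90, n),     # minutes
    "travel_cost": rng.uniform(1, 10, n),      # currency units
    "share_65_plus": rng.uniform(0, 1, n),     # age-group ratio
})
# Toy choice process: older travellers lean toward bus (1) over car (0).
u_bus = 0.5 + 2.0 * df["share_65_plus"] - 0.02 * df["travel_time"]
df["mode"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-u_bus))).astype(int)

X = sm.add_constant(df[["travel_time", "travel_cost", "share_65_plus"]])
model = sm.MNLogit(df["mode"], X).fit(disp=False)   # maximum likelihood
print(model.summary())                              # significance tests
```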

The Costs and Benefits of Investment in Safety and Health for Enterprises in Thailand

The purpose of this study is to evaluate the monetary worthiness of the investment and the usefulness of risk estimation as a tool employed by a production section of an electronics factory. The study is based on cases of accidents occurring in production areas. Data were collected from interviews with six production safety coordinators and from the records of the relevant sections. The study presents the ratio of benefits to operating costs for the investment. The results show that investment in the safety measures is worthwhile. In addition, organizations must be able to analyze the causes of accidents and the benefits of investing in protective work processes. They also need to promptly provide manuals so that staff can learn how to protect themselves from accidents and how to use all of the safety equipment.

Assessment of the Energy Balance Method in the Case of Masonry Domes

Masonry dome structures were widely used for covering large spans in the past. The seismic assessment of these historical structures is complicated by the nonlinear behavior of the material, their rigidity, and their particular stability configuration. An assessment method based on the energy balance concept, as well as standard pushover analysis, is used to evaluate the effectiveness of these methods for masonry dome structures. The Soltanieh dome building is used as an example to which the two methods are applied. The performance points, obtained by superimposing the capacity and demand curves in Acceleration-Displacement Response Spectra (ADRS) and energy coordinates, are compared with nonlinear time history analysis as the reference result. The results show good agreement between the dynamic analysis and the energy balance method, but the standard pushover method does not provide an acceptable estimate.
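A hedged sketch of the performance-point step common to both assessment formats: finding where the capacity curve crosses the demand curve on a shared spectral-displacement grid. The curve shapes below are illustrative, not the Soltanieh dome's.

```python
import numpy as np

def performance_point(sd, sa_capacity, sa_demand):
    """First crossing of capacity and demand on a shared Sd grid."""
    diff = sa_capacity - sa_demand
    idx = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
    if len(idx) == 0:
        return None                        # no intersection found
    i = idx[0]
    t = diff[i] / (diff[i] - diff[i + 1])  # linear interpolation
    return sd[i] + t * (sd[i + 1] - sd[i])

sd = np.linspace(0.0, 0.3, 100)
cap = 4.0 * np.sqrt(sd)                    # illustrative capacity curve
dem = 3.0 - 6.0 * sd                       # illustrative demand curve
print(performance_point(sd, cap, dem))     # spectral displacement at crossing
```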

Facial Recognition on the Basis of Facial Fragments

There are many articles that attempt to establish the role of different facial fragments in face recognition, and various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to a fragment, but this approach can only give an approximate estimate. In this paper, we propose a more direct measure of the importance of different fragments for face recognition: selecting a recognition method and a face database, and experimentally investigating the recognition rate using different fragments of faces. We present two such experiments. We selected the PCNC neural classifier as the face recognition method and parts of the LFW (Labeled Faces in the Wild) database as training and testing sets. The recognition rate of the best experiment is comparable with the rate obtained using the whole face.
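A hedged sketch of the experimental protocol, with the PCNC classifier replaced by a simple nearest-neighbour stand-in for illustration: train and test on a fixed rectangular fragment of every LFW face and compare the recognition rate with that of the whole face. The fragment coordinates are arbitrary examples.

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_lfw_people(min_faces_per_person=50)  # downloads LFW on first run
X, y = faces.images, faces.target
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def rate(rows, cols):
    """Recognition rate using only the given fragment of every face."""
    tr = Xtr[:, rows, cols].reshape(len(Xtr), -1)
    te = Xte[:, rows, cols].reshape(len(Xte), -1)
    return KNeighborsClassifier(n_neighbors=1).fit(tr, ytr).score(te, yte)

h, w = X.shape[1:]
print("whole face:", rate(slice(0, h), slice(0, w)))
print("eye band  :", rate(slice(h // 4, h // 2), slice(0, w)))  # illustrative
```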

Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method

Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, so online parameter identification can improve the modelling accuracy. This paper presents a method for online ECM parameter identification using a continuous-time (CT) estimation method. The CT estimation method has several advantages over discrete-time (DT) estimation methods for ECM parameter identification, owing to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium-ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy than the conventional DT recursive least squares method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
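A hedged sketch of the linear-integral-filter idea on the simplest continuous-time model, dy/dt = -a*y + b*u: integrating both sides over a sliding window eliminates the derivative and yields a regression that is linear in (a, b). The full battery ECM adds OCV and series-resistance terms but follows the same pattern; all signals below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n, a_true, b_true = 0.01, 2000, 0.5, 2.0
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):                 # Euler simulation of dy/dt = -a*y + b*u
    y[k] = y[k - 1] + dt * (-a_true * y[k - 1] + b_true * u[k - 1])

# Integral filter: y(t) - y(t-T) = -a * int(y) + b * int(u) over the window.
W = 50                                # window length in samples
cy, cu = np.cumsum(y) * dt, np.cumsum(u) * dt
Iy, Iu = cy[W:] - cy[:-W], cu[W:] - cu[:-W]
dy = y[W:] - y[:-W]
theta, *_ = np.linalg.lstsq(np.column_stack([-Iy, Iu]), dy, rcond=None)
print(theta)                          # ~ [a_true, b_true]
```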

Lithium-Ion Battery State of Charge Estimation Using One State Hysteresis Model with Nonlinear Estimation Strategies

Battery state of charge (SOC) is an important parameter, as it measures the total amount of electrical energy stored in the battery at a given time. The SOC percentage acts like the fuel gauge of a conventional vehicle. Estimating the SOC is therefore essential for monitoring the amount of useful life remaining in the battery system. This paper looks at the implementation of three nonlinear estimation strategies for Li-ion battery SOC estimation. One of the most common behavioral battery models is the one state hysteresis (OSH) model. The extended Kalman filter (EKF), the smooth variable structure filter (SVSF), and the time-varying smoothing boundary layer SVSF are applied to this model, and the results are compared.
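A hedged sketch of one EKF iteration on an OSH-style model with states [SOC, hysteresis]; the OCV curve, hysteresis parameters, sign conventions, and noise covariances are illustrative placeholders, not the paper's values.

```python
import numpy as np

dt, Q_ah, R0, M, gamma = 1.0, 2.0 * 3600, 0.01, 0.02, 1e-3
ocv = lambda s: 3.0 + 1.2 * s            # toy OCV(SOC) curve
docv = lambda s: 1.2                     # its slope

def ekf_step(x, P, i_k, v_meas, Qw=np.diag([1e-8, 1e-8]), Rv=1e-4):
    """One EKF iteration; x = [SOC, h], i_k > 0 on discharge (assumed)."""
    # Prediction: coulomb counting plus hysteresis relaxation toward +/- M.
    f = np.exp(-abs(i_k) * gamma * dt)
    x_pred = np.array([x[0] - dt * i_k / Q_ah,
                       f * x[1] + (1 - f) * M * np.sign(-i_k)])
    F = np.diag([1.0, f])
    P = F @ P @ F.T + Qw
    # Update on the terminal voltage measurement.
    H = np.array([docv(x_pred[0]), 1.0])
    v_pred = ocv(x_pred[0]) + x_pred[1] - R0 * i_k
    S = H @ P @ H + Rv
    K = P @ H / S
    x_new = x_pred + K * (v_meas - v_pred)
    return x_new, (np.eye(2) - np.outer(K, H)) @ P
```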

Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machines have been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industries. Typically, 20% of these parts are new work each year, which means that almost the entire product portfolio is replaced every five years in this low-series manufacturing environment. Consequently, a flexible production system is required, in which estimating the lengths of the frequent setup periods is one of the key success factors. In the investigation, several input parameters were studied and grouped to create an adequate training set for an artificial neural network as a basis for estimating the individual setup periods. The first group collects product information such as the product name and the number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of tools applied is one of the key factors on which the industrial partner's estimates were previously based. The artificial neural network model was trained on several thousand real industrial records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, the influence of the parameter groups with respect to the manufacturing order was also investigated. The paper also highlights the experiences from the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur and the related data are collected.
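A hedged sketch of such an estimation pipeline: the four parameter groups (product, material, surface/tolerance, setup data) one-hot encoded where categorical and fed to a neural network regressor predicting the setup period length. All column names are illustrative stand-ins for the MES/CAD fields described above.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

categorical = ["product_name", "material_type", "colour",
               "machine_type", "work_shift"]
numeric = ["item_count", "finest_surface", "tightest_tolerance", "tool_count"]

pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])
model = Pipeline([
    ("pre", pre),
    ("ann", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)),
])
# df = pd.read_csv("setup_events.csv")   # hypothetical MES/CAD export
# model.fit(df[categorical + numeric], df["setup_minutes"])
```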

Detecting Tomato Flowers in Greenhouses Using Computer Vision

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions, and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks, such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while respecting the computational limitations of the drone's simple processor. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on hue, saturation, and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times throughout the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses were performed on the acquisition angle of the images, the time of day, the cameras, and the thresholding types. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon yielded the best precision and recall. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best precision, recall, and F1 score. With these values, precision and recall averaged 74% and 75%, respectively, over all images, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
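A hedged sketch of the segmentation core: a global brightness split into darker/lighter images, HSV thresholding around the reported hue band (0.12-0.18, rescaled to OpenCV's 0-179 hue range), and a size cue. The saturation/value bounds and the split rule are illustrative assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

def detect_flowers(bgr, min_area=50):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Global brightness split into darker/lighter images (illustrative rule).
    s_lo = 60 if hsv[..., 2].mean() > 127 else 40
    # Hue band 0.12-0.18 rescaled to OpenCV's 0-179 hue range.
    lo = np.array([int(0.12 * 180), s_lo, 50], np.uint8)
    hi = np.array([int(0.18 * 180), 255, 255], np.uint8)
    mask = cv2.inRange(hsv, lo, hi)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in cnts if cv2.contourArea(c) >= min_area]  # size cue
```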

Jamun Juice Extraction Using Commercial Enzymes and Optimization of the Treatment with the Help of Physicochemical, Nutritional and Sensory Properties

Jamun (Syzygium cuminii L.) is an important indigenous minor fruit with high medicinal value. Jamun cultivation is unorganized, and there is a huge loss of this fruit every year. The perishable nature of the fruit makes its postharvest management further difficult, and the strong cell wall structure of pectin-protein bonds and the hard seeds make juice extraction difficult. Enzymatic treatment has been used commercially to improve juice quality and yield. The objective of the study was to determine the best treatment method for juice extraction. Enzymes (pectinase and tannase) from different strains were used, and for each enzyme the best result was obtained using response surface methodology. Optimization was carried out on the basis of physicochemical properties, nutritional properties, sensory quality, and cost estimation. Considering quality, cost analysis, and sensory evaluation, the optimal enzymatic treatment used pectinase from an Aspergillus aculeatus strain. The optimum conditions for the treatment were 44 °C for 80 minutes at a concentration of 0.05% (w/w). Under these conditions, a yield of 75% was obtained, with a turbidity of 32.21 NTU, clarity of 74.39 %T, polyphenol content of 115.31 mg GAE/g, and protein content of 102.43 mg/g, with a significant difference in overall acceptability.

Feature Vector Fusion for Image Based Human Age Estimation

Human faces, as important visual signals, convey a significant amount of nonverbal information used in human-to-human communication, and age is among the more significant of these properties. Automated human age estimation from facial image analysis has numerous potential real-world applications. In this paper, an automated age estimation framework is presented. A Support Vector Regression (SVR) strategy is used for age prediction. The paper describes feature extraction based on the Gray Level Co-occurrence Matrix (GLCM), which can also be used in a robust face recognition framework. The framework applies the GLCM operation to extract facial features from images and Active Appearance Models (AAMs) to estimate human age based on the image. A fused feature technique and SVR with GA optimization are proposed to reduce the error in age estimation.
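A hedged sketch of the GLCM feature step feeding an SVR age regressor; the AAM shape features and the GA-optimized fusion are omitted, and the training step in comments assumes hypothetical pre-aligned face crops with known ages.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVR

def glcm_features(gray):
    """Texture descriptors from an 8-level co-occurrence matrix."""
    g = (gray // 32).astype(np.uint8)        # quantize 0-255 to 8 levels
    M = graycomatrix(g, distances=[1], angles=[0, np.pi / 2],
                     levels=8, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(M, p).ravel() for p in props])

# Hypothetical training step on pre-aligned face crops with known ages:
# X = np.array([glcm_features(img) for img in face_images])
# model = SVR(kernel="rbf").fit(X, ages)
```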

Variogram Fitting Based on the Wilcoxon Norm

Within geostatistics research, effective estimation of variogram points has been examined, particularly in developing robust alternatives. The parametric fit of these variogram points, which ultimately defines the kriging weights, has however not received the same attention from a robust perspective. This paper proposes the use of the non-linear Wilcoxon norm over weighted non-linear least squares as a robust variogram fitting alternative. First, we introduce the concepts of variogram estimation and fitting. Then, as an alternative to non-linear weighted least squares, we discuss the non-linear Wilcoxon estimator. Next, the robustness properties of the non-linear Wilcoxon estimator are demonstrated using a contaminated spatial data set. Finally, under simulated conditions, spatial processes with increasing levels of contamination have their variogram points estimated and fitted. In fitting these variogram points, both non-linear weighted least squares and non-linear Wilcoxon fits are examined for efficiency. At all levels of contamination (including 0%), using a robust estimation and robust fitting procedure, the non-linear Wilcoxon outperforms weighted least squares.
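A hedged sketch of the fitting step: minimizing the Wilcoxon norm of the residuals (a rank-scored residual sum) between empirical variogram points and a parametric model, here an exponential variogram chosen for illustration; the empirical inputs and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def exp_variogram(h, nugget, sill, rng_):
    return nugget + sill * (1 - np.exp(-h / rng_))

def wilcoxon_norm(e):
    """||e||_W = sum a(R(e_i)) e_i, with a(i) = sqrt(12) * (i/(n+1) - 1/2)."""
    scores = np.sqrt(12) * (rankdata(e) / (len(e) + 1) - 0.5)
    return np.sum(scores * e)

def fit_variogram(h, gamma_hat, theta0=(0.1, 1.0, 1.0)):
    obj = lambda th: wilcoxon_norm(gamma_hat - exp_variogram(h, *th))
    return minimize(obj, theta0, method="Nelder-Mead").x

# h, gamma_hat = ...  # lag distances and empirical variogram estimates
# nugget, sill, range_ = fit_variogram(h, gamma_hat)
```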

Real Time Video Based Smoke Detection Using Double Optical Flow Estimation

In this paper, we present a video-based smoke detection algorithm built on TV-L1 optical flow estimation. The main part of the algorithm is an accumulation system for the motion angles and upward motion speed of the flow field. We optimize the use of TV-L1 flow estimation for detecting smoke with very low density: we use adapted flow parameters and estimate the flow field on difference images. We show in theory and in evaluation that this significantly improves the performance of smoke detection. We evaluate the algorithm on videos with different smoke densities and different backgrounds, and show that smoke detection is very reliable in varying scenarios. Furthermore, we verify that our algorithm is very robust on disturbance videos of crowded scenes.
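A hedged sketch of the core steps: TV-L1 optical flow computed on difference images, then accumulation of the upward motion component. The threshold and the exact accumulation rule are illustrative, not the authors' tuned settings.

```python
import cv2
import numpy as np

tvl1 = cv2.optflow.createOptFlow_DualTVL1()  # needs opencv-contrib-python

def upward_motion_score(f0, f1, f2, thresh=0.5):
    """f0..f2: consecutive uint8 grayscale frames."""
    d1, d2 = cv2.absdiff(f1, f0), cv2.absdiff(f2, f1)  # difference images
    flow = tvl1.calc(d1, d2, None)
    up = -flow[..., 1]                   # image y axis points down
    return float(up[up > thresh].sum())  # accumulate upward motion
```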

Human Action Recognition System Based on Silhouette

Human actions are recognized directly from video sequences. The objective of this work is to recognize various human actions such as running, jumping, and walking. Human action recognition requires some prior knowledge about the actions, namely motion estimation and foreground/background estimation. A region of interest (ROI) is extracted to identify the human in each frame. Then, an optical flow technique is used to extract the motion vectors. Using the extracted features, similarity-measure-based classification is performed to recognize the action. Experiments on the Weizmann database show that the proposed method offers high accuracy.
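A hedged sketch of such a pipeline: ROI extraction by background subtraction, Farneback optical flow as a stand-in for the paper's motion estimator, and nearest-neighbour matching as the similarity-based classifier; descriptor design and parameters are illustrative.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2()

def motion_descriptor(prev_gray, gray, bins=8):
    """Orientation histogram of flow vectors inside the silhouette ROI."""
    roi = bg.apply(gray) > 0                      # foreground mask
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ang = np.arctan2(flow[..., 1], flow[..., 0])[roi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), density=True)
    return hist

# Classification: assign the label of the training descriptor closest in
# Euclidean distance to the test descriptor (the similarity measure).
```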