Abstract: Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, and spatial analysis. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter uses a weight function to allocate a weight to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
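The core idea of weighting points against a smooth fit can be sketched in a few lines. The snippet below is a simplified 1-D stand-in, not the paper's algorithm: a cubic polynomial fit replaces the smoothing spline, and the weight function, threshold `g`, and iteration count are illustrative assumptions.

```python
import numpy as np

def iterative_ground_fit(x, z, n_iter=5, g=0.2):
    """Fit a smooth surface, then downweight points far ABOVE the fit
    (likely vegetation returns) and refit. Simplified 1-D sketch."""
    w = np.ones_like(z)
    for _ in range(n_iter):
        coef = np.polyfit(x, z, deg=3, w=w)      # weighted least-squares fit
        resid = z - np.polyval(coef, x)
        # points more than g above the surface get near-zero weight
        w = np.where(resid > g, 1e-3, 1.0)
    return coef, w
```

After a few iterations, canopy returns carry negligible weight and the fit tracks the bare-earth points.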
Abstract: In this study, an automated building and tree detection method is proposed using DSM data and a true orthophoto. Multiscale matched filtering is applied to the DSM data. To this end, the watershed transform is first applied; then Otsu's thresholding method is used as an adaptive threshold to segment each watershed region. Detected objects are masked with NDVI to separate buildings from trees. The proposed method is able to detect buildings and trees without requiring any user-specified elevation threshold. We tested our method on the ISPRS semantic labeling dataset and obtained promising results.
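The final masking step, splitting DSM-detected objects into buildings and trees by NDVI, can be sketched as follows; the 0.3 NDVI cut-off and array names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def split_buildings_trees(object_mask, nir, red, ndvi_cut=0.3):
    """Split detected objects: high NDVI -> trees, low NDVI -> buildings."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # avoid division by zero
    trees = object_mask & (ndvi > ndvi_cut)
    buildings = object_mask & ~(ndvi > ndvi_cut)
    return buildings, trees
```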
Abstract: Character detection is an important issue for character recognition of ancient Yi books, as the accuracy of detection directly affects the recognition results. Considering the complex layout, the lack of standard typesetting, and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and then a modified local adaptive threshold binarization algorithm is used to obtain binary images that separate foreground from background. Second, non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our method. The experimental results show that the method can effectively separate text areas from non-text areas in ancient Yi books, achieves high accuracy and recall in character detection experiments, and effectively addresses the problem of character detection and segmentation in character recognition of ancient books.
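A basic local adaptive threshold (pixel darker than its neighbourhood mean minus an offset) can be implemented efficiently with an integral image. This is a generic simplified stand-in, not the paper's modified algorithm; window size and offset are assumed values.

```python
import numpy as np

def local_mean_threshold(img, win=15, offset=10):
    """Binarize: foreground where pixel < local mean - offset (win odd)."""
    pad = win // 2
    p = np.pad(img.astype(np.float64), pad + 1, mode='edge')
    ii = p.cumsum(0).cumsum(1)            # summed-area table
    h, w = img.shape
    # window sum at every pixel via four integral-image lookups
    s = (ii[win:win + h, win:win + w]
         - ii[:h, win:win + w]
         - ii[win:win + h, :w]
         + ii[:h, :w])
    mean = s / (win * win)
    return img < mean - offset
```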
Abstract: As part of the development of a 4D autopilot system for unmanned aerial vehicles (UAVs), i.e. a time-dependent robust trajectory generation and control algorithm, this work addresses the problem of optimal path control based on flight sensor data output that may be unreliable due to noise in data acquisition and/or transmission under certain circumstances. Although several filtering methods are available, such as the Kalman-Bucy filter or Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) control, the utter complexity of the control system, together with the robustness and reliability required of such a system on a UAV for airworthiness-certifiable autonomous flight, required the development of a proper robust filter for a nonlinear system, as a way of further mitigating error propagation to the control system and improving its performance. As such, a nonlinear algorithm based upon LQG/LTR, validated through computational simulation testing, is proposed in this paper.
Abstract: This paper presents a web application for the improvement of images through recognition. The web application is based on the analysis of picture-based recognition methods that allow an improvement of the physical appearance of people posting in social networks. The basis relies on the study of tools that can correct or improve some features of the face, with the help of a wide collection of user images taken as reference to build a facial profile. Automatic facial profiling can be achieved with a deeper study of the Object Detection Library. It was possible to improve the initial images with the help of MATLAB and its filtering functions. The user can interact directly with the program and manually adjust their preferences.
Abstract: In this paper, we demonstrate basic all-optical functions for 2R regeneration (re-amplification and re-shaping) based on self-similar spectral broadening in a low-normal-dispersion, highly nonlinear fiber (ND-HNLF) to regenerate the signal through optical filtering, including the transfer function characteristics and output extinction ratio. Our approach to all-optical 2R regeneration follows that of Mamyshev. The numerical study reveals that self-similar spectral broadening is very effective for all-optical 2R regeneration; the proposed design presents high stability compared to a conventional regenerator using SPM broadening, with reduction of the intensity fluctuations and improvement of the extinction ratio.
Abstract: This document describes an advanced system and methodology for Cross Traffic Alert (CTA), able to detect vehicles that move into the vehicle's driving path from the left or right side. The camera is assumed to be mounted not only on a stationary vehicle, e.g. at a traffic light or at an intersection, but also on a slowly moving one, e.g. in a car park. In all of the aforementioned conditions, a driver's short loss of concentration or distraction can easily lead to a serious accident. The proposed system represents valid support to avoid these kinds of car crashes. It is an extension of our previous work on a clustering system that only works with fixed cameras. Only a vanishing point calculation and simple optical flow filtering, to eliminate motion vectors due to the car's own movement, are performed to let the system achieve high performance across different scenarios, cameras, and resolutions. The proposed system uses as input only the optical flow, which is implemented in hardware on the proposed platform; and since the whole processing chain is fast and power-efficient, it is embedded directly in the camera framework, allowing all processing to execute in real time.
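One simple way to filter out ego-motion flow, sketched below under assumed names and a made-up angular threshold, is to discard vectors roughly radial from the vanishing point, since forward camera motion produces radial flow while crossing traffic produces flow transverse to it. This is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def filter_ego_motion(points, flows, vp, angle_thresh=0.3):
    """Keep flow vectors NOT aligned with the radial direction from vp."""
    keep = []
    for p, f in zip(points, flows):
        radial = p - vp
        nr, nf = np.linalg.norm(radial), np.linalg.norm(f)
        if nr < 1e-9 or nf < 1e-9:
            continue
        cosang = abs(radial @ f) / (nr * nf)   # |cos| of angle to radial
        if cosang < 1 - angle_thresh:          # transverse enough: keep
            keep.append((p, f))
    return keep
```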
Abstract: This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between ideal and predicted frequency responses is used as a cost function for the digital filtering sub-problem. In addition, MATLAB fmincon and MATLAB iirlnorm tools are used as optimal DCM and least pth norm solvers respectively. Furthermore, the virtual simulation scheme of an overall prototyping ODCM-based ADC system is implemented and well tested with the help of the Simulink tool according to a relevant set of design data, i.e., 3 kHz of modulating bandwidth, 172 kHz of maximum modulation frequency and 25 MHz of sampling frequency. Finally, the results obtained and presented show that the ODCM-based ADC achieves, under 3 kHz of modulating bandwidth: 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and 10 bits of minimum resolution. These performance levels appear to be a great challenge within the class of oversampling ADC topologies, with a 2nd-order IIR (infinite impulse response) decimation filter.
Abstract: System identification of an Unmanned Aerial Vehicle (UAV), to acquire its mathematical model, is a significant step in the process of aircraft flight automation. The need for a reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, preflight testing of prototype aircraft, investigation of fatigue life and stress distribution, etc. This research is aimed at system identification of a fixed-wing UAV by means of a specifically designed flight experiment. The purposely designed flight maneuvers were performed on the UAV and aircraft states were recorded during these flights. The acquired data were preprocessed for noise filtering and bias removal, followed by parameter estimation of longitudinal dynamics transfer functions using the MATLAB System Identification Toolbox. Black-box transfer function models, in response to elevator and throttle inputs, were estimated using the least-squares error technique. The identification results show a high confidence level and goodness of fit between the estimated model and the actual aircraft response.
Abstract: In recent years, object detection has gained much attention and become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not been well solved yet. Furthermore, this problem is also one of the major hurdles keeping object detection methods from practical applications. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Because of varying image-capture conditions, the algorithms need to consider a variety of possible environmental factors, as the colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variation in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach is applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
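The YCrCb colour model the approach relies on can be computed from RGB with the standard ITU-R BT.601 full-range conversion (the same constants OpenCV uses for 8-bit images); the function below is a generic sketch, not the authors' code.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an RGB array (..., 3) to YCrCb, BT.601 full range, 8-bit."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cr = (r - y) * 0.713 + 128.0            # red-difference chroma
    cb = (b - y) * 0.564 + 128.0            # blue-difference chroma
    return np.stack([y, cr, cb], axis=-1)
```

Separating luma from chroma this way is what lets shadow (mostly a luma change) be handled independently of colour information.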
Abstract: This paper investigates MIMO (Multiple-Input
Multiple-Output) adaptive filtering techniques for the application
of supervised source separation in the context of convolutive
mixtures. From the observation that there is correlation among the
signals of the different mixtures, an improvement in the NSAF
(Normalized Subband Adaptive Filter) algorithm is proposed in
order to accelerate its convergence rate. Simulation results with
mixtures of speech signals in reverberant environments show the
superior performance of the proposed algorithm with respect to the
performances of the NLMS (Normalized Least-Mean-Square) and
conventional NSAF, considering both the convergence speed and
SIR (Signal-to-Interference Ratio) after convergence.
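The NLMS baseline that the improved NSAF is compared against can be sketched in a few lines for a single-channel system identification setting; filter length and step size below are illustrative assumptions.

```python
import numpy as np

def nlms(x, d, M=4, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt M taps so that w * x approximates d."""
    w = np.zeros(M)
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]        # regressor, newest sample first
        e = d[n] - w @ u                    # a-priori error
        w += mu * e * u / (eps + u @ u)     # normalized update
    return w
```

With a white input and no noise, the taps converge to the unknown system's impulse response.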
Abstract: The heavy rainfall from 3rd to 22nd January 2017 swamped much of Ranot district in southern Thailand. Due to the heavy rainfall, the district was flooded, with substantial economic and social losses. The major objective of this study is to detect the flooding extent using Sentinel-1A data and identify the number of damaged buildings there. The data were collected in two stages: pre-flooding and during the flood event. Calibration, speckle filtering, geometric correction, and histogram thresholding were performed on the data, based on intensity spectral values, to classify thematic maps. The maps were used to identify the flooding extent using change detection, along with the buildings digitized and collected in JOSM desktop. The numbers of damaged buildings were counted within the flooding extent with respect to the building data. The total flooded area was observed to be 181.45 sq. km. These areas occurred mostly in Ban Khao, Ranot, Takhria, and Phang Yang sub-districts, respectively. The Ban Khao sub-district was affected more than the others because it lies at a lower altitude and closer to the Thale Noi and Thale Luang lakes. The numbers of damaged buildings were high in Khlong Daen (726 features), Tha Bon (645 features), and Ranot sub-district (604 features), respectively. The final flood extent map may be very useful for the planning, prevention, and management of flood-prone areas. The map of building damage can be used for quick response, recovery, and mitigation in the affected areas by the organizations concerned.
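The change-detection step, flagging pixels that are water during the flood but not before, reduces to a simple mask comparison once each scene has been thresholded. The sketch below assumes calibrated backscatter in dB and an illustrative water threshold; the actual threshold in the study comes from histogram thresholding.

```python
import numpy as np

def flood_extent(pre_db, during_db, water_thresh=-15.0):
    """Pixels that became water between the two acquisitions (sketch)."""
    pre_water = pre_db < water_thresh        # open water is dark in SAR
    during_water = during_db < water_thresh
    return during_water & ~pre_water         # newly flooded pixels only
```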
Abstract: This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares and kernel recursive least squares, for predicting new outputs in nonlinear signal processing. Both of these methods implement a nonlinear transfer function using kernel methods in a particular space named the reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions applied to transform the observed data from the input space to a high-dimensional feature space of vectors; this idea is known as the kernel trick. KAF thus amounts to developing adaptive filters in RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and to determine which of them is better suited.
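Kernel LMS is short enough to sketch directly: each training sample becomes a dictionary centre, and the prediction is a kernel expansion over the stored centres. Step size and kernel width below are illustrative assumptions; practical KLMS variants add sparsification to bound the dictionary.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel LMS sketch: dictionary of centres with LMS-style coefficients."""
    def __init__(self, mu=0.5, sigma=1.0):
        self.mu, self.sigma = mu, sigma
        self.centres, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for c, a in zip(self.centres, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)               # a-priori error
        self.centres.append(np.asarray(x, float))
        self.alphas.append(self.mu * e)       # new coefficient = mu * error
        return e
```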
Abstract: The convergence rate of the least-mean-square (LMS) algorithm deteriorates if the input signal to the filter is correlated. In a system identification problem, this convergence rate can be improved if the signal is white and/or if the system is sparse. We recently proposed a sparse transform-domain LMS-type algorithm that uses a variable step-size for sparse system identification. The proposed algorithm provides high performance even if the input signal is highly correlated. In this work, we investigate the performance of the proposed TD-LMS algorithm for a large number of filter taps, which is also a critical issue for the standard LMS algorithm. Additionally, the optimum value of the most important parameter is calculated for all experiments. Moreover, the convergence analysis of the proposed algorithm is provided. The performance of the proposed algorithm has been compared to that of different algorithms in sparse system identification settings with different sparsity levels and different numbers of filter taps. Simulations have shown that the proposed algorithm has prominent performance compared to the other algorithms.
Abstract: In this paper, we first present the mathematical modeling of a finite impulse response (FIR) filter and a cascaded integrator-comb (CIC) filter for sampling rate reduction, and then an extension of the canonical signed digit (CSD) based efficient structure is presented in a framework using hybrid signed digit (HSD) arithmetic. The CSD representation imposes the restriction that two non-zero CSD coefficient bits cannot occupy adjacent bit positions; the resulting structure is therefore not economical in terms of speed, area, and power consumption. The HSD-based structure gives optimum performance in terms of area and speed, with 37.02% passband droop compensation.
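The CSD property mentioned above (digits in {-1, 0, 1} with no two adjacent non-zero digits) can be illustrated with the standard encoding algorithm; this is a generic sketch for non-negative integers, not the paper's hardware structure.

```python
def to_csd(n):
    """Canonical signed-digit encoding of n >= 0, LSB first.
    Digits are in {-1, 0, 1}; no two adjacent digits are both non-zero."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d            # after this step n is divisible by 4
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits
```

For example, 7 encodes as 8 - 1 rather than 4 + 2 + 1, replacing three adders with one subtractor in a multiplier structure.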
Abstract: The Gravity Recovery and Climate Experiment (GRACE) has been a very successful project in determining mass redistribution within the Earth system. Large deformations caused by earthquakes lie in the high-frequency band. Unfortunately, GRACE is only capable of providing reliable estimates of gravitational changes in the low-to-medium frequency band. In this study, we computed the gravity changes after the 2012 Mw 8.6 Indian Ocean earthquake off Sumatra using the GRACE Level-2 monthly spherical harmonic (SH) solutions released by the University of Texas Center for Space Research (UTCSR). Moreover, we calculated gravity changes using different fault models derived from teleseismic data. The model predictions showed non-negligible discrepancies in gravity changes. However, after removing high-frequency signals using 350 km Gaussian filtering, commensurate with the GRACE spatial resolution, the discrepancies vanished; the spatial patterns of total gravity changes predicted from all slip models became similar at the spatial resolution attainable by GRACE observations, and the predicted gravity changes were consistent with the GRACE-detected gravity changes. Nevertheless, fault models that give different slip amplitudes lead proportionally to different amplitudes in the predicted gravity changes.
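Gaussian filtering of SH solutions amounts to multiplying each degree-l coefficient by a degree-dependent weight. A sketch of the Jekeli-style recursion for those weights follows, normalized here so that W_0 = 1 (one common convention); treat the exact normalization as an assumption of this sketch.

```python
import numpy as np

def gaussian_weights(radius_km=350.0, a_km=6371.0, lmax=60):
    """Gaussian averaging weights per SH degree (Jekeli-style recursion)."""
    b = np.log(2.0) / (1.0 - np.cos(radius_km / a_km))
    w = np.empty(lmax + 1)
    w[0] = 1.0
    w[1] = (1 + np.exp(-2 * b)) / (1 - np.exp(-2 * b)) - 1.0 / b
    for l in range(1, lmax):
        # three-term recursion for successive degrees
        w[l + 1] = -(2 * l + 1) / b * w[l] + w[l - 1]
    return w
```

Smoothing a field then means scaling each degree-l coefficient pair (C_lm, S_lm) by w[l] before synthesis.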
Abstract: An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients, and need to find the input graph signal. Solving this problem with the direct approach requires O(N^3) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method for partitioning of graphs and use it to reduce the computational cost of the filtering problem. We use the example of denoising temperature data to illustrate the efficacy of the proposed method.
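The inverse filtering problem can be made concrete on a toy graph. The adjacency matrix (a 4-cycle) and filter taps below are assumed for illustration; the dense solve is the O(N^3) direct approach that the partitioning method is designed to improve on.

```python
import numpy as np

# Polynomial graph filter H = h0*I + h1*A + h2*A^2 with A as shift operator.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # assumed toy adjacency (4-cycle)
h = [1.0, 0.4, 0.1]                          # assumed filter taps
H = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(h))

x = np.array([1.0, -2.0, 3.0, 0.5])          # ground-truth input signal
y = H @ x                                    # known filtered output
x_rec = np.linalg.solve(H, y)                # direct O(N^3) recovery of x
```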
Abstract: Battery state of charge (SOC) is an important parameter, as it measures the total amount of electrical energy stored at the current time. The SOC percentage acts like a fuel gauge when compared with a conventional vehicle. Estimating the SOC is, therefore, essential for monitoring the amount of useful life remaining in the battery system. This paper looks at the implementation of three nonlinear estimation strategies for Li-ion battery SOC estimation. One of the most common behavioral battery models is the one-state hysteresis (OSH) model. The extended Kalman filter (EKF), the smooth variable structure filter (SVSF), and the time-varying smoothing boundary layer SVSF are applied to this model, and the results are compared.
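A minimal Kalman-style SOC estimator can be sketched with coulomb counting as the prediction step and a voltage measurement as the correction step. The linear OCV curve, noise levels, and capacity below are toy assumptions; the OSH model used in the paper adds hysteresis and is genuinely nonlinear, which is why the EKF and SVSF are needed.

```python
def soc_ekf(currents, voltages, dt=1.0, cap=3600.0,
            a=1.0, b=3.0, q=1e-7, r=1e-3):
    """Scalar Kalman filter for SOC, assuming toy OCV curve v = a*soc + b."""
    soc, p = 0.5, 1.0                       # initial state and covariance
    out = []
    for i, v in zip(currents, voltages):
        soc -= i * dt / cap                 # predict: coulomb counting
        p += q
        hj = a                              # measurement Jacobian dv/dsoc
        k = p * hj / (hj * p * hj + r)      # Kalman gain
        soc += k * (v - (a * soc + b))      # correct with voltage reading
        p *= (1 - k * hj)
        out.append(soc)
    return out
```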
Abstract: Supersonic nozzles are commonly used to purify natural gas in gas processing technology. As an innovative technology, the supersonic separator is employed to overcome the deficits of traditional methods, drawing on gas dynamics, thermodynamics, and fluid dynamics theory. An indoor test rig was built to study the dehumidification process of a moist fluid; humid air was chosen for the study. The working fluid circulated in an open loop, which had provision for filtering, metering, and humidifying. A stainless steel supersonic separator was constructed together with the C-D nozzle system. The results show that dehumidification is enhanced as the NPR increases. This is due to the high turbulence intensity caused by the shock formation in the divergent section. Such disturbance strengthens the centrifugal force, pushing more particles toward the near-wall region. In turn, the pressure recovery factor, defined as the ratio of the outlet static pressure of the fluid to its inlet value, decreases with NPR.
Abstract: In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments, providing relevant recommendations to online learners. E-learning recommenders continue to play an increasing educational role in aiding learners to find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level, and study level. Conventional recommendation techniques, such as collaborative filtering and content-based filtering, deal with only two types of entities, namely users and items with their ratings, and do not take learner characteristics into account in their recommendation process. Therefore, conventional recommendation techniques cannot make accurate and personalized recommendations in e-learning environments. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system at the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that our proposed recommendation technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
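The collaborative filtering component, predicting a missing rating from the most similar users' ratings, can be sketched as follows. This is plain user-based CF with cosine similarity over co-rated items (0 meaning unrated); the ontology-based weighting of learner characteristics described in the paper is not included.

```python
import numpy as np

def predict_rating(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated item."""
    rated = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]

    def sim(u, v):
        a, b = R[u], R[v]
        mask = (a > 0) & (b > 0)              # items both users rated
        if not mask.any():
            return 0.0
        return float(a[mask] @ b[mask] /
                     (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-12))

    neigh = sorted(rated, key=lambda u: -sim(user, u))[:k]
    w = np.array([sim(user, u) for u in neigh])
    r = np.array([R[u, item] for u in neigh])
    return float(w @ r / (w.sum() + 1e-12))   # similarity-weighted average
```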