Abstract: Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique for boosting image contrast and improving digital images that suffer from Gaussian blur. The technique sharpens object edges by adding the scaled high-frequency components of the image back to the original. The quality of the enhanced image depends strongly on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be uniform throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed for each pixel of the image from the local gradient variations. Subjective and objective image quality assessments are used to compare the proposed method with both the classic and recently developed unsharp masking methods. The experimental results show that the proposed method outperforms the other existing methods.
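The adaptive-gain idea can be sketched in a few lines of numpy; here the per-pixel gain is a simple affine function of normalized gradient magnitude, an illustrative rule standing in for the paper's exact formula:

```python
import numpy as np

def _gauss1d(sigma):
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian blur (zero-padded borders).
    k = _gauss1d(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def adaptive_unsharp(img, sigma=2.0, g_min=0.5, g_max=3.0):
    """Unsharp masking with a per-pixel gain driven by local gradient
    magnitude (a hypothetical gain rule, not the paper's own)."""
    img = img.astype(float)
    high = img - _blur(img, sigma)           # high-frequency residual
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    gain = g_min + (g_max - g_min) * grad / (grad.max() + 1e-12)
    return img + gain * high
```

On a step edge this produces the characteristic overshoot/undershoot that makes edges look sharper, while the gradient-driven gain concentrates the boost where edges actually are.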
Abstract: The motivation of our work is to detect different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gamma-tone frequency cepstral coefficients for feature extraction, and Gaussian mixture models and feed-forward neural networks for classification. We analyze the system's performance by comparing our proposed techniques with other features surveyed from related works. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
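A minimal numpy sketch of such a pipeline, with a crude real-cepstrum front end and a single diagonal Gaussian per class standing in for the MFCC/GFCC features and GMM of the abstract (all signal parameters here are illustrative):

```python
import numpy as np

def cepstral_features(signal, frame=256, n_coef=8):
    """Crude real-cepstrum features per frame -- a simplified stand-in
    for the MFCC/GFCC front end described in the abstract."""
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame) * np.hanning(frame)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    ceps = np.fft.irfft(np.log(spec + 1e-9), axis=1)
    return ceps[:, 1:n_coef + 1]

class GaussianClassifier:
    """One diagonal Gaussian per class (a degenerate 1-component GMM)."""
    def fit(self, feats_by_class):
        self.stats = {c: (f.mean(0), f.var(0) + 1e-6)
                      for c, f in feats_by_class.items()}
        return self

    def predict(self, feats):
        # Sum log-likelihood over all frames of a chunk, pick the best class.
        def loglik(mu, var):
            return -0.5 * (((feats - mu) ** 2 / var) + np.log(var)).sum()
        return max(self.stats, key=lambda c: loglik(*self.stats[c]))
```

Scoring a whole audio chunk (many frames) at once is what makes the decision robust, which is consistent with the abstract's finding that chunk size has only a mild effect.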
Abstract: The Gravity Recovery and Climate Experiment (GRACE) has been a very successful project in determining mass redistribution within the Earth system. Large deformations caused by earthquakes lie in the high-frequency band, but unfortunately GRACE can only provide reliable estimates of gravitational changes in the low-to-medium frequency band. In this study, we computed the gravity changes after the 2012 Mw8.6 Indian Ocean earthquake off Sumatra using the GRACE Level-2 monthly spherical harmonic (SH) solutions released by the University of Texas Center for Space Research (UTCSR). Moreover, we calculated gravity changes using different fault models derived from teleseismic data. The model predictions showed non-negligible discrepancies in gravity changes. However, after removing high-frequency signals using 350 km Gaussian filtering, commensurate with the GRACE spatial resolution, the discrepancies vanished: the spatial patterns of total gravity changes predicted from all slip models became similar at the spatial resolution attainable by GRACE observations, and the predicted gravity changes were consistent with the GRACE-detected ones. Nevertheless, the fault models, which give different slip amplitudes, proportionally lead to different amplitudes in the predicted gravity changes.
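Gaussian filtering of SH solutions is conventionally implemented as degree-dependent weights W_n computed with Jekeli's recursion; a minimal sketch, normalized so that W_0 = 1 (normalization conventions vary in the literature):

```python
import math

def gaussian_sh_weights(radius_km, n_max, a_km=6371.0):
    """Jekeli-style Gaussian averaging weights W_n for SH degree n,
    for a smoothing radius in km on a sphere of radius a_km."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / a_km))
    w = [1.0,
         (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for n in range(1, n_max):
        # Three-term recursion: W_{n+1} = -(2n+1)/b * W_n + W_{n-1}
        w.append(-(2 * n + 1) / b * w[n] + w[n - 1])
    return w
```

Each SH coefficient of degree n is multiplied by W_n; at a 350 km radius the weights fall smoothly with degree, suppressing exactly the high-frequency band that GRACE cannot resolve.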
Abstract: As 3D video has been a hot research topic over the last few decades, free-viewpoint TV (FTV) is undoubtedly a promising field thanks to its better visual experience and incomparable interactivity. View synthesis is a crucial technology for FTV; it enables rendering images at an unlimited number of virtual viewpoints from the information of a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of image sequences into consideration. The temporal correlations are exploited to produce fine synthesis results even near foreground boundaries. As for the blending priority, the proposed scheme selects one of the two reference views as the main reference view based on the distance between the reference views and the virtual view, while the other view serves as an auxiliary viewpoint that merely assists in filling hole pixels with the help of background information. Significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the experimental results show that subjective gains can be observed, objective PSNR gains average 0.5 to 1.3 dB, and SSIM gains average 0.01 to 0.05.
Abstract: In this paper, a Joint Source Channel (JSC) coding scheme based on LDPC codes is investigated. We consider two concatenated LDPC codes, one to compress a correlated source and the other to protect it against channel degradations. The original information can be reconstructed at the receiver by a joint decoder, in which the source decoder and the channel decoder run in parallel, exchanging extrinsic information. We investigate the performance of the JSC LDPC code in terms of Bit-Error Rate (BER) for transmission over an Additive White Gaussian Noise (AWGN) channel and for different source and channel rate parameters. We emphasize how the JSC LDPC code presents a performance tradeoff depending on the channel state and on the source correlation. We show that the JSC LDPC code is an efficient solution for relatively low Signal-to-Noise Ratio (SNR) channels, especially with highly correlated sources. Finally, a source-channel rate optimization has to be applied to guarantee the best JSC LDPC system performance for a given channel.
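As a point of reference for the BER-versus-SNR tradeoff discussed above, the following sketch simulates uncoded BPSK over an AWGN channel and compares against the closed-form Q-function error rate. This is only the uncoded baseline against which coded schemes such as JSC LDPC are measured, not the LDPC decoder itself:

```python
import math
import random

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_awgn_ber(ebn0_db, n_bits=200_000, seed=1):
    """Monte-Carlo BER of uncoded BPSK over an AWGN channel."""
    random.seed(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise std for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = random.getrandbits(1)
        rx = (1.0 if bit else -1.0) + random.gauss(0.0, sigma)
        errors += int((rx > 0.0) != bool(bit))
    return errors / n_bits
```

The theoretical BER is Q(sqrt(2 Eb/N0)), roughly 7.9e-2 at 0 dB; a capacity-approaching code shifts this curve left by several dB at the cost of rate, which is precisely the tradeoff the rate optimization above exploits.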
Abstract: This article presents a method of using a one-dimensional piezoelectric patch-on-beam model for structural identification. A hybrid element constituted of a one-dimensional beam element and a PZT sensor is used with reduced material properties. This model is convenient and simple for the identification of beams. The accuracy of this element is first verified against a corresponding 3D finite element model (FEM). The structural identification is carried out as an inverse problem, whereby parameters are identified by minimizing the deviation between the predicted and measured voltage responses of the patch when subjected to excitation. A non-classical optimization algorithm, Particle Swarm Optimization, is used to minimize this objective function. The signals are polluted with 5% Gaussian noise to simulate experimental noise. The proposed method is applied to a beam structure, and the identified parameters are stiffness and damping. The model is also validated experimentally.
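Particle Swarm Optimization itself is straightforward to sketch; the quadratic objective used below is a hypothetical stand-in for the voltage-deviation cost described in the abstract:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Plain global-best PSO over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull (own best) + social pull (swarm best).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())
```

In the identification setting, `f` would evaluate the deviation between measured and model-predicted patch voltages for a candidate stiffness/damping pair.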
Abstract: This paper presents a video watermarking algorithm based on a wavelet chaotic neural network. First, to enhance the binary image's security, the algorithm encrypts it with a double chaotic scheme based on the Arnold and Logistic maps. Then, the host video is divided into equal frames and the key frames are extracted through a chaotic sequence generated by the Logistic map. Meanwhile, the low-frequency coefficients of the luminance component are extracted, and the processed image watermark is self-adaptively embedded into the low-frequency coefficients of the wavelet-transformed luminance component with the wavelet neural network. The experimental results suggest that the presented algorithm has good invisibility and robustness against noise, Gaussian filtering, rotation, frame loss, and other attacks.
Abstract: In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of the Gaussian-Wishart emission model (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of hidden variables and model parameters for the proposed model from training data, and we then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process of extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more efficient at human action recognition than existing methods.
Abstract: Modelling realized volatility with high-frequency returns is popular as it is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that by including normalization as a pre-treatment procedure, the forecast performance outperforms that of the existing model in terms of statistical and economic evaluations.
Abstract: Technological innovations in the electronics world demand novel, compact, simply designed, inexpensive, and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, heat input, working fluid, and filling ratio. The present paper attempts to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are taken as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade-forward back propagation, feed-forward back propagation, feed-forward distributed time delay, layer recurrent, and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid, and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature in terms of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with a MARD within ±1.81%.
Abstract: Ship detection is nowadays quite an important issue in tasks related to sea traffic control, fishery management, and ship search and rescue. Although it has traditionally been carried out by patrol ships or aircraft, coverage, weather conditions, and sea state can become a problem. Synthetic aperture radars can surpass these coverage limitations and work under any climatological condition. A fast CFAR ship detector based on a robust statistical modeling of sea clutter with respect to sea states in SAR images is used. In this paper, the minimum SNR required to obtain a given detection probability with a given false alarm rate for any sea state is determined. A Gaussian target model using real SAR data is considered. Results show that the SNR does not depend heavily on the class considered. Provided there is some variation in the backscattering of targets in SAR imagery, the detection probability is limited, and a post-processing stage based on morphology would be suitable.
Abstract: This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step detects the moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), as well as a shadow removal algorithm using physics-based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, and vehicle tracking is done with Kalman filters. The last step of the proposed algorithm registers each passing vehicle and classifies it according to its area. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission through GPRS (General Packet Radio Service), eliminating the need for external cabling and facilitating deployment and relocation to any location where it may operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
Abstract: The wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods to model and forecast wind power. Gaussian Processes (GPs) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, space, or time and space. A GP is an underlying process, shaped by unobserved mechanisms, that can be used to solve a problem. The purpose of this paper is to present how to forecast wind power using GPs. The Gaussian process method for forecasting is presented. To validate the presented approach, a simulation in the MATLAB environment is given.
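The core of GP-based forecasting is the posterior mean/variance computation; a minimal numpy sketch with an RBF kernel and observation noise (the kernel choice and hyperparameters here are illustrative, not the paper's):

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, sf=1.0, noise=0.1):
    """GP regression posterior mean and latent variance (RBF kernel)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / length) ** 2)
    # Cholesky of the noisy training covariance.
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = k(x_test, x_train)
    mean = Ks @ alpha                       # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = sf**2 - (v**2).sum(axis=0)        # posterior latent variance
    return mean, var
```

For wind power, `x_train`/`y_train` would be past time stamps and power readings; a useful by-product is the predictive variance, which widens as the forecast horizon moves away from the data.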
Abstract: This paper explores the formation of HCl aerosol in atmospheric boundary layers and encourages the uptake of environmental modeling systems (EMSs) as a practical evaluation of gaseous emissions ("framework measures") from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions to ecological points beyond landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both the measurement data and the mathematical results, regarding the parameters that influence model variable inputs. The paper simplifies the parameters of aerosol processes based on more complex aerosol process computations. The simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation with exposure effects, with HCl concentrations as the starting point of the risk assessment. The discussion sets out distinct aspects of sustainability, reflecting inputs, outputs, and modes of impact on the environment. Thereby, the models incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. The latter environmental obligations suggest either a recommendation or a decision on what legislation should ultimately achieve for landfill gas (LFG) mitigation measures.
Abstract: This paper presents a technique for compact three-dimensional (3D) object model reconstruction using wavelet networks. It consists of transforming input surface vertices into signals and using wavelet network parameters for signal approximation. To this end, we use a wavelet network architecture founded on several mother wavelet families. POLYnomials WindOwed with Gaussians (POLYWOG) wavelet families are used to maximize the probability of selecting the best wavelets, which ensures good generalization of the network. To achieve a better reconstruction, the network is trained over several iterations to optimize the wavelet network parameters until the error criterion is small enough. Experimental results show that our proposed technique can effectively reconstruct irregular 3D object models when using the optimized wavelet network parameters. We show that an accurate reconstruction depends on the best choice of the mother wavelets.
Abstract: In this work, we present a Bayesian non-parametric
approach to model the motion control of ATVs. The motion control
model is based on a Dirichlet Process-Gaussian Process (DP-GP)
mixture model. The DP-GP mixture model provides a flexible
representation of patterns of control manoeuvres along trajectories
of different lengths and discretizations. The model also estimates the
number of patterns sufficient for modeling the dynamics of the ATV.
Abstract: Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space" where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
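The matching-pursuit step of the atomic decomposition can be sketched directly; the toy dictionary of windowed cosines below is a hypothetical stand-in for the learned atomic dictionary:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit over a unit-norm dictionary (columns).
    Returns the sparse (index, weight) picks and the final residual."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        i = int(np.abs(corr).argmax())       # best-matching atom
        w = float(corr[i])
        residual -= w * dictionary[:, i]     # peel it off the residual
        picks.append((i, w))
    return picks, residual

def toy_dictionary(n=128, n_freq=16):
    """Periodic-Hann-windowed cosines, unit-normalized columns."""
    t = np.arange(n)
    win = 0.5 - 0.5 * np.cos(2.0 * np.pi * t / n)   # periodic Hann
    atoms = []
    for f in range(1, n_freq + 1):
        a = win * np.cos(2.0 * np.pi * f * t / n)
        atoms.append(a / np.linalg.norm(a))
    return np.stack(atoms, axis=1)
```

The `picks` list is exactly the sparsely populated "weight space" described above: indices locate the signal in the T-F plane and weights carry its amplitude, while noise, which correlates weakly with every atom, is left in the residual.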
Abstract: Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about the background and foreground. However, pixel-level processing of local regions consumes a considerable amount of computational time and memory with traditional approaches. In our approach, we explore the concurrent computational ability of General Purpose Graphics Processing Units (GPGPUs) to address this problem. A Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernels are influenced by local regions and are updated by the inter-frame variations of the corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and the Quadro K2000. The results are encouraging, with a maximum speed-up of 10X compared to the sequential approach.
Abstract: We present in this work our model of road traffic emissions (line sources) and the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows generating emission inventories from a reduced set of input parameters, adapted to existing conditions in Morocco and in other developing countries. While several simplifications are made, the overall performance of the model is preserved. A further important advantage of the model is that it allows calculating the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented, and tested against a reference solution. It provides an improvement in accuracy over previous formulas of the line-source Gaussian plume model, without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the use of a combination of discretized-source and analytical line-source formulas remarkably minimizes the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
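The discretized-source strategy for a line source can be sketched as follows, with standard Briggs rural (class D) coefficients as illustrative sigma_y/sigma_z formulas; these are generic textbook choices, not DISPOLSPEM's own parameterization:

```python
import math

def point_plume(q, u, x, y, z=0.0, h=0.0):
    """Gaussian plume concentration from a point source of strength q
    (wind speed u along x; ground reflection included)."""
    if x <= 0.0:
        return 0.0
    sy = 0.08 * x / math.sqrt(1.0 + 0.0001 * x)   # Briggs rural, class D
    sz = 0.06 * x / math.sqrt(1.0 + 0.0015 * x)
    return (q / (2.0 * math.pi * u * sy * sz)
            * math.exp(-y**2 / (2.0 * sy**2))
            * (math.exp(-(z - h)**2 / (2.0 * sz**2))
               + math.exp(-(z + h)**2 / (2.0 * sz**2))))

def line_source(q_per_m, u, receptor_x, receptor_y,
                half_len=100.0, n_seg=200):
    """A road along the y-axis approximated as n_seg point sources --
    the discretized-source half of the combination described above."""
    dy = 2.0 * half_len / n_seg
    total = 0.0
    for i in range(n_seg):
        ys = -half_len + (i + 0.5) * dy
        total += point_plume(q_per_m * dy, u, receptor_x, receptor_y - ys)
    return total
```

The analytical line-source formula would replace this sum in the near-parallel-wind sectors where the discretization error is worst, which is exactly the combination the abstract describes.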
Abstract: Designing a controller for stochastic decentralized interconnected large-scale systems usually involves a high degree of complexity and computational effort. Noise, the observability and controllability of all system states, connectivity, and channel bandwidth are further constraints on design procedures for distributed large-scale systems. The quasi-steady-state model investigated in this paper is a reduced-order model of the original system obtained using singular perturbation techniques. This paper results in an optimal control synthesis to design an observer-based feedback controller by standard stochastic control theory techniques, using the Linear Quadratic Gaussian (LQG) approach and Kalman filter design, with less complexity and fewer computational requirements. A numerical example is given at the end to demonstrate the efficiency of the proposed method.
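For a scalar system the LQG design reduces to two Riccati fixed points, one giving the LQR gain and, by duality, one giving the steady-state (predictor-form) Kalman gain; a minimal sketch under those standard textbook equations:

```python
def dare_scalar(a, b, q, r, iters=500):
    """Fixed-point iteration of the scalar discrete-time algebraic
    Riccati equation; returns steady-state cost p and gain k (u = -k x)."""
    p = q
    for _ in range(iters):
        k = a * p * b / (r + b * b * p)
        p = q + a * a * p - a * p * b * k
    return p, a * p * b / (r + b * b * p)

def lqg_scalar(a, b, c, q, r, w, v):
    """Scalar LQG via the separation principle: LQR gain k from
    (a, b, q, r) and Kalman predictor gain l from the dual Riccati
    on (a, c, w, v); the controller applies u = -k * x_hat."""
    _, k = dare_scalar(a, b, q, r)
    _, l = dare_scalar(a, c, w, v)   # duality: same recursion, dual data
    return k, l
```

The separation principle is what keeps the complexity low: the control and estimation Riccati equations are solved independently, and the observer-based feedback simply composes the two gains.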