Abstract: Reconstruction from sparse-view projections is one
of the most important problems in computed tomography (CT), limited
by the availability or feasibility of obtaining a large number of
projections. Traditionally, convex regularizers have been exploited
to improve the reconstruction quality in sparse-view CT, and the
convex constraint in those problems leads to an easy optimization
process. However, convex regularizers often result in a biased
approximation and inaccurate reconstruction in CT problems. Here,
we present a nonconvex, Lipschitz continuous and non-smooth
regularization model. The CT reconstruction is formulated as a
nonconvex constrained L1 − L2 minimization problem and solved
through a difference-of-convex algorithm and the alternating
direction method of multipliers (ADMM), which generates better
results than L0 or L1 regularizers in CT reconstruction. We compare our method with
previously reported high-performance methods that use convex
regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV)
on the test phantom images. The results show that there are benefits in
using the nonconvex regularizer in the sparse-view CT reconstruction.
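The L1 − L2 formulation can be illustrated with a toy difference-of-convex (DCA) iteration. This is a minimal sketch, not the paper's implementation: the problem sizes, the regularization weight, and the inner ISTA solver (standing in for the ADMM solver) are all assumptions.

```python
import numpy as np

def soft(v, t):
    # soft-thresholding operator, the prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_l1_minus_l2(A, b, lam=0.05, outer=15, inner=300):
    """Minimize 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2) by DCA:
    each outer step linearizes the concave -lam*||x||_2 term and
    solves the remaining convex L1 subproblem with ISTA."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    for _ in range(outer):
        nx = np.linalg.norm(x)
        v = lam * x / nx if nx > 0 else np.zeros(n)   # subgradient of lam*||x||_2
        for _ in range(inner):               # ISTA on 0.5||Ax-b||^2 - v.x + lam||x||_1
            g = A.T @ (A @ x - b) - v
            x = soft(x - g / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
x_true = np.zeros(64); x_true[[5, 20, 41]] = [2.0, -1.5, 1.0]
b = A @ x_true                               # toy sparse-view-style measurements
x = dca_l1_minus_l2(A, b)
obj = lambda z: 0.5*np.sum((A@z-b)**2) + 0.05*(np.abs(z).sum() - np.linalg.norm(z))
```

Each DCA step replaces −λ‖x‖2 by its linearization at the current iterate, so every subproblem is convex even though the overall objective is not.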
Abstract: The noise requirements for naval and research vessels
have led to an increasing demand for quieter ships, in order to fulfil
current regulations and to reduce the effects on marine life. Hence,
new methods dedicated to the characterization of propeller noise,
which is the main source of noise in the far-field, are needed. The
study of cavitating propellers in a closed test section is valuable
for analyzing hydrodynamic performance, but it can involve significant
difficulties for hydroacoustic study, especially due to reverberation
and boundary layer noise in the tunnel. The aim of this paper
is to present a numerical methodology for the identification of
hydroacoustic sources on marine propellers using hydrophone arrays
in a large hydrodynamic tunnel. The main difficulties are linked to the
reverberation of the tunnel and the boundary layer noise that strongly
reduce the signal-to-noise ratio. In this paper it is proposed to estimate
the reflection coefficients using an inverse method and some reference
transfer functions measured in the tunnel. This approach makes it possible to
reduce the uncertainties of the propagation model used in the inverse
problem. In order to reduce the boundary layer noise, a cleaning
algorithm taking advantage of the low rank and sparse structure of the
cross-spectrum matrices of the acoustic and the boundary layer noise
is presented. This approach makes it possible to recover the acoustic
signal even when it lies well below the boundary layer noise. The improvement brought by
this method is visible on acoustic maps resulting from beamforming
and DAMAS algorithms.
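The low-rank-plus-sparse cleaning idea can be sketched with a simple alternating scheme. This is an illustrative sketch under assumed sizes and thresholds, not the authors' algorithm: the low-rank part is obtained by hard SVD truncation and the sparse part by soft-thresholding.

```python
import numpy as np

def lowrank_sparse_split(M, rank=1, lam=0.5, iters=50):
    """Split M into L (low-rank, e.g. the coherent acoustic part of a
    cross-spectral matrix) plus S (sparse contamination, e.g. boundary
    layer noise) by alternating an SVD truncation for L and a
    soft-threshold for S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # keep `rank` singular values
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0) # sparse residual
    return L, S

rng = np.random.default_rng(1)
u = rng.standard_normal(30); v = rng.standard_normal(30)
L0 = np.outer(u, v)                      # rank-1 "acoustic" component
S0 = np.zeros((30, 30))
S0.flat[rng.choice(900, 20, replace=False)] = 10.0   # a few strong outliers
M = L0 + S0
L, S = lowrank_sparse_split(M, rank=1, lam=0.5)
```

After the final soft-threshold, every entry of the unexplained residual M − L − S is at most λ in magnitude, which is what lets the outlier energy collect in S.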
Abstract: In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes, or atoms, to avoid the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracting features, to study the set of parameters that yield the most descriptive dictionary. The approach inherently learns the complicated morphological changes in the ECG waveform, which are then used to improve the classification. The classification performance was evaluated with ECG data under two different preprocessing environments. In the first category, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power-line noise. In the second category, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that our proposed algorithm achieves a classification accuracy of 92%.
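A data-driven dictionary can be trained with a minimal MOD-style alternation (a sketch only; the atom count, the 1-sparse coding rule, and the synthetic data are assumptions, not the paper's setup):

```python
import numpy as np

def train_dictionary(X, n_atoms=4, iters=10, seed=0):
    """Toy data-driven dictionary learning (MOD-style sketch): alternate
    optimal 1-sparse coding with a least-squares dictionary update.
    X is (dim, n_samples)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    errs = []
    for _ in range(iters):
        # optimal 1-sparse coding: per sample, the atom/coefficient pair
        # minimizing the residual norm
        norms2 = np.sum(D * D, axis=0) + 1e-12
        corr = D.T @ X                                    # (n_atoms, n_samples)
        k = np.argmax(corr ** 2 / norms2[:, None], axis=0)
        cols = np.arange(X.shape[1])
        C = np.zeros((n_atoms, X.shape[1]))
        C[k, cols] = corr[k, cols] / norms2[k]
        errs.append(np.linalg.norm(X - D @ C))
        # MOD update: D = argmin_D ||X - D C||_F for the fixed codes
        D = X @ np.linalg.pinv(C)
    return D, errs

rng = np.random.default_rng(2)
atoms = rng.standard_normal((16, 3))
atoms /= np.linalg.norm(atoms, axis=0)
labels = rng.integers(0, 3, 200)
codes = rng.standard_normal(200) * 2
X = atoms[:, labels] * codes + 0.05 * rng.standard_normal((16, 200))
D, errs = train_dictionary(X, n_atoms=4)
```

Because both the coding step and the dictionary update are exact minimizations given the other variable, the representation error is non-increasing across iterations.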
Abstract: Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as to the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
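The matching-pursuit atomic decomposition step can be sketched as follows. For clarity the dictionary here is an assumed orthonormal toy dictionary, not the learned atomic dictionary of the paper:

```python
import numpy as np

def matching_pursuit(signal, D, n_atoms=2):
    """Greedy matching pursuit: repeatedly pick the dictionary atom
    (column of D, unit norm) best correlated with the residual and
    record its index and amplitude weight -- a sparse weight-space
    vector in the spirit of the T-F vector above."""
    residual = signal.copy()
    weights = {}
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        weights[k] = weights.get(k, 0.0) + corr[k]
        residual = residual - corr[k] * D[:, k]
    return weights, residual

rng = np.random.default_rng(3)
D, _ = np.linalg.qr(rng.standard_normal((32, 32)))  # orthonormal toy dictionary
signal = 3.0 * D[:, 7] - 2.0 * D[:, 19]             # two-atom "speech" signal
weights, residual = matching_pursuit(signal, D, n_atoms=2)
```

With an orthonormal dictionary the two true atoms and their weights are recovered exactly; with redundant dictionaries the same greedy loop still drives the residual norm down at every step.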
Abstract: In recent decades, rapid and incorrect changes in land-use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land-use is one of the tools for natural resource management and for assessing changes in ecosystems. The aim of this research is to study the land-use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. Therefore, the quality of the images was first evaluated. Various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. Then the images of each period were classified into 9 classes using a supervised classification method and the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results showed that these changes are an alarm for the future status of the Haraz basin, because 27% of the area has changed, corresponding to the conversion of rangelands to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland and residential area.
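The supervised maximum likelihood classification step can be sketched for two classes and two spectral bands (the study itself uses 9 classes on LANDSAT bands; the class statistics below are invented for illustration):

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Gaussian maximum likelihood classification: assign each pixel
    to the class with the highest log-likelihood under a multivariate
    normal fitted from its training site."""
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        d = pixels - mu
        logdet = np.log(np.linalg.det(cov))
        # -0.5 * (Mahalanobis distance + log|cov|), per pixel
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.argmax(np.array(scores), axis=0)

rng = np.random.default_rng(4)
# toy "training sites": two well-separated spectral classes
forest = rng.multivariate_normal([30.0, 80.0], [[4, 1], [1, 4]], 200)
bare   = rng.multivariate_normal([90.0, 40.0], [[9, 0], [0, 9]], 200)
means = [forest.mean(axis=0), bare.mean(axis=0)]
covs  = [np.cov(forest.T), np.cov(bare.T)]
test_pixels = np.vstack([forest[:50], bare[:50]])
pred = ml_classify(test_pixels, means, covs)
```

Each class is summarized by the mean vector and covariance matrix of its training site, exactly the per-class statistics the maximum likelihood rule needs.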
Abstract: In designing low-energy-consuming buildings, the heat transfer through large glazed areas or walls becomes critical. Multiple layers of window glass and wall material are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural heat convection phenomenon that is key to the heat transfer. As a first step towards the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, a multi-grid method, and dual time stepping in the future. A finite volume method based on a fully implicit second-order scheme is used to discretize and solve the fluid flow on unstructured grids composed of arbitrarily shaped cells. The integrals of the governing equations are discretized in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
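The sparse-solver acceleration can be illustrated with SciPy's BiCGSTAB on a toy 2-D Poisson system (a stand-in for a pressure-correction solve in SIMPLE; the grid size and right-hand side are assumptions):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# 2-D Poisson (5-point Laplacian) on a 10x10 grid: a simple sparse
# system of the kind that arises in a segregated pressure solve
n = 10
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()   # 100x100, sparse
b = np.ones(n * n)

x, info = bicgstab(A, b)                      # info == 0 on convergence
rel_res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

A Krylov solver such as BiCGSTAB only needs sparse matrix-vector products, which is why it pairs naturally with the unstructured-grid finite volume discretization.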
Abstract: Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, the scattering coefficient, and the optical flux are processed by a standard regularization technique called Levenberg–Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to assess the performance.
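The Split Bregman idea can be sketched on a 1-D total-variation denoising problem (illustrative only; the paper applies it to the DOT inverse model, and the signal and parameters below are assumptions):

```python
import numpy as np

def split_bregman_tv1d(f, lam=1.0, mu=5.0, iters=100):
    """Solve min_u 0.5||u-f||^2 + lam*||D u||_1 (anisotropic 1-D TV)
    with Split Bregman: auxiliary variable d = D u and Bregman
    variable b decouple the l1 term from the quadratic solve."""
    n = len(f)
    D = np.eye(n, k=1) - np.eye(n)     # forward difference
    D[-1] = 0.0
    d = np.zeros(n)
    b = np.zeros(n)
    M = np.eye(n) + mu * D.T @ D       # normal matrix of the u-subproblem
    for _ in range(iters):
        u = np.linalg.solve(M, f + mu * D.T @ (d - b))      # quadratic solve
        Du = D @ u
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - lam / mu, 0.0)  # shrink
        b = b + Du - d                                      # Bregman update
    return u

rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(40), 4.0 * np.ones(40), np.ones(40)])
noisy = clean + 0.5 * rng.standard_normal(120)
denoised = split_bregman_tv1d(noisy)
```

The splitting replaces one hard nonsmooth problem with an easy linear solve plus a closed-form shrinkage, which is what makes the method fast in imaging problems.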
Abstract: This paper presents a subband adaptive filter (SAF)
for system identification where the impulse response is sparse
and disturbed by impulsive noise. Benefiting from the use
of l1-norm optimization and an l0-norm penalty on the weight vector
in the cost function, the proposed l0-norm sign SAF (l0-SSAF)
achieves both robustness against impulsive noise and much better
convergence behavior than the classical adaptive filters. Simulation
results in the system identification scenario confirm that the proposed
l0-norm SSAF is not only more robust but also faster and more
accurate than its counterparts in sparse system identification in
the presence of impulsive noise.
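A simplified, fullband sketch of the l0-attracting sign update is given below (the paper's filter is subband-based; the tap count, step sizes, and the exponential l0 approximation are assumptions):

```python
import numpy as np

def l0_sign_lms(x, d, taps=16, mu=0.005, rho=2e-4, beta=5.0):
    """Fullband sketch of an l0-norm sign adaptive filter: the
    sign(error) update is robust to impulsive noise, and the
    l0-attracting term beta*exp(-beta|w|)*sign(w) pushes small
    taps toward zero, exploiting sparsity."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # tap-input vector
        e = d[n] - w @ u
        attract = beta * np.exp(-beta * np.abs(w)) * np.sign(w)
        w = w + mu * np.sign(e) * u - rho * attract
    return w

rng = np.random.default_rng(6)
w_true = np.zeros(16); w_true[[2, 9]] = [1.0, -0.7]    # sparse unknown system
x = rng.standard_normal(20000)
noise = 0.01 * rng.standard_normal(20000)
noise[rng.choice(20000, 20, replace=False)] += 50.0    # impulsive bursts
d = np.convolve(x, w_true)[:20000] + noise
w = l0_sign_lms(x, d)
```

Because only sign(e) enters the update, a 50-sigma impulse perturbs the weights no more than an ordinary sample does, which is the source of the robustness claimed above.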
Abstract: This work presents a new type of affine projection
(AP) algorithm that incorporates the sparsity condition of a
system. To exploit the sparsity of the system, a weighted l1-norm
regularization is imposed on the cost function of the AP algorithm.
By minimizing the cost function with subgradient calculus and
choosing two distinct weightings for the l1-norm, two stochastic gradient
based sparsity regularized AP (SR-AP) algorithms are developed.
Experimental results exhibit that the SR-AP algorithms outperform
the typical AP counterparts for identifying sparse systems.
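The sparsity-regularized AP update can be sketched as a standard affine projection step followed by a reweighted-l1 zero attractor (a toy sketch with assumed parameters, not the paper's exact algorithm):

```python
import numpy as np

def sr_ap(x, d, taps=16, order=2, mu=0.2, rho=1e-4, eps=0.01, delta=1e-3):
    """Sketch of a sparsity-regularized affine projection update:
    standard AP step over the last `order` input vectors, plus a
    reweighted-l1 subgradient term sign(w)/(eps+|w|) that attracts
    small coefficients to zero while barely touching large ones."""
    w = np.zeros(taps)
    for n in range(taps + order - 1, len(x)):
        # data matrix of the last `order` tap-input vectors (taps x order)
        U = np.column_stack([x[n - k - taps + 1:n - k + 1][::-1]
                             for k in range(order)])
        e = d[n - order + 1:n + 1][::-1] - U.T @ w
        w = w + mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(order), e)
        w = w - rho * np.sign(w) / (eps + np.abs(w))   # reweighted-l1 attractor
    return w

rng = np.random.default_rng(7)
w_true = np.zeros(16); w_true[[3, 12]] = [0.8, -0.5]   # sparse unknown system
x = rng.standard_normal(8000)
d = np.convolve(x, w_true)[:8000] + 0.01 * rng.standard_normal(8000)
w = sr_ap(x, d)
```

The 1/(eps+|w|) weighting is one concrete choice of the "distinct weightings" idea: near-zero taps feel a strong pull to zero, active taps are almost unbiased.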
Abstract: The myoelectric control system is the fundamental
component of modern prostheses; it uses the myoelectric signals
from an individual’s muscles to control the prosthesis movements.
The surface electromyogram (sEMG) signal, being noninvasive, has
been used as an input to prosthesis controllers for many years.
Recent technological advances have led to the development of
implantable myoelectric sensors which enable the internal
myoelectric signal (MES) to be used as input to these prostheses
controllers. The intramuscular measurement can provide focal
recordings from deep muscles of the forearm and independent signals
relatively free of crosstalk thus allowing for more independent
control sites. However, little work has been done to compare the two
inputs. In this paper we have compared the classification accuracy of
six pattern recognition based myoelectric controllers which use
surface myoelectric signals recorded using untargeted (symmetric)
surface electrode arrays to the same controllers with multichannel
intramuscular myoelectric signals from targeted intramuscular
electrodes as inputs. There was no significant enhancement in the
classification accuracy as a result of using the intramuscular EMG
measurement technique when compared to the results acquired using
the surface EMG measurement technique. Impressive classification
accuracy (99%) could be achieved by optimally selecting only five
channels of surface EMG.
Abstract: Fabric textures are very common in our daily life.
However, the representation of fabric textures has never been explored
from a neuroscience perspective. Theoretical studies suggest that primary
visual cortex (V1) uses a sparse code to efficiently represent natural
images. However, how the simple cells in V1 encode the artificial
textures is still a mystery. Here, we take fabric textures as
stimuli to study the responses of an independent component analysis
model that is established to describe the receptive fields of simple
cells in V1. We choose 140 types of fabric to obtain classical fabric
textures as materials. Experimental results indicate that the receptive fields of
simple cells have obvious selectivity in orientation, frequency and
phase when drifting gratings are used to determine their tuning
properties. Additionally, the distribution of optimal orientation and
frequency shows that the patch size selected from each original fabric
image has a significant effect on the frequency selectivity.
Abstract: The margin-based principle was proposed long ago,
and it has been proved that this principle can reduce the
structural risk and improve performance in both theoretical
and practical aspects. Meanwhile, the feed-forward neural network is
a traditional classifier that is currently very popular in its deeper
architectures. However, the training algorithm of the feed-forward neural
network is derived from the Widrow-Hoff principle, i.e.,
minimization of the squared error. In this paper, we propose
a new training algorithm for feed-forward neural networks based
on the margin-based principle, which can effectively improve the
accuracy and generalization ability of neural network classifiers
with fewer labelled samples and a flexible network structure. We have
conducted experiments on four UCI open datasets and achieved good
results as expected. In conclusion, our model can handle more sparsely
labelled and higher-dimensional datasets with high accuracy, while
migrating from the old ANN method to our method is easy and requires
almost no extra work.
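The contrast between Widrow-Hoff (squared-error) training and margin-based training can be sketched with a single linear unit trained on the hinge loss (a one-neuron illustration with invented data, not the paper's network):

```python
import numpy as np

def train_margin(X, y, lr=0.01, epochs=200, C=1.0):
    """Margin-based training of one linear unit: minimize the hinge
    loss max(0, 1 - y*(w.x + b)) plus a small l2 term, instead of the
    Widrow-Hoff squared error. Labels y are in {-1, +1}."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) < 1:          # inside the margin: hinge gradient
                w += lr * (C * yi * xi - 0.01 * w)
                b += lr * C * yi
            else:                              # outside: only weight decay
                w -= lr * 0.01 * w
    return w, b

rng = np.random.default_rng(8)
pos = rng.standard_normal((50, 2)) + [3.0, 3.0]
neg = rng.standard_normal((50, 2)) - [3.0, 3.0]
X = np.vstack([pos, neg]); y = np.array([1]*50 + [-1]*50)
w, b = train_margin(X, y)
acc = (np.sign(X @ w + b) == y).mean()
```

Unlike the squared error, the hinge loss stops penalizing points once they are correctly classified beyond the margin, which is the structural-risk idea the abstract appeals to.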
Abstract: In recent years, the hair building fiber has become
popular as an effective aid for people who suffer from hair loss or
sparse hair, since the hair building fiber is capable of rapidly
creating a natural look of simulated hair. On the market, there are
many hair fiber brands that have been designed to
formulate an intense bond with hair strands and make the hair appear
more voluminous instantly. However, those products have their own
set of properties. Thus, in this report, some measurement techniques
are proposed to identify those products. Up to five different brands of
hair fiber are tested. The electrostatic and dielectric properties of the
hair fibers are macroscopically tested using designed DC and
high-frequency microwave techniques. In addition, the hair fibers are
microscopically analysed by magnifying the structures of the fiber
using a scanning electron microscope (SEM). From the SEM photos,
the uniformity of shape and the breakage rate of the hair fibers in
the different bulk samples can be compared.
Abstract: Community structures widely exist in almost all real-life networks. Extensive research has been carried out on detecting community structures in complex networks. However, many aspects of how community structures may affect the dynamics and properties of complex networks still remain unclear. In this work, we examine the impacts of community structures on epidemic spreading and detection in complex networks. Extensive simulation results show that community structures may not help decrease the infection size at steady state, yet they could indeed help slow down the infection spreading. Also, networks with strong community structures can be expected to have a smaller average infection size when equipped with a number of sparsely deployed monitors.
Abstract: In this paper, to obtain a high-efficiency parallel algorithm for solving sparse block pentadiagonal linear systems suitable for vector and parallel processors, stair matrices are used to construct some parallel polynomial approximate inverse preconditioners. These preconditioners are appropriate when the desired target is to maximize parallelism. Moreover, some theoretical results about these preconditioners are presented, and it is also described how to construct preconditioners effectively for any nonsingular block pentadiagonal H-matrix. In addition, the effectiveness of these preconditioners is illustrated with some numerical experiments arising from the two-dimensional biharmonic equation.
Abstract: Discrete wavelet transform (DWT) has been widely adopted in biomedical signal processing for denoising, compression
and so on. Choosing a suitable decomposition level (DL) in DWT is of paramount importance to its performance. In this paper, we propose to exploit the sparseness of the transformed signals to determine the appropriate DL. Simulation results have shown that the sparseness of transformed signals after DWT increases with the DL. Additional Monte-Carlo simulation results have verified the effectiveness of the sparseness measure in determining the DL.
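The sparseness-versus-DL behavior can be reproduced with a plain Haar DWT and a simple sparseness measure (a toy sketch; the signal and the zero-fraction measure are assumptions, not the paper's choices):

```python
import numpy as np

def haar_dwt(signal, level):
    """Plain Haar DWT: returns the concatenated coefficient vector
    [approx_L, detail_L, ..., detail_1] after `level` decompositions."""
    a = signal.astype(float)
    details = []
    for _ in range(level):
        avg = (a[0::2] + a[1::2]) / np.sqrt(2)
        diff = (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(diff)
        a = avg
    return np.concatenate([a] + details[::-1])

def sparseness(c, tol=1e-10):
    # fraction of (near-)zero coefficients: a simple sparseness measure
    return np.mean(np.abs(c) < tol)

# piecewise-constant toy "biomedical" signal, length 64
signal = np.concatenate([np.zeros(32), 5.0 * np.ones(32)])
s = [sparseness(haar_dwt(signal, L)) for L in range(1, 6)]
```

For this dyadically aligned step signal every detail band is exactly zero, so each extra level converts more approximation coefficients into zeros and the sparseness rises monotonically with the DL.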
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. After a preprocessing
step, the documents are typically represented as large sparse vectors.
When training classifiers on large collections of documents, both the
time and memory restrictions can be quite prohibitive. This justifies
the application of feature selection methods to reduce the
dimensionality of the document-representation vector. In this paper,
we present three feature selection methods: Information Gain,
Support Vector Machine feature selection (called SVM_FS) and
Genetic Algorithm with SVM (called GA_SVM). We show that the
best results were obtained with GA_SVM method for a relatively
small dimension of the feature vector.
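The Information Gain criterion from the comparison can be sketched on a toy term-document corpus (invented data, for illustration only):

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c/total * math.log2(c/total) for c in counts if c > 0)

def information_gain(term_present, labels):
    """IG of a binary term feature for binary classes: H(C) minus the
    label entropy conditioned on term presence/absence."""
    n = len(labels)
    ig = entropy([labels.count(0), labels.count(1)])
    for val in (0, 1):
        sub = [l for t, l in zip(term_present, labels) if t == val]
        if sub:
            ig -= len(sub) / n * entropy([sub.count(0), sub.count(1)])
    return ig

# toy corpus: term A perfectly predicts the class, term B is noise
labels = [0, 0, 0, 0, 1, 1, 1, 1]
term_a = [1, 1, 1, 1, 0, 0, 0, 0]   # discriminative term
term_b = [1, 0, 1, 0, 1, 0, 1, 0]   # uninformative term
ig_a = information_gain(term_a, labels)
ig_b = information_gain(term_b, labels)
```

Ranking terms by IG and keeping the top-scoring ones is the simplest of the three selection schemes; SVM_FS and GA_SVM replace this filter score with classifier-driven criteria.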
Abstract: This paper proposes classification models to be
used as a proxy for the hard disk drive (HDD) functional test
equivalent, which requires more than two weeks to classify the
HDD status as either “Pass” or “Fail”. These models
were constructed using a committee network consisting of a
number of single neural networks. This paper also includes a
method, called the “enforced learning method”, to solve the
problem of sparse data for the failed parts. Our results reveal that the
constructed classification models with the proposed method could
perform well in the sparse data conditions and thus the models,
which used a few seconds for HDD classification, could be used to
substitute the HDD functional tests.
Abstract: In this paper, a fast high-resolution range profile synthesis algorithm called orthogonal matching pursuit with sensing dictionary (OMP-SD) is proposed. It formulates traditional HRRP synthesis as a sparse approximation problem over a redundant dictionary. As it employs the prior knowledge that the synthetic range profiles (SRP) of targets are sparse, the SRP can be accomplished even in the presence of data loss. Besides, the computational complexity decreases from O(MNDK) flops for OMP to O(M(N + D)K) flops for OMP-SD by introducing the sensing dictionary (SD). Simulation experiments illustrate its advantages in both additive white Gaussian noise (AWGN) and noiseless situations.
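Plain OMP, the baseline that OMP-SD accelerates, can be sketched as follows (toy dimensions and noiseless data are assumptions; the sensing-dictionary speedup itself is not reproduced here):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select atoms by correlation
    with the residual, then re-fit all selected coefficients by least
    squares -- the orthogonalization step that plain MP lacks."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, residual

rng = np.random.default_rng(9)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm redundant dictionary
x_true = np.zeros(50); x_true[[4, 27]] = [1.5, -2.0]
y = D @ x_true                            # noiseless sparse "range profile"
x, residual = omp(D, y, k=2)
```

The per-iteration cost is dominated by the D.T @ residual correlation, which is the step the sensing dictionary restructures to obtain the lower flop count quoted above.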
Abstract: In this paper we propose a novel method for human
face segmentation using the elliptical structure of the human head. It
makes use of the information present in the edge map of the image.
In this approach we use the fact that the eigenvalues of covariance
matrix represent the elliptical structure. The large and small
eigenvalues of covariance matrix are associated with major and
minor axial lengths of an ellipse. The other elliptical parameters are
used to identify the centre and orientation of the face. Since an
Elliptical Hough Transform requires 5D Hough Space, the Circular
Hough Transform (CHT) is used to evaluate the elliptical parameters.
A sparse matrix technique is used to perform the CHT, as it squeezes
out zero elements and stores only a small number of non-zero elements,
thereby offering the advantage of less storage space and computational
time. A neighborhood suppression scheme is used to identify the valid
Hough peaks. The accurate position of the circumference pixels for
occluded and distorted ellipses is identified using Bresenham’s
Raster Scan Algorithm, which uses the geometrical symmetry
properties. This method does not require the evaluation of tangents
for curvature contours, which are very sensitive to noise. The method
has been evaluated on several images with different face orientations.
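The CHT voting scheme can be sketched as follows (a minimal dense-accumulator illustration with an invented synthetic circle; the paper's sparse-matrix storage and neighborhood suppression are not reproduced here):

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Minimal circular Hough transform: each edge pixel votes for the
    centres that would place it on a circle of each candidate radius;
    the accumulator peak gives (radius, cx, cy)."""
    acc = np.zeros((len(radii), shape[0], shape[1]), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (x, y) in edge_points:
        for ri, r in enumerate(radii):
            a = np.rint(x - r * np.cos(thetas)).astype(int)
            b = np.rint(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
            np.add.at(acc[ri], (a[ok], b[ok]), 1)   # unbuffered vote accumulation
    ri, cx, cy = np.unravel_index(np.argmax(acc), acc.shape)
    return radii[ri], cx, cy

# synthetic edge map: a circle of radius 10 centred at (30, 25)
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
pts = {(int(round(30 + 10*np.cos(a))), int(round(25 + 10*np.sin(a)))) for a in t}
r, cx, cy = circular_hough(sorted(pts), (64, 64), [8, 10, 12])
```

Since most accumulator cells never receive a vote, storing `acc` as a sparse matrix, as the paper does, saves both memory and the time spent scanning empty cells for peaks.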