Abstract: Subspace channel estimation methods have been widely
studied; they decompose the covariance matrix to separate the signal
subspace from the noise subspace. The decomposition is normally
performed using either the eigenvalue decomposition (EVD) or the
singular value decomposition (SVD) of the auto-correlation matrix
(ACM). However, the subspace decomposition process is
computationally expensive. This paper
considers the estimation of the multipath slow frequency hopping
(FH) channel using a noise-subspace-based method. In particular, an
efficient method is proposed to estimate the multipath time delays by
applying the multiple signal classification (MUSIC) algorithm to the
null space extracted by the rank-revealing LU (RRLU) factorization.
The RRLU factorization provides precise information about the
numerical rank and the numerical null space, both important tools in
linear algebra. Simulation results demonstrate the effectiveness of
the proposed method, which reduces the computational complexity to
approximately half that of rank-revealing QR (RRQR) based methods
while maintaining the same performance.
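The rank-revealing idea above can be illustrated with a minimal sketch: Gaussian elimination with complete pivoting, whose pivot magnitudes drop sharply once the numerical rank is exhausted. This is a simplified stand-in for a true RRLU factorization, not the paper's algorithm; the function name and test matrix are illustrative only.

```python
import numpy as np

def lu_rank(A, tol=1e-10):
    """Numerical rank via Gaussian elimination with complete pivoting.

    The successive pivots stay large while significant directions
    remain; once they fall below tol, the remaining columns belong to
    the numerical null space. A simplified stand-in for RRLU.
    """
    A = np.array(A, dtype=float)
    m, n = A.shape
    rank = 0
    for k in range(min(m, n)):
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        if sub[i, j] < tol:                    # no significant pivot left
            break
        A[[k, k + i], :] = A[[k + i, k], :]    # row swap
        A[:, [k, k + j]] = A[:, [k + j, k]]    # column swap
        rank += 1
        A[k + 1:, k] /= A[k, k]                # multipliers
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return rank

# A 4x3 matrix built as the sum of two rank-one terms, so its rank is 2.
M = (np.outer([1., 2., 3., 4.], [1., 0., 2.])
     + np.outer([0., 1., 1., 2.], [3., 1., 0.]))
print(lu_rank(M))  # 2
```

In a MUSIC-style estimator, the null space identified this way replaces the noise subspace normally obtained from an EVD or SVD.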
Abstract: In this work, a method of time delay estimation for
dual-channel acoustic signals (speech, music, etc.) recorded under
reverberant conditions is investigated. Standard methods based on
cross-correlation of the signals perform poorly in cases involving
strong reverberation, large distances between microphones, and
asynchronous recordings. Under these conditions, a method based
on cross-correlation of the temporal envelopes of the signals delivers
delay estimates of acceptable quality. This method and its properties
are described and investigated in detail, including its limits of
applicability. Optimal parameter selection for the method and a
comparison with other known time delay estimation methods are
also provided.
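Envelope cross-correlation can be sketched in a few lines. This is a hedged illustration, not the paper's exact processing chain: the envelope here is simple rectification plus smoothing (rather than, say, a Hilbert-transform envelope), and all function names are my own.

```python
import numpy as np

def envelope(x, win=64):
    """Crude amplitude envelope: rectification followed by a moving
    average (a simple stand-in for a Hilbert-transform envelope)."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def envelope_delay(x, y, win=64):
    """Delay of y relative to x, in samples, from the cross-correlation
    of the two mean-removed temporal envelopes."""
    ex = envelope(x, win)
    ey = envelope(y, win)
    ex -= ex.mean()
    ey -= ey.mean()
    corr = np.correlate(ey, ex, mode="full")
    return int(np.argmax(corr)) - (len(ex) - 1)

# Synthetic check: amplitude-modulated noise delayed by 200 samples.
rng = np.random.default_rng(0)
n = 4000
slow = np.convolve(rng.random(n), np.ones(300) / 300, mode="same")
x = (0.2 + slow) * rng.standard_normal(n)    # noise carrier, slow envelope
y = np.zeros(n)
y[200:] = x[:-200]                           # delayed copy of x
print(envelope_delay(x, y))                  # close to 200
```

The key point is that the slow envelope survives distortions of the fine waveform structure, which is why this estimator tolerates reverberation and asynchronous recordings better than waveform cross-correlation.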
Abstract: In this paper, the problem of estimating the time delay
between two spatially separated noisy sinusoidal signals via system
identification modeling is addressed. The system is assumed to be
perturbed by both input and output additive white Gaussian noise. The
presence of input noise introduces bias in the time delay estimates.
Normally, the solution requires a priori knowledge of the input-output
noise variance ratio. We cascade a self-tuned filter with the time
delay estimator, making the delay estimates robust to
input noise. Simulation results are presented to confirm the superiority
of the proposed approach at low input signal-to-noise ratios.
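For a sinusoid of known frequency, a simple baseline estimator reads the delay off the cross-spectrum phase at that frequency. This sketch is not the paper's self-tuned-filter cascade (and carries no bias correction for input noise); the function name and test values are illustrative assumptions.

```python
import numpy as np

def sinusoid_delay(x, y, f0, fs):
    """Delay of y relative to x, in seconds, for a sinusoid of known
    frequency f0 Hz sampled at fs Hz, from the cross-spectrum phase at
    the f0 bin. Valid only for delays under half a period (phase wraps).
    """
    n = len(x)
    k = int(round(f0 * n / fs))               # FFT bin carrying f0
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    phase = np.angle(Y[k] * np.conj(X[k]))    # equals -2*pi*f0*delay
    return -phase / (2 * np.pi * f0)

# Noisy 100 Hz sinusoids, true delay 2 ms (period 10 ms).
rng = np.random.default_rng(1)
fs, f0, d = 8000, 100.0, 0.002
t = np.arange(fs) / fs                        # one second of samples
x = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(fs)
y = np.sin(2 * np.pi * f0 * (t - d)) + 0.3 * rng.standard_normal(fs)
print(sinusoid_delay(x, y, f0, fs))           # close to 0.002
```

Note that averaging over a full second of data suppresses the additive noise at the f0 bin; at very low SNR or with unknown f0, more elaborate schemes such as the adaptive cascade described above become necessary.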