Improved Closed Set Text-Independent Speaker Identification by Combining MFCC with Evidence from Flipped Filter Banks

A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for a generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as the standard acoustic feature set for SI applications. However, owing to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This paper proposes a new feature set based on a complementary filter bank structure that improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features, which are difficult to extract, the proposed feature set imposes little computational burden during extraction. When combined with MFCC through a parallel implementation of speaker models, the proposed feature set significantly outperforms the MFCC baseline. This claim is validated by experiments on two public databases of different kinds, YOHO (microphone speech) and POLYCOST (telephone speech), with Gaussian Mixture Models (GMM) as the classifier at various model orders.
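
To make the two ideas in the abstract concrete, the Python sketch below shows one plausible realisation: a triangular filter bank whose Mel-spaced edge frequencies are mirrored ("flipped") about the mid-point of the analysis band so that the filters become dense in the high-frequency region, and a score-level fusion of the two parallel GMM systems. The abstract does not fix these details; the mirroring rule, the DCT-based cepstra, the function names (flipped_filterbank, cepstra, fuse_scores) and the equal fusion weight are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def triangular_filterbank(edges_hz, n_fft, sr):
    # Triangular filters defined by n_filters + 2 edge frequencies in Hz.
    n_filters = len(edges_hz) - 2
    bins = np.floor((n_fft + 1) * np.asarray(edges_hz) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):
            fbank[i - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fbank[i - 1, k] = (right - k) / max(right - centre, 1)
    return fbank

def mel_filterbank(n_filters, n_fft, sr):
    # Standard Mel spacing: filters crowd the low-frequency region.
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2))
    return triangular_filterbank(edges, n_fft, sr)

def flipped_filterbank(n_filters, n_fft, sr):
    # Assumed "flipped" structure: Mel edge frequencies mirrored about the
    # mid-point of the band, so the filters crowd the high-frequency region.
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2))
    return triangular_filterbank((sr / 2.0 - edges)[::-1], n_fft, sr)

def cepstra(power_spec, fbank, n_ceps=13):
    # Log filter-bank energies followed by a DCT-II, exactly as for MFCC.
    log_e = np.log(np.maximum(power_spec @ fbank.T, 1e-10))
    n_filt = fbank.shape[0]
    k, n = np.arange(n_ceps)[:, None], np.arange(n_filt)[None, :]
    dct_basis = np.cos(np.pi * k * (2 * n + 1) / (2 * n_filt))
    return log_e @ dct_basis.T

def fuse_scores(loglik_mfcc, loglik_flipped, w=0.5):
    # Parallel combination of the two GMM systems: weighted sum of the
    # per-speaker log-likelihood scores (equal weights assumed here).
    return w * np.asarray(loglik_mfcc) + (1.0 - w) * np.asarray(loglik_flipped)

At test time each enrolled speaker would be scored by both GMMs, one trained on MFCC and one on the flipped-filter-bank cepstra, and the identified speaker taken as the argmax of the fused score; since the abstract leaves the combination rule unspecified, the equal-weight sum above is only one reasonable choice.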




