Speaker verification is the process of verifying the claimed identity of a speaker based on the speech signal from that speaker (the voiceprint). There are two types of speaker verification systems: Text-Independent Speaker Verification (TI-SV) and Text-Dependent Speaker Verification (TD-SV). TD-SV requires the speaker to say exactly the enrolled or prompted password, whereas TI-SV verifies identity without any constraint on the speech content. Compared to TD-SV, TI-SV is more convenient because the user can speak freely to the system; however, it requires longer training and testing utterances to achieve good performance. We have worked on both text-dependent and text-independent speaker verification.
In TI-SV, we propose a novel interpretation of the Universal Background Model (UBM): we view it as a mapping function that transforms variable-length observations (speech utterances) into fixed-dimensional feature vectors (sufficient statistics). After this mapping, a similarity measure is computed on the fixed-dimensional features. With this interpretation, we propose a new similarity measure that yields more than 10% relative improvement over the conventional UBM-MAP framework in both equal error rate (EER) and detection cost function. Performance can be further improved by progressively refining the similarity measure in this vector space via an iterative cohort modeling scheme.
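As an illustration of the utterance-transform idea, the sketch below maps a variable-length sequence of frames to a fixed-dimensional vector of count-normalized, UBM-centered first-order Baum-Welch statistics under a diagonal-covariance GMM, and then compares two such vectors with cosine similarity. This is a minimal numpy sketch under assumed conventions (the function names, the centering step, and the choice of cosine similarity are illustrative, not the paper's exact formulation):

```python
import numpy as np

def utterance_transform(frames, weights, means, variances):
    """Map a variable-length utterance (frames: T x D) to a fixed K*D vector
    of count-normalized, UBM-centered first-order Baum-Welch statistics
    under a diagonal-covariance GMM serving as the UBM."""
    # Per-frame log-density of each Gaussian component (T x K).
    diff = frames[:, None, :] - means[None, :, :]              # T x K x D
    log_comp = (
        -0.5 * np.sum(diff**2 / variances[None, :, :], axis=2)
        - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)[None, :]
        + np.log(weights)[None, :]
    )
    # Posterior responsibilities via log-sum-exp for numerical stability.
    log_norm = np.logaddexp.reduce(log_comp, axis=1, keepdims=True)
    post = np.exp(log_comp - log_norm)                         # T x K
    n = post.sum(axis=0)                                       # zeroth-order stats (K,)
    f = post.T @ frames                                        # first-order stats (K x D)
    # Normalize by soft counts, center on the UBM means, flatten.
    centered = f / np.maximum(n, 1e-8)[:, None] - means
    return centered.ravel()

def cosine_similarity(a, b):
    """One simple similarity measure on the fixed-dimensional vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Because every utterance, regardless of length, lands in the same K*D-dimensional space, refinements such as cohort modeling reduce to ordinary vector-space operations on these statistics.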
In TD-SV, we present an approach more robust than the classical likelihood ratio test (LRT). Our algorithm uses a hybrid generative-discriminative framework: a generative model learns the characteristics of a speaker, and a discriminative model then discriminates between that speaker and impostors. One advantage of the proposed algorithm is that it does not require retraining the generative model. On average, the proposed model yields a 36.41% relative improvement in EER over the LRT.
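The hybrid idea can be sketched as follows: frozen generative models (a speaker GMM and a UBM) produce classical LRT scores, and a discriminative classifier is then trained on those scores to separate target trials from impostor trials, without ever retraining the generative models. This is an illustrative numpy sketch, not the ensemble method of the paper; the single-feature logistic regression stands in for the discriminative stage, and all function names are assumptions:

```python
import numpy as np

def gmm_loglik(frames, weights, means, variances):
    """Total log-likelihood of frames (T x D) under a diagonal-covariance GMM."""
    diff = frames[:, None, :] - means[None, :, :]
    log_comp = (
        -0.5 * np.sum(diff**2 / variances[None, :, :], axis=2)
        - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)[None, :]
        + np.log(weights)[None, :]
    )
    return float(np.logaddexp.reduce(log_comp, axis=1).sum())

def llr_score(frames, speaker_gmm, ubm):
    """Classical LRT statistic: average per-frame log-likelihood ratio
    between the (frozen) speaker model and the UBM."""
    return (gmm_loglik(frames, *speaker_gmm) - gmm_loglik(frames, *ubm)) / len(frames)

def train_discriminative(scores, labels, lr=0.5, steps=500):
    """Discriminative stage: logistic regression on LRT scores, learning a
    decision boundary between target (label 1) and impostor (label 0)
    trials. The generative models are never touched."""
    w, b = 0.0, 0.0
    x = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid predictions
        w -= lr * np.mean((p - y) * x)          # gradient of log loss
        b -= lr * np.mean(p - y)
    return w, b
```

The design point is the division of labor: the generative stage summarizes each trial into a score, and only the lightweight discriminative stage is (re)trained, which is why no retraining of the speaker or background models is needed.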
- M. Liu, T. S. Huang, and Z. Zhang, "Robust Local Scoring Function for Text-Independent Speaker Verification", in Proc. International Conference on Pattern Recognition (ICPR), pages 1146-1149, August 20-24, 2006, Hong Kong.
- M. Liu, H. Ning, Z. Zhang, and T. Huang, "A Novel Framework of Text-Independent Speaker Verification Based on Utterance Transform and Iterative Cohort Modeling", in Proc. Ninth International Conference on Spoken Language Processing (Interspeech 2006 – ICSLP), Pittsburgh, Pennsylvania, September 17-21, 2006.
- M. Liu, Z. Zhang, M. Hasegawa-Johnson, and T. Huang, "Exploring Discriminative Learning for Text-Independent Speaker Recognition", in Proc. International Conference on Multimedia & Expo (ICME 2007), pages 56-59, Beijing, China, July 2-5, 2007.
- M. Liu, X. Zhou, M. Hasegawa-Johnson, Z. Zhang, and T. Huang, "Frequency Domain Correspondence for Speaker Normalization", in Proc. Interspeech 2007 – Eurospeech, pages 274-277, Antwerp, Belgium, August 27-31, 2007.
- A. Subramanya, Z. Zhang, A. Surendran, P. Nguyen, M. Narasimhan, and A. Acero, "A Generative-Discriminative Framework Using Ensemble Methods for Text-Dependent Speaker Verification", in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2007), Vol. IV, pages 225-228, Honolulu, Hawaii, April 15-20, 2007.