Speaker verification is the process of verifying the claimed identity of a speaker based on the speech signal from that speaker (the voiceprint). There are two types of speaker verification systems: text-independent speaker verification (TI-SV) and text-dependent speaker verification (TD-SV). TD-SV requires the speaker to say exactly the enrolled or prompted password, whereas TI-SV verifies identity without any constraint on the speech content. Compared with TD-SV, TI-SV is more convenient because the user can speak freely to the system; however, it requires longer training and testing utterances to achieve good performance. We have worked on both text-dependent and text-independent speaker verification.
In TI-SV, we propose a novel interpretation of the Universal Background Model (UBM): we consider it as a mapping function that transforms variable-length observations (speech utterances) into a fixed-dimensional feature vector (the sufficient statistics). After this mapping, a similarity measure is computed on the fixed-dimensional features. With this interpretation, we propose a new similarity measure that yields more than 10% relative improvement over the conventional UBM-MAP framework in both equal error rate (EER) and detection cost function (DCF). Performance can be further improved by progressively refining the similarity measure in this vector space via an iterative cohort modeling scheme.
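The mapping step above can be sketched in a few lines: a GMM UBM turns an utterance of any length into fixed-size Baum-Welch sufficient statistics, on which a similarity can then be computed. This is an illustrative toy, not the proposed similarity measure; the diagonal-covariance GMM, the centered first-order statistics, and the cosine similarity are all assumptions made for the sketch.

```python
import numpy as np

def ubm_statistics(frames, weights, means, covs):
    """Map a variable-length utterance (frames x dim) to fixed-size
    sufficient statistics under a diagonal-covariance GMM UBM."""
    C, D = means.shape
    # Per-frame log-likelihood of each of the C Gaussian components.
    ll = np.empty((frames.shape[0], C))
    for c in range(C):
        diff = frames - means[c]
        ll[:, c] = (np.log(weights[c])
                    - 0.5 * np.sum(np.log(2 * np.pi * covs[c]))
                    - 0.5 * np.sum(diff ** 2 / covs[c], axis=1))
    # Posterior (responsibility) of each component for each frame.
    post = np.exp(ll - ll.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # Zeroth- and centered first-order statistics: fixed-size output
    # regardless of how many frames the utterance has.
    n = post.sum(axis=0)                       # shape (C,)
    f = post.T @ frames - n[:, None] * means   # shape (C, D)
    return n, f

def cosine_similarity(a, b):
    """One simple similarity on the fixed-dimensional statistics."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Note that two utterances of very different durations now map into the same vector space, so any off-the-shelf similarity (or a learned one, as in our work) can compare them directly.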
In TD-SV, we present an approach that is more robust than the classical likelihood ratio test (LRT). Our algorithm uses a hybrid generative-discriminative framework: a generative model learns the characteristics of a speaker, and a discriminative model then separates the speaker from impostors. One advantage of the proposed algorithm is that it does not require retraining the generative model. The proposed model yields, on average, a 36.41% relative improvement in EER over the LRT.
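The hybrid idea can be illustrated with a toy example: generative models produce scores for each utterance, and instead of thresholding their log-likelihood ratio (the classical LRT), a discriminative classifier is trained on those scores. Everything here is a simplified stand-in: single Gaussians replace the GMMs, and plain logistic regression replaces the discriminative model actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_loglik(x, mean, var):
    """Per-frame log-likelihood under a diagonal Gaussian
    (a stand-in for a full GMM to keep the sketch short)."""
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var),
                  axis=1)

# Generative stage: a fixed "speaker model" and "background model".
spk_mean = np.array([1.0, 1.0])
bg_mean = np.array([0.0, 0.0])
var = np.array([1.0, 1.0])

def score_vector(utt):
    """Two generative scores per utterance. The classical LRT would
    threshold their difference; the hybrid approach instead feeds
    them to a discriminative classifier."""
    return np.array([gauss_loglik(utt, spk_mean, var).mean(),
                     gauss_loglik(utt, bg_mean, var).mean()])

# Synthetic training data: target-speaker utterances (label 1)
# and impostor utterances (label 0), 50 frames each.
X = np.array([score_vector(rng.normal(m, 1.0, size=(50, 2)))
              for m in ([1.0, 1.0],) * 20 + ([0.0, 0.0],) * 20])
y = np.array([1] * 20 + [0] * 20)

# Discriminative stage: logistic regression by gradient descent.
# The generative models above are never retrained in this step.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)
```

The decision boundary is now learned from both speaker and impostor data rather than fixed by a likelihood-ratio threshold, which is the intuition behind the robustness gain.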