In this paper, a quantitative, parametric model is presented that incorporates the role of context-dependent formant transitions in the phonetic decoding of speech. The idea is to use locus equations, established in speech science, to constrain the model parameters so that the number of parameters for the context-dependent model is reduced to the same order as that for the context-independent model. In contrast to knowledge-based, largely qualitative approaches that also employ formant-transition information, our approach allows the parameters of the phonetic classifier, including the locus-equation slopes and intercepts, in conjunction with the hidden Markov model parameters, to be subject to mathematical optimization via an effective and efficient training procedure developed in this study. Detailed analysis of the results obtained from automatic training shows that the estimates of the locus-equation slopes and intercepts are consistent with those manually derived and reported in the speech science literature. Further, experimental results using a phonetic classifier trained on the TIMIT database demonstrate the effectiveness of the new model, measured by a 15% reduction in classification error rate, in comparison with a conventional statistical phonetic classifier under identical training and testing conditions.
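To make the locus-equation constraint concrete, the sketch below shows how such slopes and intercepts are conventionally estimated. A locus equation is a linear regression of the second-formant (F2) frequency at consonant-vowel voicing onset against F2 at the vowel midpoint, fit separately per consonant context; the slope and intercept are the parameters the paper constrains and jointly optimizes with the HMM. All numerical values and variable names below are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" locus-equation parameters for one consonant context:
#   F2_onset = slope * F2_mid + intercept
true_slope, true_intercept = 0.7, 600.0  # intercept in Hz

# Simulated vowel-midpoint F2 values (Hz) and noisy onset measurements.
f2_mid = rng.uniform(900.0, 2400.0, size=200)
f2_onset = true_slope * f2_mid + true_intercept + rng.normal(0.0, 30.0, 200)

# Ordinary least squares recovers the locus-equation slope and intercept,
# mirroring how they are manually derived in the speech science literature.
slope, intercept = np.polyfit(f2_mid, f2_onset, deg=1)
print(f"slope={slope:.3f}, intercept={intercept:.1f} Hz")
```

Because each context is summarized by just two scalars (slope and intercept) rather than a full set of context-specific distributions, the context-dependent model's parameter count stays on the order of the context-independent one.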