This paper presents a new approach to formant tracking based on a parameter-free non-linear predictor that maps formant frequencies and bandwidths into the acoustic feature space. The approach decomposes the speech signal into two components: the first captures the mapping between formants and acoustic observations, while the second captures the residual in the signal. We build the mapping by quantizing the formant space and creating a predictor codebook. Formant tracking is then achieved by: 1) training the parameters of the residual component with EM, and 2) searching the predictor codebook for the best formant values. We explore both MAP and MMSE criteria for formant tracking under the proposed approach. Furthermore, we impose first-order continuity constraints on formant trajectories and use a Viterbi search to perform the tracking. We present formant tracking results on data from the Switchboard corpus.
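The codebook search with first-order continuity constraints can be sketched as a standard Viterbi recursion over codebook entries. This is an illustrative reconstruction, not the paper's implementation: the function name, the form of the local cost (a per-frame prediction error for each codebook entry, assumed precomputed), and the squared-frequency-jump transition penalty with weight `smooth` are all assumptions made here for concreteness.

```python
import numpy as np

def viterbi_formant_track(local_cost, codebook, smooth=1e-6):
    """Hypothetical sketch of codebook-based Viterbi formant tracking.

    local_cost: (T, K) array, prediction error of each of the K codebook
        entries at each of the T frames (assumed given).
    codebook: (K,) array of candidate formant frequencies in Hz.
    smooth: weight of the squared frequency-jump continuity penalty,
        implementing a first-order smoothness constraint on the trajectory.
    """
    T, K = local_cost.shape
    # transition[i, j]: penalty for moving from codebook entry i to entry j.
    jump = codebook[None, :] - codebook[:, None]
    transition = smooth * jump ** 2

    cost = local_cost[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + transition          # (K, K) path costs
        backptr[t] = np.argmin(total, axis=0)
        cost = total[backptr[t], np.arange(K)] + local_cost[t]

    # Backtrace the minimum-cost path of codebook indices.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        path[t - 1] = backptr[t, path[t]]
    return codebook[path]
```

With `smooth = 0` this reduces to an independent per-frame MAP pick over the codebook; increasing `smooth` trades local fit for trajectory smoothness.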