We present a novel acoustic model of speech for speech recognition, based on statistical hidden trajectory modeling (HTM) with bi-directional vocal tract resonance (VTR) target filtering. The HTM comprises two stages of the generative process of speech: from the phone sequence to VTR dynamics, and then to the cepstrum-based acoustic observation. Two types of model implementation are detailed: one with straightforward two-stage cascading, and another that integrates over the statistical distribution of the VTR both in model construction and in computing the acoustic likelihood. Using a first-order Taylor series approximation to the nonlinearity in the VTR-to-cepstrum prediction component of the HTM, the acoustic likelihood is established in analytical form. It is a Gaussian with a time-varying mean that captures structured long-span context dependence over the entire utterance, and with a dynamically adjusted variance proportional to the squared "local slope" of the nonlinear mapping function from VTR to cepstrum. When the HTM parameters are trained by maximizing this "integrated" likelihood, a dramatic reduction of an upper error bound is achieved on the standard TIMIT phonetic recognition task using a large-scale N-best rescoring paradigm.
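The linearization step above admits a compact illustration. If the VTR variable z is Gaussian and the observation is generated as o = F(z) + noise, then expanding F to first order around the VTR mean yields a Gaussian predictive likelihood whose mean is F evaluated at the mean and whose variance scales with the squared local slope F'. The sketch below is illustrative only: the scalar mapping F, its derivative dF, and the function name taylor_gaussian_push are hypothetical stand-ins for the paper's actual multivariate VTR-to-cepstrum mapping.

```python
import math

def taylor_gaussian_push(F, dF, mu_z, var_z, var_noise):
    """Propagate z ~ N(mu_z, var_z) through o = F(z) + noise via the
    first-order Taylor expansion F(z) ≈ F(mu_z) + dF(mu_z) * (z - mu_z).
    The result is Gaussian with mean F(mu_z) and variance
    dF(mu_z)**2 * var_z + var_noise, i.e. the predictive variance is
    proportional to the squared "local slope" of F."""
    slope = dF(mu_z)
    return F(mu_z), slope ** 2 * var_z + var_noise

def gaussian_loglik(o, mean, var):
    """Log-likelihood of observation o under N(mean, var)."""
    return -0.5 * (math.log(2.0 * math.pi * var) + (o - mean) ** 2 / var)
```

With F = sin as a toy nonlinearity, the predictive mean is sin(mu_z) and the variance shrinks where the slope cos(mu_z) is small, mirroring the dynamically adjusted variance described above.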