An Empirical Study of Smoothing Techniques for Language Modeling

  • Stanley F. Chen
  • Joshua Goodman

Proceedings of the 34th Annual Meeting of the ACL

We present a tutorial introduction to n-gram models for language modeling and survey the most widely-used smoothing algorithms for such models. We then present an extensive empirical comparison of several of these smoothing techniques, including those described by Jelinek and Mercer (1980), Katz (1987), Bell, Cleary, and Witten (1990), Ney, Essen, and Kneser (1994), and Kneser and Ney (1995). We investigate how factors such as training data size, training corpus (e.g., Brown versus Wall Street Journal), count cutoffs, and n-gram order (bigram versus trigram) affect the relative performance of these methods, which is measured through the cross-entropy of test data. Our results show that previous comparisons have not been complete enough to fully characterize smoothing algorithm performance. We introduce methodologies for analyzing smoothing algorithm efficacy in detail, and using these techniques we motivate a novel variation of Kneser-Ney smoothing that consistently outperforms all other algorithms evaluated. Finally, results showing that improved language model smoothing leads to improved speech recognition performance are presented.
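
The evaluation measure named in the abstract (cross-entropy of test data) and the best-performing family of methods it highlights (Kneser-Ney smoothing) can be illustrated with a short sketch. The code below is a minimal, illustrative Python implementation of interpolated bigram Kneser-Ney smoothing and a bits-per-word cross-entropy computation; it is not the paper's implementation. The fixed discount of 0.75, the sentence-boundary markers, the toy corpora, and the lack of out-of-vocabulary handling are assumptions made for the example.

```python
# Sketch: interpolated Kneser-Ney bigram smoothing + cross-entropy evaluation.
# Illustrative only; discount value, boundary symbols, and corpora are assumed.
from collections import Counter, defaultdict
import math

def train_kn_bigram(sentences, discount=0.75):
    """Collect the counts needed for interpolated Kneser-Ney smoothing."""
    bigram_counts = Counter()
    history_counts = Counter()
    followers = defaultdict(set)   # w1 -> distinct words seen after w1
    histories = defaultdict(set)   # w2 -> distinct words seen before w2
    for sent in sentences:
        words = ["<s>"] + sent + ["</s>"]
        for w1, w2 in zip(words, words[1:]):
            bigram_counts[(w1, w2)] += 1
            history_counts[w1] += 1
            followers[w1].add(w2)
            histories[w2].add(w1)
    num_bigram_types = len(bigram_counts)
    return bigram_counts, history_counts, followers, histories, num_bigram_types, discount

def kn_prob(w1, w2, model):
    """P_KN(w2 | w1): discounted bigram estimate interpolated with the
    continuation ("how many distinct contexts") unigram distribution."""
    bigrams, hist_counts, followers, histories, types, D = model
    p_cont = len(histories[w2]) / types        # continuation probability
    c_hist = hist_counts[w1]
    if c_hist == 0:                            # unseen history: fall back entirely
        return p_cont
    discounted = max(bigrams[(w1, w2)] - D, 0.0) / c_hist
    lam = D * len(followers[w1]) / c_hist      # interpolation weight
    return discounted + lam * p_cont

def cross_entropy(sentences, model):
    """Cross-entropy in bits per word over a test set (lower is better)."""
    log_prob, n_words = 0.0, 0
    for sent in sentences:
        words = ["<s>"] + sent + ["</s>"]
        for w1, w2 in zip(words, words[1:]):
            log_prob += math.log2(kn_prob(w1, w2, model))
            n_words += 1
    return -log_prob / n_words

# Toy usage: every test word type occurs in training, so no zero probabilities arise.
train = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
test = [["the", "cat", "ran"]]
model = train_kn_bigram(train)
print(f"test cross-entropy: {cross_entropy(test, model):.3f} bits/word")
```

The key design point of Kneser-Ney, reflected in `p_cont`, is that the lower-order distribution is based on how many distinct contexts a word follows rather than on its raw frequency, which is what distinguishes it from the other interpolation and back-off methods compared in the paper.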