Abstract

In this paper, we investigate random forest-based language model adaptation. Large
amounts of out-of-domain data are used to grow the decision trees while very small
amounts of in-domain data are used to prune them back, so that the structure
of the trees is suitable for the desired domain and the probabilities in the tree nodes are
reliably estimated. Extensive experiments are carried out and results are reported on
the task of adapting a Broadcast News language model to the MIT computer science
lecture domain. We show 0.80% and 0.60% absolute WER improvements over language model
interpolation and count merging techniques, respectively.
