Abstract

“Efficient Training Algorithms for HMM’s using Incremental Estimation” investigates EM procedures intended to increase training speed. The authors’ claim that these are GEM procedures is incorrect. We discuss why this is so, give an example in which the likelihood behaves non-monotonically even though the procedure converges to a local maximum, and outline conditions under which such convergence is guaranteed.