We present a method for conditional maximum likelihood estimation of Naive Bayes models that employs a well-known technique based on a generalization of the Baum-Eagon inequality from polynomials to rational functions. The main advantage of the procedure is that the model parameters (probabilities) remain properly normalized at each iteration. We apply models trained under the maximum likelihood and conditional maximum likelihood criteria, respectively, to a text classification problem. A simple modification of the algorithm increases convergence speed significantly over a straightforward implementation, and the model trained under the conditional maximum likelihood criterion achieves a relative improvement of 40% in classification accuracy over its maximum likelihood counterpart.
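As a rough illustration of the kind of update the abstract describes, the sketch below applies a growth-transform (extended Baum-Welch style) reestimation, derived from the rational-function generalization of the Baum-Eagon inequality, to conditional maximum likelihood training of a Bernoulli Naive Bayes model. The synthetic dataset, the constant `D`, and the specific update form are illustrative assumptions, not the paper's exact algorithm; the point is that each update maps probability vectors back onto the simplex, so the parameters stay normalized at every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data (not from the paper): N examples,
# F binary features, C classes.
N, F, C = 200, 5, 2
true_theta = rng.uniform(0.2, 0.8, size=(C, F))   # P(x_f = 1 | y = c)
y = rng.integers(0, C, size=N)
x = (rng.uniform(size=(N, F)) < true_theta[y]).astype(float)

# Model parameters: class priors pi[c] and Bernoulli probs theta[c, f].
pi = np.full(C, 1.0 / C)
theta = np.full((C, F), 0.5)

def posteriors(pi, theta, x):
    """P(c | x_n) under the Naive Bayes model, for every example."""
    log_lik = x @ np.log(theta).T + (1.0 - x) @ np.log(1.0 - theta).T
    log_joint = log_lik + np.log(pi)
    log_joint -= log_joint.max(axis=1, keepdims=True)
    post = np.exp(log_joint)
    return post / post.sum(axis=1, keepdims=True)

def cond_log_lik(pi, theta, x, y):
    """Conditional log-likelihood sum_n log P(y_n | x_n)."""
    post = posteriors(pi, theta, x)
    return float(np.log(post[np.arange(len(y)), y]).sum())

cll_init = cond_log_lik(pi, theta, x, y)

# Growth-transform constant; it must be large enough to keep all
# reestimated probabilities positive (2N is a safe, if slow, choice here).
D = 2.0 * N
for _ in range(100):
    post = posteriors(pi, theta, x)      # denominator (model) statistics
    onehot = np.eye(C)[y]                # numerator (empirical) statistics
    diff = onehot - post                 # shape (N, C)

    # Reestimate the class priors: one probability simplex over C classes.
    pi = diff.sum(axis=0) + D * pi
    pi /= pi.sum()

    # Reestimate each Bernoulli P(x_f | c), treated as a two-point
    # distribution {theta, 1 - theta}; num_v = q_v * dCLL/dq_v.
    num1 = diff.T @ x                    # statistics for value x_f = 1
    num0 = diff.T @ (1.0 - x)            # statistics for value x_f = 0
    theta = (num1 + D * theta) / (num1 + num0 + D)

cll_final = cond_log_lik(pi, theta, x, y)
print(cll_init, cll_final)
```

Because the denominator of each update is the sum of its numerators, every reestimated distribution sums to one by construction, which is the normalization property the abstract highlights; larger `D` guarantees growth of the objective but slows convergence, which is why implementations tune it.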