Unsupervised learning of linguistic structure is a difficult task. Standard techniques such as maximum-likelihood estimation frequently yield poor results, or are simply inappropriate (as when the models under consideration vary in complexity). In this talk, I discuss how Bayesian statistical methods can be applied to unsupervised language learning to develop principled model-based systems and improve results. I first present work on word segmentation, showing that maximum-likelihood estimation is inappropriate for this task and describing a nonparametric Bayesian modeling solution. I then argue, using part-of-speech tagging as an example, that a Bayesian approach offers advantages even when maximum-likelihood (or maximum a posteriori) estimation is possible. I conclude by discussing some of the challenges that remain in pursuing a Bayesian approach to language learning.