Spoken dialog systems present a classic example of planning under uncertainty. Speech recognition errors are ubiquitous and impossible to detect reliably, so the state of the conversation can never be known with certainty. Despite this, the system must choose actions to make progress toward a long-term goal. As such, decision theory, and in particular partially observable Markov decision processes (POMDPs), presents an attractive approach to building spoken dialog systems. Initial work on "toy" dialog systems validated the benefits of the POMDP approach; however, it also found that straightforward application of POMDPs could not scale to real-world problems. Subsequent work by a number of research teams has scaled up planning and belief monitoring, incorporated high-fidelity user simulations, and married commercial development practices with automatic optimization. Today, statistical dialog systems are being fielded by research labs for public use. This chapter traces the history of POMDP-based spoken dialog systems, and sketches avenues for future work.