The behaviour of spoken dialogue systems is traditionally determined by expert-coded rules, so a new set of rules is required for each new domain. Rule-based systems also do not naturally deal with uncertainty in the input, which makes them sensitive to speech understanding errors, and they do not improve over time. The partially observable Markov decision process (POMDP) has been proposed as an alternative model for dialogue. POMDPs can be optimised automatically to maximise the reward, a measure of dialogue success; they are robust to speech understanding errors and can adapt to new data. A POMDP-based dialogue manager maintains a distribution over all possible dialogue states, the belief state, and chooses the action that gives the highest expected reward given that distribution. The main challenge with the POMDP-based approach, however, is making it tractable to maintain the belief state and to optimise the action selection. In this talk, methods will be presented for making POMDP-based dialogue management scalable and suitable for flexible real-world dialogue systems.
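The belief-state tracking and action selection described above can be sketched in a few lines. This is a toy illustration only: the states, transition and observation probabilities, and rewards below are invented for the example and are not from the talk; real systems use far larger state spaces, learned models, and long-horizon policy optimisation rather than the one-step greedy choice shown here.

```python
# Toy POMDP-style dialogue manager: Bayesian belief update over hidden
# user goals, then greedy action selection by expected immediate reward.
# All models (states, probabilities, rewards) are illustrative assumptions.

STATES = ["want_info", "want_booking"]

# P(s' | s, a): transition model after the system action "ask" (toy values)
TRANSITION = {
    ("want_info", "ask"): {"want_info": 0.9, "want_booking": 0.1},
    ("want_booking", "ask"): {"want_info": 0.2, "want_booking": 0.8},
}

# P(o | s'): observation model for noisy speech understanding (toy values)
OBSERVATION = {
    "heard_info": {"want_info": 0.7, "want_booking": 0.3},
    "heard_booking": {"want_info": 0.2, "want_booking": 0.8},
}

def belief_update(belief, action, observation):
    """Bayes filter: b'(s') proportional to P(o|s') * sum_s P(s'|s,a) b(s)."""
    new_belief = {}
    for s2 in STATES:
        predicted = sum(TRANSITION[(s, action)][s2] * belief[s] for s in STATES)
        new_belief[s2] = OBSERVATION[observation][s2] * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# R(s, a): immediate reward for acting correctly on the user's goal (toy values)
REWARD = {
    ("want_info", "give_info"): 10, ("want_info", "book"): -5,
    ("want_booking", "give_info"): -5, ("want_booking", "book"): 10,
}

def greedy_action(belief, actions=("give_info", "book")):
    """Pick the action with the highest expected reward under the belief."""
    return max(actions, key=lambda a: sum(belief[s] * REWARD[(s, a)] for s in STATES))

# Start uncertain, hear a (possibly misrecognised) booking request, act.
b = {"want_info": 0.5, "want_booking": 0.5}
b = belief_update(b, "ask", "heard_booking")
print(b, greedy_action(b))
```

Because the system acts on the full distribution rather than a single best hypothesis, a speech understanding error shifts the belief gradually instead of committing the dialogue to a wrong state; the tractability problem the talk addresses arises when the state space and planning horizon grow far beyond this two-state, one-step example.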