Speech translation differs in important ways from text-based machine translation, and training data for speech translation are usually very limited. Domain adaptation is therefore crucial for achieving robust performance across different conditions in speech translation. In this paper, we study the problem of adapting a general-domain, written-text-style machine translation system to a travel-domain speech translation task. We investigate a variety of domain adaptation techniques, including data selection and the incorporation of multiple translation models, within a unified decoding process. The experimental results demonstrate significant BLEU score improvements on the target scenario after domain adaptation. They also show that joint data selection and model combination achieve robust translation performance across multiple conditions. Finally, we analyze and compare the robust techniques developed for speech recognition and speech translation, and point out further directions for robust translation via variability-adaptive and discriminatively-adaptive learning.
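As a concrete illustration of the data-selection step, a common approach is cross-entropy-difference selection in the style of Moore and Lewis: score each general-domain sentence by how much more likely it is under an in-domain language model than under a general-domain one, and keep the best-scoring sentences. The minimal sketch below uses smoothed unigram language models and toy corpora; the corpora, the smoothing constant, and the unigram modeling choice are illustrative assumptions, not the actual setup used in the paper.

```python
# Sketch of cross-entropy-difference data selection (Moore-Lewis style).
# Unigram LMs with add-alpha smoothing stand in for the real LMs;
# the corpora below are toy examples, not the paper's data.
from collections import Counter
import math

def train_unigram(corpus, alpha=0.1):
    """Return a log-probability function for an add-alpha smoothed unigram LM."""
    counts = Counter(tok for sent in corpus for tok in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen tokens
    def logprob(tok):
        return math.log((counts[tok] + alpha) / (total + alpha * vocab))
    return logprob

def cross_entropy(sent, logprob):
    """Per-token negative log-likelihood of a sentence under the LM."""
    toks = sent.split()
    return -sum(logprob(t) for t in toks) / max(len(toks), 1)

def select(candidates, in_domain, general, k):
    """Keep the k candidates that look most in-domain:
    lower H_in(s) - H_gen(s) means more in-domain-like."""
    lp_in = train_unigram(in_domain)
    lp_gen = train_unigram(general)
    scored = sorted(
        candidates,
        key=lambda s: cross_entropy(s, lp_in) - cross_entropy(s, lp_gen),
    )
    return scored[:k]

# Toy corpora: travel-domain speech vs. general written text.
in_domain = ["where is the train station", "how much is a ticket to kyoto"]
general = ["the committee approved the annual budget",
           "markets rallied on strong earnings"]
candidates = ["is there a bus to the airport",
              "the board discussed quarterly earnings"]

print(select(candidates, in_domain, general, 1))
# → ['is there a bus to the airport']
```

In practice the same scoring scheme is applied with full n-gram or neural language models, and the selected subset is used to train or interpolate the domain-adapted translation model.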