We propose a discriminative approach for automatically training chatbots to provide relevant and interesting responses. In contrast to most prior work, our approach relies on machine learning rather than hard-wired response rules. We set ourselves the task of ranking a repository of responses to find the most suitable one; this work is a first step towards the more general goal of then modifying the selected response into a more appropriate one. We use a large corpus of public Twitter and LiveJournal conversations as training data for the learning task. Given new input from a user, an appropriate response is selected from this repository in three phases. First, a fast filtering step removes most irrelevant sentences. Second, a boosted-tree ranker, using features that are very efficient to compute, further shrinks the set of candidate responses. Finally, a more precise content-oriented ranking framework outputs the final response. In addition to our offline repository of dialogs, we also exploit a smaller repository of human-generated, labeled instances. These data are collected through a web application in which human users interact with the system and provide suggestions and feedback regarding the responses. The response selection is based mainly on content-oriented features and uses the Winnow multiplicative-weight online learning algorithm. Treating the large corpus of noisy offline Twitter and LiveJournal data as a source domain, and the moderate repository of less noisy, labeled online conversations as a target domain, we use a transfer learning method (Transfer AdaBoost) to build a classifier that outputs the final response. We report both qualitative and quantitative results on the performance of the various approaches used in the project.
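The Winnow learner mentioned above can be sketched as follows. This is a minimal, generic illustration of the multiplicative-weight update, not the paper's actual implementation; the binary feature vectors and the threshold convention (threshold equal to the number of features) are assumptions for the sketch, and the content-oriented features used in the real system are not reproduced here.

```python
def winnow_train(examples, n_features, alpha=2.0):
    """Train a Winnow classifier.

    examples: list of (x, y) pairs, where x is a 0/1 feature vector
    and y is the label in {0, 1}. Returns the learned weight vector.
    """
    w = [1.0] * n_features
    theta = float(n_features)  # common choice of threshold for Winnow

    for x, y in examples:
        score = sum(wi * xi for wi, xi in zip(w, x))
        pred = 1 if score >= theta else 0
        if pred != y:  # multiplicative update only on mistakes
            if y == 1:
                # false negative: promote weights of active features
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:
                # false positive: demote weights of active features
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w
```

Because updates are multiplicative and triggered only by mistakes, Winnow adapts quickly when many features are irrelevant, which suits online learning from user feedback.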