We show how interpolants can be viewed as classifiers in supervised machine learning, and the advantages this view provides. First, we can use off-the-shelf classification techniques, in particular support vector machines, to compute interpolants. Second, since classification algorithms are predictive, interpolants computed via classification are likely to generalize well, yielding useful predicates or invariants; we find such relevant predicates for a number of benchmarks. Finally, the machine learning view also enables us to handle superficial non-linearities: even when the underlying problem structure is linear, the constraints may give the impression that we are solving a non-linear problem. Because learning algorithms mine the underlying structure directly, we can recover the linear structure of such problems, which also demonstrates the wider applicability of our technique. We demonstrate the feasibility of our approach through experiments on benchmarks drawn from several papers on program verification.
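To illustrate the classification view, the following is a minimal sketch (not the paper's implementation, and using a simple perceptron in place of a support vector machine): we sample states from two formulas A and B and learn a linear separator between the samples. The resulting half-space w·x + b > 0 is then a candidate interpolant. The sample points and formulas here are hypothetical, chosen only for illustration.

```python
# Sketch: learn a linear separator between samples of two formulas;
# the separating half-space serves as a candidate interpolant.

def perceptron(pos, neg, epochs=100):
    # Standard perceptron in 2D: find (w, b) with w.p + b > 0 for every
    # positive sample and w.n + b <= 0 failing, i.e. < 0, for negatives.
    # Assumes the samples are linearly separable.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        updated = False
        for x, y in [(p, 1) for p in pos] + [(n, -1) for n in neg]:
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:
                # Misclassified: nudge the hyperplane toward this sample.
                w = [w[0] + y * x[0], w[1] + y * x[1]]
                b += y
                updated = True
        if not updated:
            break  # all samples correctly separated
    return w, b

# Hypothetical samples: A-states satisfy x >= 2, B-states satisfy x <= 0.
a_pts = [(2, 0), (3, 1), (4, -1)]
b_pts = [(0, 0), (-1, 2), (-2, -1)]
w, b = perceptron(a_pts, b_pts)

def candidate_interpolant(p):
    # The learned half-space, read back as a predicate over program states.
    return w[0] * p[0] + w[1] * p[1] + b > 0

# The candidate separates all A-samples from all B-samples.
assert all(candidate_interpolant(p) for p in a_pts)
assert not any(candidate_interpolant(p) for p in b_pts)
```

In the approach described above, an SVM would be used instead, since its maximum-margin separator tends to generalize better from few samples; the learned predicate would then be checked (and refined with counterexample samples) rather than trusted outright.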