This report details the work undertaken this year towards class-specific segmentation. The aim is to take an image known to contain an object of a particular class and return a figure-ground segmentation label for each pixel. A training corpus of images and their ground-truth segmentation masks is used to learn shape and appearance models. Our shape model consists of local shape patches, learned with a new translationally invariant clustering algorithm, together with learned adjacency statistics that enforce consistency between neighbouring patches. Our appearance model is a database of image patches. Given a novel test image, hypotheses of the underlying shape and appearance are constructed, and a final belief propagation step enforces global consistency between them.
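The final stage described above is belief propagation over the pixel labelling. As an illustration only, the sketch below runs sum-product loopy belief propagation for binary figure-ground labels on a 4-connected pixel grid. The function name `bp_figure_ground`, the Potts-style smoothness term, and the synthetic unary potentials are all assumptions for the sketch, not the report's actual shape and appearance models:

```python
import numpy as np

def shift(a, dy, dx):
    """Return b with b[y, x] = a[y + dy, x + dx]; out-of-range cells become 1
    (a uniform, uninformative message at the image border)."""
    out = np.ones_like(a)
    H, W = a.shape[:2]
    ty0, ty1 = max(0, -dy), H - max(0, dy)
    tx0, tx1 = max(0, -dx), W - max(0, dx)
    out[ty0:ty1, tx0:tx1] = a[ty0 + dy:ty1 + dy, tx0 + dx:tx1 + dx]
    return out

def bp_figure_ground(unary, smooth=0.8, iters=10):
    """Sum-product loopy BP for binary figure-ground labelling on a grid.

    unary  : (H, W, 2) per-pixel likelihoods for (ground, figure) -- a stand-in
             for appearance hypotheses; here it is just an input array.
    smooth : Potts compatibility favouring equal labels at neighbouring pixels.
    Returns the (H, W) argmax labelling of the final beliefs.
    """
    H, W, L = unary.shape
    pair = np.full((L, L), 1.0 - smooth)
    np.fill_diagonal(pair, smooth)
    # offs[d] is the offset of the neighbour a pixel receives message d from;
    # d ^ 1 gives the opposite direction (above<->below, left<->right).
    offs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    msgs = np.ones((4, H, W, L))
    eps = 1e-12
    for _ in range(iters):
        bel = unary * msgs.prod(axis=0)          # unnormalised beliefs
        new = np.empty_like(msgs)
        for d, (dy, dx) in enumerate(offs):
            # sender's belief, excluding the message it got from the receiver
            out = bel / np.clip(msgs[d ^ 1], eps, None)
            m = out @ pair                       # marginalise sender's label
            m /= np.clip(m.sum(-1, keepdims=True), eps, None)
            new[d] = shift(m, dy, dx)            # deliver to the receiving pixel
        msgs = new
    bel = unary * msgs.prod(axis=0)
    return bel.argmax(-1)
```

With a strong figure block in the unaries and one noisy pixel inside it, the smoothness messages from the pixel's four neighbours outweigh its unary preference and the noisy pixel is relabelled as figure, which is the kind of global consistency the propagation stage is meant to supply.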