Abstract

Conditional Random Fields (CRFs) are popular models in computer vision
for solving labeling problems such as image denoising. This paper tackles
the rarely addressed but important problem of learning the full form of
the potential functions of pairwise CRFs. We examine two popular learning
techniques, maximum likelihood estimation and maximum-margin training.
The main focus of the paper is on models, such as pairwise CRFs, that are
simplistic (misspecified) and do not fit the data well. We empirically
demonstrate that for misspecified models maximum-margin training with MAP
prediction is superior to maximum likelihood estimation with any other
prediction method. Additionally, we examine the common belief that MLE is
better at producing predictions matching image statistics.
