Provably Robust DPO: Aligning Language Models with Noisy Feedback

ICML 2024

Learning from preference-based feedback has recently gained traction as a promising approach to align language models with human interests. While these aligned generative models have demonstrated impressive capabilities across various tasks, their dependence on high-quality human preference data poses a bottleneck in practical applications. Specifically, noisy (incorrect and ambiguous) preference pairs in the dataset might restrict the language models from capturing human intent accurately. While practitioners have recently proposed heuristics to mitigate the effect of noisy preferences, a complete theoretical understanding of their workings remains elusive. In this work, we aim to bridge this gap by introducing a general framework for policy optimization in the presence of random preference flips. We focus on the direct preference optimization (DPO) algorithm in particular, since it assumes that preferences adhere to the Bradley-Terry-Luce (BTL) model, raising concerns about the impact of noisy data on the learned policy. We design a novel loss function that de-biases the effect of noise on average, so that a policy trained by minimizing this loss is robust to the noise. We show that it is provably tolerant to noise and characterize its sub-optimality gap as a function of the noise rate, the dimension of the policy parameter, and the sample size. Our experiments on IMDb sentiment generation and Anthropic's helpful-harmless dataset show that our approach is robust to noise in preference labels compared to vanilla DPO and other heuristics proposed by practitioners.
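As a rough illustration of the de-biasing idea, the following sketch applies the standard unbiased-loss construction from learning with noisy labels to the DPO logistic loss. It assumes preference labels are flipped i.i.d. with a known rate `eps < 1/2`; the function names and the `margin` abstraction (the scaled reward difference between the preferred and dispreferred response) are illustrative, not the paper's exact interface.

```python
import math

def logsigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def dpo_loss(margin):
    """Vanilla DPO loss under the BTL model.

    `margin` = beta * (implicit reward of the preferred response
                       - implicit reward of the dispreferred one),
    computed from policy/reference log-probability ratios.
    """
    return -logsigmoid(margin)

def robust_dpo_loss(margin, eps):
    """De-biased loss for labels flipped i.i.d. with known rate eps < 1/2.

    Taking the expectation over random flips recovers the clean DPO loss,
    so minimizing this loss de-biases the noise on average.
    """
    assert 0.0 <= eps < 0.5
    clean = -logsigmoid(margin)      # loss if the observed label is correct
    flipped = -logsigmoid(-margin)   # loss if the observed label was flipped
    return ((1 - eps) * clean - eps * flipped) / (1 - 2 * eps)
```

One can verify the unbiasedness algebraically: with probability `1 - eps` the observed margin is `m` and with probability `eps` it is `-m`, and the mixture `(1 - eps) * robust_dpo_loss(m, eps) + eps * robust_dpo_loss(-m, eps)` collapses to `dpo_loss(m)` exactly.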