arxiv:2310.12036

A General Theoretical Paradigm to Understand Learning from Human Preferences

Published on Oct 18, 2023

Abstract

The prevalent deployment of learning from human preferences through reinforcement learning (RLHF) relies on two important approximations: the first assumes that pairwise preferences can be substituted with pointwise rewards; the second assumes that a reward model trained on these pointwise rewards can generalize from collected data to out-of-distribution data sampled by the policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an approach that bypasses the second approximation and learns a policy directly from collected data without the reward modelling stage. However, this method still heavily relies on the first approximation. In this paper we try to gain a deeper theoretical understanding of these practical algorithms. In particular, we derive a new general objective called ΨPO for learning from human preferences that is expressed in terms of pairwise preferences and therefore bypasses both approximations. This new general objective allows us to perform an in-depth analysis of the behavior of RLHF and DPO (as special cases of ΨPO) and to identify their potential pitfalls. We then consider another special case of ΨPO by setting Ψ simply to the Identity, for which we can derive an efficient optimisation procedure, prove performance guarantees and demonstrate its empirical superiority to DPO on some illustrative examples.
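As a quick reference for the discussion below: the general objective referred to in the abstract can be sketched as maximising a Ψ-transformed preference probability against the behaviour policy μ, regularised towards a reference policy. The notation here (τ for the regularisation strength, π_ref for the reference policy) follows my reading of the paper and may differ slightly from its exact symbols.

```latex
% ΨPO objective (sketch): maximise Ψ-transformed preferences, with KL regularisation to π_ref
\max_{\pi} \;
\mathbb{E}_{x \sim \rho,\; y \sim \pi(\cdot \mid x),\; y' \sim \mu(\cdot \mid x)}
\big[\, \Psi\big(p^*(y \succ y' \mid x)\big) \,\big]
\;-\; \tau \, D_{\mathrm{KL}}\!\big(\pi \,\|\, \pi_{\mathrm{ref}}\big)
```

If I read the paper correctly, RLHF and DPO correspond to choosing Ψ(q) = log(q / (1 − q)), while taking Ψ to be the identity gives the IPO objective discussed further down this thread.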

Community

In the paper, the authors point out that DPO suffers from overfitting due to

the strong assumption that pairwise preferences can be substituted with Elo-score (pointwise rewards) via a Bradley-Terry (BT) modelisation (Bradley and Terry, 1952). In particular, this assumption could be problematic when the (sampled) preferences are deterministic or nearly deterministic, as it leads to over-fitting to the preference dataset at the expense of ignoring the KL-regularisation term (see Sec. 4.2).
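For context, a sketch of the modelling step being discussed, written in the standard DPO notation (β for the KL temperature, π_ref for the reference policy, D for the preference dataset); the paper's exact symbols may differ. Under the Bradley-Terry assumption, a pairwise preference is reduced to a difference of pointwise rewards, and DPO minimises the resulting logistic loss with the reward reparameterised through the policy:

```latex
% Bradley-Terry model: pairwise preference from pointwise rewards
p(y \succ y' \mid x) \;=\; \sigma\!\big(r(x, y) - r(x, y')\big)

% DPO loss (standard form), with the reward expressed through the policy log-ratios
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta)
  \;=\; -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
  \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      \;-\;
      \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

When the empirical preference for a pair is (nearly) deterministic, this loss keeps decreasing only as the implied reward gap grows without bound, regardless of β, which is the regime where the KL regularisation is effectively ignored (the failure mode discussed in Sec. 4.2).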

Can someone explain what it means to sample preferences in a "deterministic" manner? I'd be interested in understanding if this is an academic concern or something that we expect to occur when working with empirical datasets like UltraFeedback, Anthropic HHH etc.

Maybe when the (empirical) pairwise probability p(y ≻ y′) is either 0 or 1 (cf. Sec. 4.2)

I think it's because they model preferences as a distribution: each time you "sample" a preference for a pair, there is some probability that the chosen and rejected responses swap. In practice, though, one often works with a fixed dataset, so the choice of chosen and rejected is deterministic.
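A minimal numerical sketch of that distinction (hypothetical numbers, not taken from any real dataset): if labels were re-drawn from the true preference probability every time, a pair with p = 0.75 would sometimes flip which response is "chosen"; a fixed dataset instead freezes a single outcome per pair, so the empirical preference the loss sees is 0 or 1.

```python
import random

random.seed(0)

# True (unknown) preference probability that y beats y' for a single pair.
p_true = 0.75  # hypothetical value, for illustration only

# Stochastic view: each annotation is a fresh Bernoulli(p_true) draw,
# so "chosen" and "rejected" can swap between draws.
samples = [random.random() < p_true for _ in range(1000)]
empirical_p = sum(samples) / len(samples)
print(f"re-sampled labels -> empirical preference ~ {empirical_p:.2f}")

# Fixed-dataset view: the pair was annotated once and the outcome is frozen,
# so every epoch sees the same winner (empirical preference is exactly 0 or 1).
frozen_label = True  # y was recorded as "chosen" that one time
empirical_p_fixed = 1.0 if frozen_label else 0.0
print(f"fixed dataset     -> empirical preference = {empirical_p_fixed:.1f}")
```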

In subsubsection 5.3.1 (Asymptotic Setting) of the paper, the authors give an example of how IPO can mitigate overfitting:

Now, let us derive the optimal policy for IPO. We have p∗(y1 ≻ μ) = 3/4 and p∗(y2 ≻ μ) = 1/4.

Can someone explain what μ means here? I am quite confused about what μ refers to throughout the paper; its first appearance reads:

We denote μ ∈ ∆^Y_X the behavior policy. From a given context x, let y, y′ ∼ μ(x) be two actions generated independently by the reference policy.

That seems a little unclear.
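One way to read it (my interpretation of Sec. 5.3.1, so treat the specific setup as an assumption): μ is the behaviour policy that generated the comparison data, and p*(y ≻ μ) is the expected preference of y over a response drawn from μ. With μ uniform over {y1, y2}, y1 deterministically preferred to y2, and a self-comparison counted as 1/2, the 3/4 and 1/4 in the quoted passage fall out directly:

```latex
% Preference of an action y against the behaviour policy μ (expectation over y' ~ μ)
p^*(y \succ \mu) \;=\; \mathbb{E}_{y' \sim \mu}\big[\, p^*(y \succ y') \,\big]

% With μ uniform over {y_1, y_2}, p^*(y_1 \succ y_2) = 1 and p^*(y \succ y) = 1/2:
p^*(y_1 \succ \mu) \;=\; \tfrac{1}{2}\cdot\tfrac{1}{2} \;+\; \tfrac{1}{2}\cdot 1 \;=\; \tfrac{3}{4},
\qquad
p^*(y_2 \succ \mu) \;=\; \tfrac{1}{2}\cdot\tfrac{1}{2} \;+\; \tfrac{1}{2}\cdot 0 \;=\; \tfrac{1}{4}
```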
