kawine committed on
Commit
630628f
·
1 Parent(s): d352c5b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -178,7 +178,7 @@ If you want to finetune a model to predict human preferences (e.g., for NLG eval
 2. **Use a sufficiently large model.**
 Finetuning a single FLAN-T5-xl model across all the training data should give you a test accuracy between 72-73% (across all domains on examples where the entire input fits within the token limit), ranging from 65-80% on individual subreddits.
 3. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
-4. **Train for fewer epochs.** The [InstructGPT paper](https://arxiv.org/abs/2203.02155) paper suggests training a reward model for only 1 epoch.
+4. **Train for fewer epochs.** The InstructGPT paper paper suggests training a reward model for only 1 epoch.
 Since the same comment appears in multiple preferences, it is easy to overfit to the data.
 5. **Training on less data may help.**
 Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
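The `score_ratio` filtering advice in tip 5 can be sketched in a few lines. This is a minimal sketch, assuming each preference is a plain dict carrying a `score_ratio` field as described in the README; the helper name, the extra fields shown, and the 2.0 threshold are illustrative, not part of the dataset's API.

```python
# Sketch: keep only preferences with a strong preference signal.
# Field names (`post_id`, `score_ratio`) follow the README's description;
# the threshold of 2.0 (comment A having 2x the score of comment B) is
# an illustrative choice, not a recommended value.

def filter_by_score_ratio(examples, min_ratio=2.0):
    """Keep preferences where the preferred comment's score is at least
    `min_ratio` times the other comment's score."""
    return [ex for ex in examples if ex["score_ratio"] >= min_ratio]

preferences = [
    {"post_id": "a1", "score_ratio": 4.0},   # strong signal, kept
    {"post_id": "b2", "score_ratio": 1.25},  # weak signal, dropped
    {"post_id": "c3", "score_ratio": 2.5},   # strong signal, kept
]

strong = filter_by_score_ratio(preferences, min_ratio=2.0)
print([ex["post_id"] for ex in strong])  # -> ['a1', 'c3']
```

The same predicate can be passed to a dataset-level filter if the preferences are loaded through a dataframe or dataset library instead of a plain list.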