Update README.md
README.md
CHANGED
@@ -37,7 +37,7 @@ Most notably, SHP exploits the timestamp hack to infer preferences, rather than
 | Dataset | Size | Comments + Scores | Preferences | Number of Domains |
 | -------------------- | ---- | ------------------ | ------------- | ------------------ |
 | SHP | 385K | Yes | Yes | 18 |
-| ELI5 |
+| ELI5 | 270K | Yes | No | 3 |


 ## Data Structure
@@ -91,7 +91,7 @@ where the fields are:
 - ```human_ref_B```: text of comment B (string)
 - ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
 - ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be >= 0) (integer)
-- ```score_ratio```: the ratio
+- ```score_ratio```: the ratio of the more preferred comment's score to the less preferred comment's score (will be >= 1) (float)


 ## Dataset Design
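To make the field descriptions above concrete, here is a minimal sketch of loading the data and reading those fields. It assumes the dataset is pulled from the Hugging Face Hub under `stanfordnlp/SHP` and that the other comment's text lives in `human_ref_A` (implied by the label definition above but not shown in this excerpt); the `score_ratio >= 2` filter at the end is only an illustrative confidence heuristic, not part of the dataset specification.

```python
# Minimal sketch: load SHP and read the preference fields described above.
# Assumes the data is hosted on the Hugging Face Hub as "stanfordnlp/SHP";
# adjust the path/split to however you obtained the data.
from datasets import load_dataset

shp = load_dataset("stanfordnlp/SHP", split="train")

example = shp[0]
# labels == 1 means comment A is preferred to comment B, per the field docs above.
preferred = example["human_ref_A"] if example["labels"] == 1 else example["human_ref_B"]
print(f"label={example['labels']}  score_ratio={example['score_ratio']:.2f}  "
      f"seconds_difference={example['seconds_difference']}")
print("preferred comment:", preferred[:200])

# Illustrative heuristic (not part of the dataset spec): keep only pairs where the
# preferred comment has at least twice the score of the less preferred one.
confident = shp.filter(lambda x: x["score_ratio"] >= 2.0)
print(f"{len(confident)} of {len(shp)} training pairs have score_ratio >= 2")
```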
@@ -163,7 +163,9 @@ In hyperlinks, only the referring text was kept and the URL was removed (if the

 If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:

-1. **Use a sufficiently large model.**
+1. **Use a sufficiently large model.**
+   Finetuning a single FLAN-T5-xl model across all the training data should give you a test accuracy between 72-73% (across all domains), ranging from 65-80% on individual subreddits.
+   Finetuning a single model on just a single domain will give you better performance on that domain.
 2. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
 3. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
    Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned it on inputs over 512 tokens.
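As a companion to tip 3, the sketch below shows one way to keep the combined post-plus-comments input under the 512-token limit before finetuning. The `POST`/`RESPONSE A`/`RESPONSE B` prompt template and the use of the `google/flan-t5-xl` tokenizer are illustrative assumptions, not the exact format prescribed by the SHP authors.

```python
# Illustrative preprocessing for tip 3: truncate the full input to 512 tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
MAX_TOKENS = 512  # the token limit mentioned in tip 3

def build_input(post: str, ref_a: str, ref_b: str) -> str:
    """Build one preference-prediction input and cap it at MAX_TOKENS."""
    # Hypothetical template; swap in whatever prompt format you finetune with.
    text = (
        f"POST: {post}\n\n"
        f"RESPONSE A: {ref_a}\n\n"
        f"RESPONSE B: {ref_b}\n\n"
        "Which response is better? RESPONSE"
    )
    ids = tokenizer(text, truncation=True, max_length=MAX_TOKENS)["input_ids"]
    # Naive tail truncation can cut off RESPONSE B entirely; trimming the post
    # first is often a better choice for long threads.
    # Decode back to text (dropping the T5 special tokens) so any downstream
    # pipeline can re-tokenize the capped input.
    return tokenizer.decode(ids, skip_special_tokens=True)
```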