Here's an example from `askculinary/train.json`, where the fields are:

- ```post_id```: the ID of the Reddit post (string)
- ```domain```: the subreddit and split the example is drawn from, separated by an underscore (string)
- ```upvote_ratio```: the percent of votes received by the post that were positive (aka upvotes) (float)
- ```history```: the post title concatenated to the post body (string)
- ```c_root_id_A```: the ID of comment A (string)
- ```c_root_id_B```: the ID of comment B (string)
- ```created_at_utc_A```: the UTC timestamp of when comment A was created (integer)
- ```created_at_utc_B```: the UTC timestamp of when comment B was created (integer)
- ```score_A```: (# positive votes - # negative votes + 1) received by comment A (integer)
- ```score_B```: (# positive votes - # negative votes + 1) received by comment B (integer)
- ```human_ref_A```: the text of comment A (string)
- ```human_ref_B```: the text of comment B (string)
- ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
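To make the schema concrete, here is a minimal sketch of working with one record; the field values below are hypothetical illustrations, not real data:

```python
# A hypothetical SHP record with the fields described above (not real data).
example = {
    "post_id": "abc123",
    "domain": "askculinary_train",
    "upvote_ratio": 0.98,
    "history": "How do I keep my knives sharp? I cook daily and want an easy routine.",
    "c_root_id_A": "c1",
    "c_root_id_B": "c2",
    "created_at_utc_A": 1610000000,
    "created_at_utc_B": 1610000500,
    "score_A": 120,
    "score_B": 15,
    "human_ref_A": "Hone before every use and sharpen monthly.",
    "human_ref_B": "Just buy new knives.",
    "labels": 1,  # 1 => A is preferred to B; 0 => B is preferred to A
}

def preferred_comment(ex):
    """Return the text of the human-preferred comment, per the `labels` field."""
    return ex["human_ref_A"] if ex["labels"] == 1 else ex["human_ref_B"]

print(preferred_comment(example))
```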
If you want to finetune a model to predict human preferences (e.g., for NLG evaluation), here are some tips:

1. **Use a sufficiently large model.** Finetuning a single FLAN-T5-xl model across all the training data should give you a test accuracy between 72-73% (across all domains), ranging from 65-80% on individual subreddits.
2. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you finetune on `askculinary` preferences and test on `askcarguys` preferences).
3. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens). Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned them on inputs over 512 tokens.
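One simple way to follow the preprocessing tip is to truncate the post `history` first, since the two comments carry the label signal. The sketch below uses whitespace splitting as a rough proxy; in real preprocessing you should count tokens with the model's actual tokenizer.

```python
# Rough sketch: trim `history` so history + both comments fit within a token budget.
# Whitespace splitting is only a proxy for the model tokenizer's token count.
def truncate_example(history, ref_a, ref_b, max_tokens=512):
    budget = max_tokens - len(ref_a.split()) - len(ref_b.split())
    words = history.split()
    return " ".join(words[:max(budget, 0)]), ref_a, ref_b

hist, a, b = truncate_example("word " * 1000, "short comment A", "short comment B")
total = len(hist.split()) + len(a.split()) + len(b.split())
print(total)  # 512: exactly at the budget
```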
The orange line is from finetuning only on preferences with a 2+ score ratio and using no more than 5 preferences from each post.
We see that finetuning on less -- but higher-quality -- data leads to higher accuracies on test data with a score ratio below 3.5, with no real downsides!
### SteamSHP - An Open-Source Preference Model

We have finetuned a 3 billion-parameter FLAN-T5-XL model called [SteamSHP](https://huggingface.co/stanfordnlp/SteamSHP-preference-model) on both the SHP dataset and the helpfulness data from Anthropic's HH-RLHF.
It achieves 72.8% accuracy across all domains (including 73.1% on the HH-RLHF data alone).
We encourage you to use it for NLG evaluation, for building reward models for RLHF, or for any other purpose you deem fit!
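As a concrete illustration of the accuracy numbers quoted above, pairwise preference accuracy is just the fraction of comment pairs where the model's predicted winner matches the human label. A minimal sketch (the `predictions` and `labels` lists here are hypothetical):

```python
# Pairwise preference accuracy: fraction of pairs where the predicted
# preferred comment (1 => A, 0 => B) matches the human label.
def pairwise_accuracy(predictions, labels):
    assert len(predictions) == len(labels) and labels
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

print(pairwise_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```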
## Biases and Limitations

Although we filtered out posts with NSFW (over 18) content and chose subreddits that were well-moderated and had policies against harassment and bigotry, some of the data may still contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Reddit users on these subreddits are also not representative of the broader population.
They are disproportionately from developed, Western, and English-speaking countries.
One should keep that in mind before using any models trained on this data.

It is also worth noting that the comment more preferred by Redditors is not necessarily the more correct one; though some comments do provide citations to justify their response, most do not.
There are exceptions, such as the `askhistorians` subreddit, which is heavily moderated and where answers are expected to include citations.

As always, remember to evaluate!