Commit 39ec243 (parent: d0fe24e) by kawine: Update README.md. Files changed: README.md (+10 -8)
Most notably, all the data in SHP is human-written, whereas the responses in HH-…

How is SHP different from other datasets that have scraped reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
Most notably, SHP exploits the timestamp hack to infer preferences, rather than only providing the comments.
It also contains data from more domains:

| Dataset | Size | Comments + Scores | Preferences | Number of Domains |
| -------------------- | ---- | ----------------- | ----------- | ----------------- |
The data is sourced from Reddit, which is a public forum organized into topic-specific fora called *subreddits*.
For example, the `askculinary` subreddit is where users ask cooking-related questions and are answered by other users.
The score of a post/comment is 1 plus the number of upvotes it gets from users, minus the number of downvotes it gets.
The value of a score is relative; in subreddits (posts) with more traffic, there will be more high-scoring posts (comments).
Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure.
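In code, the scoring rule above is a one-liner (the helper name is ours):

```python
def reddit_score(upvotes: int, downvotes: int) -> int:
    # A post/comment starts at 1 (the author's own upvote),
    # then gains one point per upvote and loses one per downvote.
    return 1 + upvotes - downvotes

print(reddit_score(upvotes=10, downvotes=3))  # 8
```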
 
We started with the top-scoring 1000 posts (of all time) and searched for the 25…

### Preprocessing

We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., "CMV" to "Change my view that").
In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
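A minimal sketch of the hyperlink rule, assuming markdown-style `[text](URL)` links (the regex and function name are ours; the README does not specify the link format):

```python
import re

# Matches markdown-style links: [referring text](URL)
MD_LINK = re.compile(r"\[([^\]]+)\]\([^)]+\)")

def strip_hyperlinks(text: str) -> str:
    """Keep only the referring text of each link; URLs that are
    written out as plain text are left untouched."""
    return MD_LINK.sub(r"\1", text)

print(strip_hyperlinks("See [this guide](https://example.com) or https://example.org"))
# → See this guide or https://example.org
```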
If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:

1. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned them on inputs over 512 tokens.
To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however).
If this is still over 512 tokens, simply skip the example.
2. **Use a sufficiently large model.**
Finetuning a single FLAN-T5-xl model across all the training data should give you a test accuracy between 72-73% (across all domains, on examples where the entire input fits within the token limit), ranging from 65-80% on individual subreddits.
3. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you finetune on `askculinary` preferences and test on `askcarguys` preferences).
4. **Train for fewer epochs.** The [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests training a reward model for only 1 epoch.
Since the same comment appears in multiple preferences, it is easy to overfit to the data.
5. **Training on less data may help.**
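The truncate-or-skip preprocessing rule above can be sketched as follows (the helper and the whitespace stand-in tokenizer are ours; in practice, count tokens with the actual model tokenizer):

```python
MAX_TOKENS = 512

def fit_example(history, comments, count_tokens):
    """Truncate the post text (`history`) so that the whole input fits
    within MAX_TOKENS; never truncate the comments. Returns None
    (i.e., skip the example) if the comments alone exceed the budget."""
    budget = MAX_TOKENS - count_tokens(comments)
    if budget <= 0:
        return None  # even an empty history cannot fit: skip
    words = history.split()
    while words and count_tokens(" ".join(words)) > budget:
        words.pop()  # drop trailing words from the post text
    return (" ".join(words) + " " + comments).strip()

# Stand-in: one token per whitespace-separated word.
count = lambda s: len(s.split())
example = fit_example("word " * 600, "the top comment", count)
```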

The orange line is from finetuning only on preferences with a 2+ score ratio and …

![Graph](curve.png)

We see that finetuning on less -- but higher-quality -- data leads to higher accuracies on test data with a score ratio below 3.5, with no real downsides!
Note that any examples whose inputs did not fit within the token limit were left out of the experiment, since the model could not be expected to handle them.

### SteamSHP - An Open-Source Preference Model

We encourage you to use it for NLG evaluation, for building reward models for RL…

Although we filtered out posts with NSFW (over 18) content and chose subreddits that were well-moderated and had policies against harassment and bigotry, some of the data may contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Reddit users on these subreddits are also not representative of the broader population.
They are disproportionately from developed, Western, and English-speaking countries.
One should keep that in mind before using any models trained on this data.

It is also worth noting that the comment more preferred by Redditors is not necessarily the more correct one, and though some comments do provide citations to justify their response, most do not.
There are exceptions to this, such as the `askhistorians` subreddit, which is heavily moderated and where answers are expected to provide citations.