kawine committed on
Commit b69ea02 · 1 Parent(s): acbac73

Update README.md
Files changed (1): README.md (+21, -19)
README.md CHANGED
@@ -16,7 +16,7 @@ language:
 
 ## Summary
 
- SHP is a dataset of **385K aggregate human preferences** over Reddit comments in 18 different subject areas, from cooking to legal advice.
 It is primarily intended to be used for training reward models for RLHF and automatic evaluation models for NLG.
 
 Each example is a Reddit post and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (in aggregate).
@@ -25,10 +25,10 @@ If A had been written before B, then we could not conclude this, since its highe
 
 How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)?
 
- | Dataset | Input | Output | No. Domains | Data Format |
- | -------------------- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- |
- | SHP | Reddit post and comments | Aggregate Preference Label with Scores | 18 (cooking, cars, ...) | Question/Answer + Assertion/Response |
- | Anthropic/HH-RLHF | Dialogue history with LLM | Individual Preference Label | 2 (harmful, helpful) | Multi-turn Dialogue |
 
 
 ## Data Structure
@@ -89,7 +89,6 @@ where the fields are:
 
 The data is sourced from Reddit, which is a public forum organized into topic-specific fora called *subreddits*.
 For example, the `askculinary` subreddit is where users ask cooking-related questions and are answered by other users.
-
 The score of a post/comment is the number of upvotes it gets from users, minus the number of downvotes it gets.
 The value of a score is relative; in subreddits(posts) with more traffic, there will be more higher-scoring posts(comments).
 Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure.
@@ -97,11 +96,9 @@ Within a post, comments posted earlier will tend to have a higher score simply d
 
 ### Subreddit Selection
 
- This may be due to the aggregate human preferences in SHP being more stable easier to predict than the individual human preferences in the Anthropic data, as well as our strict data filtering described above.
-
 SHP contains a train, validation, and test split for comments scraped from 18 different subreddits. We chose subreddits based on:
 1. whether they were well-known (subscriber count >= 50K)
- 2. whether they were actively moderated
 3. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)
 
 The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
@@ -137,10 +134,13 @@ Given a post P and two comments (A,B) we only included the preference A > B in t
 3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
 4. The post has a score >= 10 and each comment has a score >= 2 (upvoted at least once).
 
 Reddit makes it very difficult to get anything beyond the top 1000 posts for each subreddit.
- We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using Reddit's search function.
- By doing this recursively, we scraped up to 7500 unique post IDs for each subreddit and then used the AsyncPRAW API to scrape the top 50 comments from each post.
- We limited the scraping to 50 comments per post because the number of comments per post is Pareto-distributed, and we did not want a relatively small number of posts dominating the data.
 
 ### Preprocessing
 
@@ -154,24 +154,27 @@ In hyperlinks, only the referring text was kept and the URL was removed (if the
 
 If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:
 
- 1. **Use a sufficiently large model.** With FLAN-T5-xl, you can get 65-85% percent accuracies depending on the subreddit.
 2. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
 3. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
- Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned it on the entire input.
 To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s) however).
 If this is still over 512 tokens, simply skip the example.
 4. **Train for fewer epochs.** The [InstructGPT paper](https://arxiv.org/abs/2203.02155) paper suggests training a reward model for only 1 epoch.
 Since the same comment appears in multiple preferences, it is easy to overfit to the data.
 5. **Training on less data may help.**
- Preferences with a large score ratio (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
 The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post.
 
 ### Evaluating
 
- Since it is easier to predict stronger preferences than weaker ones (e.g., preferences with a big difference in comment score), we recommend reporting a performance curve instead of a single number.
- For example, here is the accuracy curve for a FLAN-T5-xl model trained on the askculinary ddata using the suggestions above.
 
- The orange line is without filtering the training data and the blue line is with training only on preferences with a 2+ score ratio and using no more than 5 preferences from each post to prevent overfitting:
 
@@ -182,7 +185,6 @@ The orange line is without filtering the training data and the blue line is with
 
 Although we filtered out posts with NSFW (over 18) content and chose an innocuous set of subreddits, some of the data may contain discriminatory or harmful language.
 The data does not reflect the views of the dataset creators.
- Please only engage with the data in accordance with your own personal risk tolerance.
 
 Reddit users on these subreddits are also not necessarily representative of the broader population, which one should keep in mind before using any models trained on this data.
 As always, remember to evaluate!
 
 ## Summary
 
+ SHP is a dataset of **385K aggregate human preferences** over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
 It is primarily intended to be used for training reward models for RLHF and automatic evaluation models for NLG.
 
 Each example is a Reddit post and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (in aggregate).
 
 How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)?
 
+ | Dataset | Input | Output | No. Domains | Data Format | Response Length |
+ | -------------------- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
+ | SHP | Reddit post and comments | Aggregate Preference Label with Scores | 18 (cooking, cars, etc.) | Question/Instruction + Response | Short + Long |
+ | Anthropic/HH-RLHF | Dialogue with LLM | Individual Preference Label | 2 (harmful, helpful) | Multi-turn Dialogue | Short |
 
 ## Data Structure
 
 The data is sourced from Reddit, which is a public forum organized into topic-specific fora called *subreddits*.
 For example, the `askculinary` subreddit is where users ask cooking-related questions and are answered by other users.
 The score of a post/comment is the number of upvotes it gets from users, minus the number of downvotes it gets.
 The value of a score is relative; in subreddits(posts) with more traffic, there will be more higher-scoring posts(comments).
 Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure.
 
 ### Subreddit Selection
 
 SHP contains a train, validation, and test split for comments scraped from 18 different subreddits. We chose subreddits based on:
 1. whether they were well-known (subscriber count >= 50K)
+ 2. whether posts were expected to pose a question or instruction that the top-level comments were meant to answer
 3. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)
 
 The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
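
As a rough illustration (not the exact script used to build SHP), a post-level 90%/5%/5% split along these lines could be constructed as follows, where `post_ids` is the list of post IDs scraped for one subreddit:

```python
import random

def split_post_ids(post_ids, seed=0):
    """Split post IDs 90/5/5 so that no post appears in more than one split."""
    ids = sorted(set(post_ids))
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.90 * n), int(0.05 * n)
    return {
        "train": ids[:n_train],
        "validation": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }
```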
 
 3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
 4. The post has a score >= 10 and each comment has a score >= 2 (upvoted at least once).
 
+ A post with `n` comments could have up to (`n` choose `2`) preferences in the data.
+ Since the number of comments per post is Pareto-distributed, to prevent a relatively small number of posts from dominating the data, we limited the scraping to 50 comments per post.
+ This means that each post could have at most (`50` choose `2`) = 1225 preferences in the dataset, though the actual number is much smaller in practice, since all of the criteria above need to be met.
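
To make the pairing concrete, here is a minimal sketch of how the (`n` choose `2`) candidate preferences could be enumerated for a single post; the dictionary fields (`score`, `created_utc`) and the exact filters are illustrative assumptions, not the actual SHP construction code:

```python
from itertools import combinations

def candidate_preferences(post, comments, max_comments=50):
    """Enumerate (preferred, dispreferred) comment pairs for one post."""
    if post["score"] < 10:
        return []
    # Keep comments that were upvoted at least once (score >= 2), capped at 50 per post.
    kept = [c for c in comments if c["score"] >= 2][:max_comments]
    pairs = []
    for a, b in combinations(kept, 2):
        # Order each pair so the higher-scoring comment comes first.
        hi, lo = (a, b) if a["score"] > b["score"] else (b, a)
        # Only keep the pair if the preferred comment was also written later,
        # so its higher score cannot be explained by greater exposure.
        if hi["score"] != lo["score"] and hi["created_utc"] > lo["created_utc"]:
            pairs.append((hi, lo))
    return pairs
```
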
 Reddit makes it very difficult to get anything beyond the top 1000 posts for each subreddit.
+ We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using Reddit's search function to get up to 7500 unique post IDs per subreddit.
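
A schematic sketch of this ID-collection step with the AsyncPRAW library might look like the following; the credentials are placeholders and this is not the exact scraper used to build SHP:

```python
import asyncio

import asyncpraw

async def collect_post_ids(reddit: asyncpraw.Reddit, subreddit_name: str) -> set:
    """Collect post IDs: the top-1000 posts plus similar posts found via search."""
    subreddit = await reddit.subreddit(subreddit_name)
    post_ids = set()
    async for post in subreddit.top(time_filter="all", limit=1000):
        post_ids.add(post.id)
        # Use the post title as a search query to find up to 25 similar posts.
        async for similar in subreddit.search(post.title, limit=25):
            post_ids.add(similar.id)
    return post_ids

async def main() -> None:
    # Placeholder credentials; register an app at https://www.reddit.com/prefs/apps
    reddit = asyncpraw.Reddit(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="shp-style-scraper (example)",
    )
    ids = await collect_post_ids(reddit, "askculinary")
    print(len(ids))
    await reddit.close()

if __name__ == "__main__":
    asyncio.run(main())
```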
 
 ### Preprocessing
 
 
 If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:
 
+ 1. **Use a sufficiently large model.** With FLAN-T5-xl, you can get 65-85% accuracies depending on the subreddit.
 2. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
 3. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
+ Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned it on inputs over 512 tokens.
 To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s) however).
 If this is still over 512 tokens, simply skip the example.
 4. **Train for fewer epochs.** The [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests training a reward model for only 1 epoch.
 Since the same comment appears in multiple preferences, it is easy to overfit to the data.
 5. **Training on less data may help.**
+ Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
 The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post (see the sketch below).
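
Putting tips 3 and 5 together, here is a minimal sketch of one way to do this preprocessing; apart from `history` and `score_ratio`, the field names (`post_id`, `human_ref_A`, `human_ref_B`) and the FLAN-T5-xl tokenizer are assumptions to be checked against the Data Structure section above:

```python
from collections import defaultdict

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

def filter_and_truncate(examples, min_score_ratio=2.0, max_per_post=5, max_tokens=512):
    """Filter out weak preferences, cap preferences per post, and truncate the
    post text (`history`) so the full input stays under the token limit."""
    per_post = defaultdict(int)
    kept = []
    for ex in examples:
        if ex["score_ratio"] < min_score_ratio:
            continue
        if per_post[ex["post_id"]] >= max_per_post:
            continue
        # Tokens already used by the two candidate comments (never truncated).
        used = (len(tokenizer(ex["human_ref_A"])["input_ids"])
                + len(tokenizer(ex["human_ref_B"])["input_ids"]))
        budget = max_tokens - used
        if budget <= 0:
            continue  # even an empty post would not fit; skip the example
        history_ids = tokenizer(ex["history"])["input_ids"][:budget]
        truncated = tokenizer.decode(history_ids, skip_special_tokens=True)
        kept.append(dict(ex, history=truncated))
        per_post[ex["post_id"]] += 1
    return kept
```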
 
 ### Evaluating
 
+ Since it is easier to predict strongly held preferences than weakly held ones, instead of reporting a single accuracy value, we recommend reporting a performance curve as a function of the `score_ratio`.
+ For example, here is the accuracy curve for a FLAN-T5-xl model trained on the askculinary data using the suggestions above.
+ The orange line is from finetuning only on preferences with a 2+ score ratio and using no more than 5 preferences from each post to prevent overfitting:
+
+ ![Graph](curve.png)
 
+ We see that finetuning on less (but higher-quality) data leads to higher accuracies on test data with a score ratio below 3.5, with no real downsides!
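
One simple way to produce such a curve is to bucket the test examples by `score_ratio` and compute accuracy within each bucket; this is only a sketch, with arbitrarily chosen bin edges:

```python
import numpy as np

def accuracy_by_score_ratio(score_ratios, correct, bins=(1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0)):
    """Bin test examples by score_ratio and report accuracy within each bin."""
    score_ratios = np.asarray(score_ratios)
    correct = np.asarray(correct, dtype=float)  # 1.0 if the preference was predicted correctly
    curve = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (score_ratios >= lo) & (score_ratios < hi)
        if mask.any():
            curve[f"[{lo}, {hi})"] = correct[mask].mean()
    return curve
```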
 
 Although we filtered out posts with NSFW (over 18) content and chose an innocuous set of subreddits, some of the data may contain discriminatory or harmful language.
 The data does not reflect the views of the dataset creators.
 
 Reddit users on these subreddits are also not necessarily representative of the broader population, which one should keep in mind before using any models trained on this data.
 As always, remember to evaluate!