Commit 659942d (parent: 476ccb8) by kawine

Update README.md

Files changed (1):
  1. README.md (+2 -2)
README.md CHANGED
@@ -211,13 +211,13 @@ Although we filtered out posts with NSFW (over 18) content, chose subreddits tha
 The data does not reflect the views of the dataset creators.
 Reddit users on these subreddits are also not representative of the broader population.
 Although subreddit-specific demographic information is not available, Reddit users overall are disproportionately male and from developed, Western, and English-speaking countries ([Pew Research](https://www.pewresearch.org/internet/2013/07/03/6-of-online-adults-are-reddit-users/)).
-One should keep that in mind before using any models trained on this data.
+Please keep this in mind before using any models trained on this data.
 
 ### Limitations
 
 The preference label in SHP is intended to reflect how *helpful* one response is relative to another, given an instruction/question.
 SHP is not intended for use in harm-minimization, as it was not designed to include the toxic content that would be necessary to learn a good toxicity detector.
-If you are looking for data where the preference label denotes less harm, we would recommend the harmfulness split of Anthropic's HH-RLHF data.
+If you are looking for data where the preference label denotes less harm, we would recommend the harmfulness split of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf).
 
 Another limitation is that the more preferred response in SHP is not necessarily the more factual one.
 Though some comments do provide citations to justify their response, most do not.
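
For reference, a minimal sketch of how the harm-focused data recommended above might be loaded, assuming the Hugging Face `datasets` library and that the harm-related comparisons live in the `harmless-base` directory of the Anthropic/hh-rlhf repository (an assumption; check the dataset card for the current layout):

```python
# Sketch: load the harmlessness preference pairs from Anthropic/hh-rlhf.
# Assumes the `datasets` library and the `harmless-base` data directory.
from datasets import load_dataset

hh_harmless = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")

# Each example pairs a preferred ("chosen") and a dispreferred ("rejected") dialogue.
example = hh_harmless["train"][0]
print(example["chosen"][:200])
print(example["rejected"][:200])
```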