kawine committed
Commit d0fe24e · 1 Parent(s): 9d08092

Update README.md

Files changed (1):
  README.md +6 -3
README.md CHANGED
@@ -3,10 +3,12 @@ license: mit
 task_categories:
 - text-generation
 tags:
-- human-feedback
+- human feedback
 - rlhf
 - preferences
 - reddit
+- preference model
+- RL
 size_categories:
 - 100K<n<1M
 language:
@@ -20,7 +22,7 @@ SHP is a dataset of **385K aggregate human preferences** over responses to quest
 It is primarily intended to be used for training reward models for RLHF and automatic evaluation models for NLG.
 
 Each example is a Reddit post and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (in aggregate).
-SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is definitively more preferred to B.
+SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is ostensibly more preferred to B.
 If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility from being written first.
 
 How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)?
@@ -32,7 +34,8 @@ Most notably, all the data in SHP is human-written, whereas the responses in HH-
 | HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference Label | unclear (not labelled) | Multi-turn Dialogue | up to 1.5K T5 tokens |
 
 How is SHP different from other datasets that have scraped reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
-Most notably, SHP exploits the timestamp hack to infer preferences, rather than only providing the comments:
+Most notably, SHP exploits the timestamp hack to infer preferences, rather than only providing the comments.
+It also contains data from far more domains:
 
 | Dataset | Size | Comments + Scores | Preferences | Number of Domains |
 | -------------------- | ---- | ------------------ | -------------| ------------------ |
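For reference, the timestamp rule that the reworded sentence describes (a comment posted *later* that nonetheless out-scores an earlier one is treated as preferred) can be sketched in Python as below. This is a minimal illustration under assumed field names (`created_utc`, `score`, `body`, as in common Reddit dumps), not the SHP preprocessing code.

```python
# Minimal sketch of the timestamp rule described above (not the official SHP pipeline).
# A pair is emitted only when the later-posted comment has a strictly higher score,
# so the higher score cannot be explained by extra visibility from being posted first.

from itertools import combinations

def extract_preference_pairs(comments):
    """Given top-level comments for one post (dicts with 'created_utc', 'score',
    'body'), return (preferred, dispreferred) pairs satisfying the timestamp rule."""
    pairs = []
    for a, b in combinations(comments, 2):
        # Order the two comments by posting time.
        earlier, later = (a, b) if a["created_utc"] < b["created_utc"] else (b, a)
        # Only the later comment can be declared preferred; an earlier comment with a
        # higher score tells us nothing, since it had more exposure.
        if later["score"] > earlier["score"]:
            pairs.append((later["body"], earlier["body"]))
    return pairs

# Toy usage example:
comments = [
    {"body": "First answer", "score": 12, "created_utc": 1_600_000_000},
    {"body": "Later, better answer", "score": 40, "created_utc": 1_600_003_600},
]
print(extract_preference_pairs(comments))
# [('Later, better answer', 'First answer')]
```

Only the comparison rule itself is shown; any additional filtering used to build the released dataset is outside the scope of this sketch.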