Update README.md
README.md
CHANGED
```diff
@@ -20,10 +20,10 @@ language:
 
 ## Summary
 
-SHP is a dataset of **385K
+SHP is a dataset of **385K collective human preferences** over responses to questions/instructions in 18 different subject areas, from cooking to legal advice (see the [Design](https://huggingface.co/datasets/stanfordnlp/SHP#dataset-design) for a breakdown of the domains).
 It is primarily intended to be used for training reward models for RLHF and automatic evaluation models for NLG.
 
-Each example is a Reddit post and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (
+Each example is a Reddit post and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (collectively).
 SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is ostensibly more preferred to B.
 If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility from being written first.
 
```
```diff
@@ -32,7 +32,7 @@ Most notably, all the data in SHP is naturally occurring and human-written, wher
 
 | Dataset | Size | Input | Label | Domains | Data Format | Length |
 | -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
-| SHP | 385K | Naturally occurring human-written responses |
+| SHP | 385K | Naturally occurring human-written responses | Collective Human Preference | 18 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens |
 | HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Live Chat (Multi-turn) | up to 1.5K T5 tokens |
 
 How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
```
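Since the Length column in the table is measured in T5 tokens, fit against a model's context window can be checked with the matching tokenizer. A quick sketch assuming the `transformers` library; the `t5-base` checkpoint is an illustrative choice, not one prescribed by SHP:

```python
from transformers import AutoTokenizer

# T5 checkpoints share one SentencePiece vocabulary; "t5-base" is an
# arbitrary illustrative choice here.
tokenizer = AutoTokenizer.from_pretrained("t5-base")

def t5_length(post: str, response: str) -> int:
    """Token count of a post + response pair under the T5 tokenizer."""
    return len(tokenizer(post + "\n" + response).input_ids)

# SHP examples run up to ~10.1K T5 tokens (vs. 1.5K for HH-RLHF), so
# long examples may need truncation or a long-context model.
```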
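For the reward-modelling use case named in the summary, the data loads straight from the Hub. A minimal sketch assuming the `datasets` library; the field names (`history`, `human_ref_A`, `human_ref_B`, `labels`) are assumptions about the published schema, not taken from this diff:

```python
from datasets import load_dataset

# Load the training split from the Hugging Face Hub.
shp = load_dataset("stanfordnlp/SHP", split="train")

ex = shp[0]
# Assumed schema: `history` is the Reddit post, `human_ref_A` and
# `human_ref_B` are the two top-level comments, and `labels` == 1
# means comment A is the collectively preferred one.
preferred = ex["human_ref_A"] if ex["labels"] == 1 else ex["human_ref_B"]
print(ex["history"][:200], "->", preferred[:120])
```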