Update README.md
README.md (CHANGED)
@@ -32,7 +32,7 @@ Most notably, all the data in SHP is naturally occurring and human-written, wher
 | Dataset | Size | Input | Label | # Domains | Data Format | Length |
 | -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
-| SHP | 385K | Naturally occurring human-written responses | Aggregate Human Preference | 18 (labelled) | Question/Instruction + Response |
+| SHP | 385K | Naturally occurring human-written responses | Aggregate Human Preference | 18 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens |
 | HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Multi-turn Dialogue | up to 1.5K T5 tokens |

 How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?