kawine committed
Commit ffe269e · 1 Parent(s): 396d60b

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -26,10 +26,10 @@ If A had been written before B, then we could not conclude this, since its highe
  How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)?
  Most notably, all the data in SHP is human-written, whereas the responses in HH-RLHF are machine-written, giving us two very different distributions.
 
- | Dataset | Size | Input | Output | Domain Labels | Data Format | Length |
+ | Dataset | Size | Input | Output | # Domains | Data Format | Length |
  | -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
- | SHP | 385K | Human-written post and comments | Aggregate Human Preference Label + Scores | Yes | Question/Instruction + Response | up to 10.1K T5 tokens |
- | HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference Label | No | Multi-turn Dialogue | up to 1.5K T5 tokens |
+ | SHP | 385K | Human-written post and comments | Aggregate Human Preference Label + Scores | 18 (labelled) | Question/Instruction + Response | up to 10.1K T5 tokens |
+ | HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference Label | unclear (not labelled) | Multi-turn Dialogue | up to 1.5K T5 tokens |
 
  How is SHP different from other datasets that have scraped reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
  Most notably, SHP exploits the timestamp hack to infer preferences, rather than only providing the comments:
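The timestamp hack the README refers to can be sketched as follows. This is a minimal illustration, not SHP's actual pipeline: the dict keys (`score`, `created`, `text`) and the function name are hypothetical, and the rule shown is just the one stated in the hunk header — a higher-scoring comment yields a preference label only if it was written *later*, since an older comment's higher score could merely reflect greater visibility.

```python
from datetime import datetime

def infer_preference(comment_a, comment_b):
    # Hypothetical sketch of the timestamp hack: sort the two comments
    # by creation time so we can compare the newer one's score against
    # the older one's.
    newer, older = sorted(
        (comment_a, comment_b), key=lambda c: c["created"], reverse=True
    )
    if newer["score"] > older["score"]:
        # The newer comment outscored the older one despite having had
        # less time and visibility, so a preference can be inferred.
        return newer, older  # (preferred, dispreferred)
    # The older comment scores higher, but that could be a visibility
    # effect rather than quality, so no label is emitted.
    return None

a = {"text": "A", "score": 50, "created": datetime(2022, 1, 1)}
b = {"text": "B", "score": 120, "created": datetime(2022, 1, 2)}
infer_preference(a, b)  # B is newer and higher-scoring: B preferred over A
```

Swapping the timestamps (so the higher-scoring comment is the older one) makes the function return `None`, mirroring the "we could not conclude this" case described in the hunk header.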