Update README.md

pretty_name: CommonGen-Lite
size_categories:
- n<1K
---

# CommonGen-Lite

Evaluating LLMs with [CommonGen](https://inklab.usc.edu/CommonGen/) using the [CommonGen-lite](https://huggingface.co/datasets/allenai/commongen_lite) dataset (400 examples + 900 human references).

We use GPT-4 to evaluate the constrained text generation ability of LLMs. Please see our [paper](https://arxiv.org/abs/1911.03705) for more details.

GitHub: [https://github.com/allenai/CommonGen-Eval](https://github.com/allenai/CommonGen-Eval)
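
The dataset can be loaded directly from the Hugging Face Hub. Below is a minimal sketch using the `datasets` library; nothing about the schema is assumed beyond the dataset ID, so inspect the printed splits and columns before use.

```python
# Minimal sketch: load CommonGen-Lite from the Hugging Face Hub.
from datasets import load_dataset

dataset = load_dataset("allenai/commongen_lite")
print(dataset)  # inspect the available splits and column names

# Peek at one example from the first available split.
first_split = next(iter(dataset))
print(dataset[first_split][0])
```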
## Leaderboard

| model                      | len   | cover | pos   | win_tie | overall |
|----------------------------|-------|-------|-------|---------|---------|
| human                      | 12.84 | 99.00 | 98.11 | 100.00  | 97.13   |
| gpt-4-0613                 | 14.13 | 97.44 | 91.78 | 50.44   | 45.11   |
| gpt-4-1106-preview         | 14.90 | 96.33 | 90.11 | 50.78   | 44.08   |
| gpt-3.5-turbo              | 12.76 | 92.11 | 83.00 | 49.78   | 38.06   |
| Yi-34b-chat                | 13.45 | 80.11 | 75.11 | 39.44   | 23.73   |
| vicuna-13b-v1.5            | 15.02 | 85.89 | 79.56 | 27.44   | 18.75   |
| tulu-2-dpo-70b             | 17.89 | 88.78 | 80.11 | 23.00   | 16.36   |
| Mixtral-8x7B-Instruct-v0.1 | 20.15 | 84.11 | 73.33 | 17.89   | 11.03   |
| Llama-2-7b-chat-hf         | 16.06 | 88.56 | 76.44 | 15.44   | 10.45   |
| zephyr-7b-beta             | 15.76 | 82.44 | 72.78 | 16.89   | 10.13   |
| Yi-6b-chat                 | 13.32 | 71.67 | 63.56 | 22.11   | 10.07   |

- **len**: the average number of words in the generated sentences
- **cover**: the percentage of examples where all given concepts are covered by the model outputs
- **pos**: the percentage of examples where ALL given concepts appear with the correct part of speech (PoS) in the model outputs (a sketch of this check follows the list)
- **win_tie**: the percentage of examples where GPT-4-turbo prefers the model outputs over the human-written references (or judges them equally good)
- **overall**: `cover%` x `pos%` x `win_tie%` (worked through in the second sketch below)
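
The `cover` and `pos` checks can be approximated with an off-the-shelf PoS tagger. Below is a minimal sketch using spaCy; the lemma-matching strategy and the `(lemma, required PoS)` input format are assumptions for illustration, not the exact logic of the CommonGen-Eval scripts.

```python
# Minimal sketch of the cover / pos checks using spaCy.
# Assumption: each concept is given as (lemma, required PoS), e.g. ("dog", "NOUN").
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def check_cover_and_pos(sentence, concepts):
    doc = nlp(sentence)
    lemma_to_pos = {tok.lemma_.lower(): tok.pos_ for tok in doc}
    cover = all(lemma in lemma_to_pos for lemma, _ in concepts)
    pos = cover and all(lemma_to_pos[lemma] == req for lemma, req in concepts)
    return cover, pos

cover, pos = check_cover_and_pos(
    "The dog jumped over the fence to catch the ball.",
    [("dog", "NOUN"), ("jump", "VERB"), ("ball", "NOUN")],
)
print(cover, pos)  # expected: True True
```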
Link: [https://inklab.usc.edu/CommonGen/leaderboard.html](https://inklab.usc.edu/CommonGen/leaderboard.html)