We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
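For example, a single 25-shot `ARC-Challenge` run with that commit's `main.py` CLI might look like the sketch below; `<model-path>` is a placeholder, and the flags are our assumption based on the harness interface of that era, not a script taken from this repository:

```bash
# Sketch: evaluate one benchmark with the pinned harness.
# <model-path> is a placeholder; flags assumed from the harness CLI at this commit.
python main.py \
    --model hf-causal \
    --model_args pretrained=<model-path> \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 1 \
    --output_path results/arc_challenge.json
```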
### Main Results

| Model | H4 Avg. | ARC | HellaSwag | MMLU | TruthfulQA | MT_Bench |
|-|-|-|-|-|-|-|
| **Llama-2-70b-instruct-v2** (***Ours***, ***Local Reproduction***) | **72.7** | **71.6** | **87.7** | 69.7 | **61.6** | **7.44063** |
| Llama-2-70b-instruct (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | **69.8** | 61 | 7.24375 |
| llama-65b-instruct (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |

### Scripts
- Prepare evaluation environments:
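  A minimal sketch of this step, assuming a plain pip-based install of the harness pinned to the commit above (the virtualenv name is illustrative):

  ```bash
  # Sketch: set up an evaluation environment with the pinned harness.
  python -m venv eval-env && source eval-env/bin/activate  # illustrative env name
  git clone https://github.com/EleutherAI/lm-evaluation-harness.git
  cd lm-evaluation-harness
  git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
  pip install -e .  # editable install of the harness and its dependencies
  ```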