Update README.md
README.md CHANGED
@@ -75,7 +75,7 @@ We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-
 
 ### Main Results
 | Model | H4 Avg. | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
-
+|--------------------------------------------------------------------|----------|----------|----------|------|----------|-|-------------|
 | **Llama-2-70b-instruct-v2** (***Ours***, ***Local Reproduction***) | **72.7** | **71.6** | **87.7** | 69.7 | **61.6** | | **7.44063** |
 | Llama-2-70b-instruct (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | **69.8** | 61 | | 7.24375 |
 | llama-65b-instruct (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | | |
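The `H4 Avg.` column in the table above appears to be the simple arithmetic mean of the four Hugging Face H4 benchmark scores (ARC, HellaSwag, MMLU, TruthfulQA) — an assumption, not stated in the diff itself, but it reproduces the reported values to one decimal place. A minimal sketch:

```python
def h4_avg(arc, hellaswag, mmlu, truthfulqa):
    """Assumed derivation of the H4 Avg. column: the unweighted mean
    of the four H4 benchmark scores (not confirmed by this commit)."""
    return (arc + hellaswag + mmlu + truthfulqa) / 4

# Llama-2-70b-instruct-v2 row: mean is 72.65, matching the reported
# 72.7 after rounding to one decimal place.
print(h4_avg(71.6, 87.7, 69.7, 61.6))

# Llama-2-70b-instruct row: mean is 72.3, as reported.
print(h4_avg(70.9, 87.5, 69.8, 61.0))
```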