Update README.md
README.md
@@ -89,7 +89,7 @@ We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-eval
 
 ## AGIEval Performance
 
-We compare our results to
+We compare our results to the base Mistral-7B model (using LM Evaluation Harness).
 
 We find **129%** of the base model's performance on AGI Eval, averaging **0.397**.
 As well, we significantly improve upon the official `mistralai/Mistral-7B-Instruct-v0.1` finetuning, achieving **119%** of their performance.
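For context, a minimal sketch of how such a base-vs-finetune AGIEval comparison can be run through the harness's Python API is shown below. The model IDs, the `agieval` task name, and the `acc,none` metric key are assumptions (they vary by harness version and may differ from the exact setup used for the numbers above), so treat this as an illustration rather than the reproduction script.

```python
# Sketch: comparing a finetuned checkpoint against base Mistral-7B on AGIEval
# with EleutherAI's lm-evaluation-harness Python API (v0.4+ assumed).
# NOTE: the "agieval" task/group name and the "acc,none" metric key are
# assumptions; check `lm_eval --tasks list` for your installed version.
import lm_eval

models = {
    "base": "mistralai/Mistral-7B-v0.1",
    "finetune": "mistralai/Mistral-7B-Instruct-v0.1",  # or your own checkpoint
}

avg_acc = {}
for name, repo in models.items():
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=f"pretrained={repo},dtype=bfloat16",
        tasks=["agieval"],  # assumed group name; may instead be agieval_* subtasks
        batch_size=8,
    )
    # Average the reported accuracy over every AGIEval subtask.
    accs = [r["acc,none"] for r in results["results"].values() if "acc,none" in r]
    avg_acc[name] = sum(accs) / len(accs)

print(avg_acc)
print(f"finetune vs. base: {avg_acc['finetune'] / avg_acc['base']:.0%}")
```

The ratio printed at the end corresponds to the "percentage of the base model's performance" figures quoted above, assuming the same subtask averaging.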