Adding Evaluation Results (#1)
- Adding Evaluation Results (77be1ad6b9b4a9f3534b8d716ff19d3e59b99cbc)
Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>
README.md
CHANGED
@@ -52,3 +52,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.1.0+cu121
 - Datasets 2.14.5
 - Tokenizers 0.14.1
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_w95__megachat)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 30.38                     |
+| ARC (25-shot)         | 30.8                      |
+| HellaSwag (10-shot)   | 54.35                     |
+| MMLU (5-shot)         | 25.55                     |
+| TruthfulQA (0-shot)   | 39.85                     |
+| Winogrande (5-shot)   | 56.99                     |
+| GSM8K (5-shot)        | 0.99                      |
+| DROP (3-shot)         | 4.16                      |
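
The per-task outputs behind this summary table live in the details dataset linked above. A minimal sketch of pulling them with the `datasets` library, assuming the details repo follows the usual leaderboard layout of one config per benchmark (the config names are discovered at runtime, not taken from this PR):

```python
from datasets import get_dataset_config_names, load_dataset

# Details dataset referenced in the README section added by this PR.
repo = "open-llm-leaderboard/details_w95__megachat"

# Each config is assumed to hold the raw results for one benchmark run.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first available config as an example; the returned DatasetDict
# is keyed by split, so inspect it to see which splits and rows exist.
details = load_dataset(repo, configs[0])
print(details)
```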