Adding Evaluation Results #2
opened by leaderboard-pr-bot

README.md CHANGED
@@ -69,4 +69,17 @@ Also thanks to Meta for LLaMA.
 
 Each model and LoRA was hand picked and considered for what it could contribute to this ensemble.
 Thanks to each and every one of you for your incredible work developing some of the best things
-to come out of this community.
+to come out of this community.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CalderaAI__13B-BlueMethod)
+
+| Metric               | Value |
+|----------------------|-------|
+| Avg.                 | 51.76 |
+| ARC (25-shot)        | 59.64 |
+| HellaSwag (10-shot)  | 82.07 |
+| MMLU (5-shot)        | 50.34 |
+| TruthfulQA (0-shot)  | 47.74 |
+| Winogrande (5-shot)  | 77.11 |
+| GSM8K (5-shot)       | 7.81  |
+| DROP (3-shot)        | 37.62 |
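As a quick consistency note (my addition, not part of the bot's patch): the Avg. row appears to be the plain unweighted mean of the seven benchmark scores in the added table, which a short snippet can confirm.

```python
# Sanity check: the reported Avg. (51.76) matches the unweighted mean of the
# seven benchmark scores from the table above.
scores = {
    "ARC (25-shot)": 59.64,
    "HellaSwag (10-shot)": 82.07,
    "MMLU (5-shot)": 50.34,
    "TruthfulQA (0-shot)": 47.74,
    "Winogrande (5-shot)": 77.11,
    "GSM8K (5-shot)": 7.81,
    "DROP (3-shot)": 37.62,
}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 51.76
```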