Adding Evaluation Results #3
by leaderboard-pr-bot · opened

README.md CHANGED
@@ -60,4 +60,17 @@ Also thanks to Meta for LLaMA.
 
 Each model and LoRA was hand picked and considered for what it could contribute to this ensemble.
 Thanks to each and every one of you for your incredible work developing some of the best things
-to come out of this community.
+to come out of this community.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_digitous__13B-Chimera)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 52.19                     |
+| ARC (25-shot)         | 57.59                     |
+| HellaSwag (10-shot)   | 81.5                      |
+| MMLU (5-shot)         | 49.86                     |
+| TruthfulQA (0-shot)   | 52.59                     |
+| Winogrande (5-shot)   | 77.27                     |
+| GSM8K (5-shot)        | 10.69                     |
+| DROP (3-shot)         | 35.84                     |
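The "Avg." row is the arithmetic mean of the seven benchmark scores below it. A minimal Python sketch to check that, using only the values from the table added in this PR:

```python
# Scores copied from the table added in this PR.
scores = {
    "ARC (25-shot)": 57.59,
    "HellaSwag (10-shot)": 81.5,
    "MMLU (5-shot)": 49.86,
    "TruthfulQA (0-shot)": 52.59,
    "Winogrande (5-shot)": 77.27,
    "GSM8K (5-shot)": 10.69,
    "DROP (3-shot)": 35.84,
}

# Arithmetic mean across the seven benchmarks.
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints 52.19, matching the "Avg." row
```

To pull the per-task results behind the "Detailed results" link, a hedged sketch using the Hugging Face `datasets` library; the per-task config names of the details repository are not listed in this PR, so the sketch queries them at runtime rather than hard-coding any:

```python
from datasets import get_dataset_config_names, load_dataset

# Details repository referenced in the PR.
repo = "open-llm-leaderboard/details_digitous__13B-Chimera"

# One config per evaluated task; exact names depend on the leaderboard version.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first config as an example; inspect it to find the split you need.
details = load_dataset(repo, configs[0])
print(details)
```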