TeeZee leaderboard-pr-bot committed on
Commit e73f0ef · verified · 1 parent: 5ecbbf8

Adding Evaluation Results (#2)

- Adding Evaluation Results (6fd24e3e0261f0e26331bb3c63fd3a4d0b51b667)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -130,4 +130,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
  - small quality loss can be observed compared to base model, as described in the DUS paper
  - this merge has best evaluation results, so it will be finetuned to 'recover' from the merge
  - finetuning will be done on 5-10% of openorca dataset and full DPO datasets used by SOLAR
- - v03 > v01 > v02 - based on average evaluation scores, removing 1/4 of total layers seems to be the correct way to scale DUS
+ - v03 > v01 > v02 - based on average evaluation scores, removing 1/4 of total layers seems to be the correct way to scale DUS
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY-XB-v.03)
+
+ | Metric                          |Value|
+ |---------------------------------|----:|
+ |Avg.                             |63.37|
+ |AI2 Reasoning Challenge (25-Shot)|61.77|
+ |HellaSwag (10-Shot)              |83.59|
+ |MMLU (5-Shot)                    |64.55|
+ |TruthfulQA (0-shot)              |44.19|
+ |Winogrande (5-shot)              |81.06|
+ |GSM8k (5-shot)                   |45.03|
+
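For reference, the `Avg.` row in the table is the plain arithmetic mean of the six benchmark scores. A minimal Python sketch (the dictionary and variable names are ours, not part of the leaderboard's code):

```python
# Scores copied from the table added in this PR.
scores = {
    "AI2 Reasoning Challenge (25-shot)": 61.77,
    "HellaSwag (10-shot)": 83.59,
    "MMLU (5-shot)": 64.55,
    "TruthfulQA (0-shot)": 44.19,
    "Winogrande (5-shot)": 81.06,
    "GSM8k (5-shot)": 45.03,
}

# Unweighted arithmetic mean across the six benchmarks,
# which the table reports as 63.37.
avg = sum(scores.values()) / len(scores)
print(f"Average score: {avg:.2f}")
```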