leaderboard-pr-bot committed
Commit b5080f1
1 Parent(s): 9884c50

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1): README.md (+31 −0)
README.md CHANGED

@@ -21,6 +21,9 @@ model-index:
   - type: acc_norm
     value: 60.32
     name: normalized accuracy
+  - type: acc_norm
+    value: 61.09
+    name: normalized accuracy
   source:
     url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
     name: Open LLM Leaderboard
@@ -37,6 +40,9 @@ model-index:
   - type: acc_norm
     value: 82.92
     name: normalized accuracy
+  - type: acc_norm
+    value: 83.72
+    name: normalized accuracy
   source:
     url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
     name: Open LLM Leaderboard
@@ -54,6 +60,9 @@ model-index:
   - type: acc
     value: 62.3
     name: accuracy
+  - type: acc
+    value: 64.13
+    name: accuracy
   source:
     url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
     name: Open LLM Leaderboard
@@ -70,6 +79,8 @@ model-index:
   metrics:
   - type: mc2
     value: 45.47
+  - type: mc2
+    value: 47.34
   source:
     url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
     name: Open LLM Leaderboard
@@ -87,6 +98,9 @@ model-index:
   - type: acc
     value: 78.06
     name: accuracy
+  - type: acc
+    value: 79.48
+    name: accuracy
   source:
     url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
     name: Open LLM Leaderboard
@@ -104,6 +118,9 @@ model-index:
   - type: acc
     value: 33.59
     name: accuracy
+  - type: acc
+    value: 40.41
+    name: accuracy
   source:
     url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NLUHOPOE/experiment2-cause-non-qLoRa
     name: Open LLM Leaderboard
@@ -144,3 +161,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |Winogrande (5-shot) |78.06|
 |GSM8k (5-shot)      |33.59|
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NLUHOPOE__experiment2-cause-non)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |62.69|
+|AI2 Reasoning Challenge (25-Shot)|61.09|
+|HellaSwag (10-Shot)              |83.72|
+|MMLU (5-Shot)                    |64.13|
+|TruthfulQA (0-shot)              |47.34|
+|Winogrande (5-shot)              |79.48|
+|GSM8k (5-shot)                   |40.41|
+