win10 committed (verified) · Commit bf7d346 · 1 Parent(s): 1382fac

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1):
- README.md (+117 −9)
README.md CHANGED

````diff
@@ -1,4 +1,12 @@
 ---
+language:
+- en
+- zh
+license: apache-2.0
+library_name: transformers
+tags:
+- mergekit
+- merge
 base_model:
 - sthenno/tempesthenno-0126-ckpt150
 - ToastyPigeon/Qwen2.5-14B-Instruct-1M-Unalign
@@ -7,14 +15,101 @@ base_model:
 - sthenno-com/miscii-14b-1225
 - sthenno/tempesthenno-nuslerp-0124
 - huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated
-library_name: transformers
-tags:
-- mergekit
-- merge
-language:
-- en
-- zh
-license: apache-2.0
+model-index:
+- name: miscii-14b-1M-0128
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 41.81
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=win10/miscii-14b-1M-0128
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 37.27
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=win10/miscii-14b-1M-0128
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 40.18
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=win10/miscii-14b-1M-0128
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 17.67
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=win10/miscii-14b-1M-0128
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 28.75
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=win10/miscii-14b-1M-0128
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 38.79
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=win10/miscii-14b-1M-0128
+      name: Open LLM Leaderboard
 ---
 
 # merge
@@ -59,4 +154,17 @@ parameters:
 dtype: bfloat16
 normalize: true
 
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/win10__miscii-14b-1M-0128-details)
+
+| Metric             |Value|
+|--------------------|----:|
+|Avg.                |34.08|
+|IFEval (0-Shot)     |41.81|
+|BBH (3-Shot)        |37.27|
+|MATH Lvl 5 (4-Shot) |40.18|
+|GPQA (0-shot)       |17.67|
+|MuSR (0-shot)       |28.75|
+|MMLU-PRO (5-shot)   |38.79|
+
````
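The `Avg.` row added by this PR can be sanity-checked as the unweighted arithmetic mean of the six benchmark scores. A minimal Python sketch (scores are taken from the table in the diff; the variable names are illustrative, not part of the leaderboard tooling):

```python
# Benchmark scores from the Open LLM Leaderboard table in this PR.
scores = {
    "IFEval (0-Shot)": 41.81,
    "BBH (3-Shot)": 37.27,
    "MATH Lvl 5 (4-Shot)": 40.18,
    "GPQA (0-shot)": 17.67,
    "MuSR (0-shot)": 28.75,
    "MMLU-PRO (5-shot)": 38.79,
}

# The "Avg." row is the unweighted mean, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 34.08
```

This matches the 34.08 reported in the table, confirming the average is a plain mean with no per-benchmark weighting.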