Boof2015 committed on
Commit
3ff50c1
1 Parent(s): e084561

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +123 -7
README.md CHANGED
@@ -1,15 +1,118 @@
 ---
-base_model:
-- PygmalionAI/pygmalion-2-13b
-- Undi95/Amethyst-13B
+language:
+- en
+license: cc-by-sa-4.0
 library_name: transformers
 tags:
 - mergekit
 - merge
-license: cc-by-sa-4.0
-language:
-- en
+base_model:
+- PygmalionAI/pygmalion-2-13b
+- Undi95/Amethyst-13B
 inference: false
+model-index:
+- name: Supernova-experimental
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 63.05
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 83.66
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 56.59
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 49.37
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 77.35
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 28.73
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
+      name: Open LLM Leaderboard
 ---
 ## Supernova-experimental
 This is an experimental model that was created for the development of NovaAI.
@@ -35,4 +138,17 @@ Below is an instruction that describes a task. Write a response that appropriate
 
 The following models were included in the merge:
 * [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
-* [Undi95/Amethyst-13B](https://huggingface.co/Undi95/Amethyst-13B)
+* [Undi95/Amethyst-13B](https://huggingface.co/Undi95/Amethyst-13B)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_theNovaAI__Supernova-experimental)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |59.79|
+|AI2 Reasoning Challenge (25-Shot)|63.05|
+|HellaSwag (10-Shot)              |83.66|
+|MMLU (5-Shot)                    |56.59|
+|TruthfulQA (0-shot)              |49.37|
+|Winogrande (5-shot)              |77.35|
+|GSM8k (5-shot)                   |28.73|
+
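As a sanity check on the metadata this PR adds, the `model-index` block above can be read back programmatically once the change is merged. The sketch below is illustrative and not part of the PR: it downloads the model card with `huggingface_hub.hf_hub_download`, parses the YAML front matter with PyYAML, and recomputes the table's Avg. row as the plain mean of the six per-task scores, which matches the 59.79 shown above. The repo id `theNovaAI/Supernova-experimental` is inferred from the leaderboard query URLs in the diff.

```python
# Minimal sketch: read back the model-index metadata added by this PR and
# recompute the leaderboard average. Assumes the PR has been merged and that
# the Avg. row is the unweighted mean of the six benchmark scores.
import yaml  # PyYAML
from huggingface_hub import hf_hub_download

REPO_ID = "theNovaAI/Supernova-experimental"  # taken from the leaderboard URLs above

# Download the model card and take the YAML front matter between the first two "---" fences.
readme_path = hf_hub_download(repo_id=REPO_ID, filename="README.md")
front_matter = open(readme_path, encoding="utf-8").read().split("---")[1]
metadata = yaml.safe_load(front_matter)

# Walk model-index -> results -> metrics, collecting one score per benchmark.
scores = {}
for result in metadata["model-index"][0]["results"]:
    dataset_name = result["dataset"]["name"]
    scores[dataset_name] = result["metrics"][0]["value"]

for name, value in scores.items():
    print(f"{name:<35} {value:>6.2f}")
print(f"{'Avg.':<35} {sum(scores.values()) / len(scores):>6.2f}")  # expected: 59.79
```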