Spaetzle-v12-7b is a merge of the following models using [LazyMergekit](https://
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
* on the basis of [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
As expected, this model is slightly worse than Spaetzle-v12-7b on general English tasks, but marginally better on at least some German tasks: for example, it reaches an EQ-Bench (de) score of 64.81.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 69.36 |
| AI2 Reasoning Challenge (25-shot) | 65.96 |
| HellaSwag (10-shot)               | 86.16 |
| MMLU (5-shot)                     | 63.48 |
| TruthfulQA (0-shot)               | 57.84 |
| Winogrande (5-shot)               | 80.03 |
| GSM8k (5-shot)                    | 62.70 |

## 🧩 Configuration

```yaml