Update README.md
README.md CHANGED
@@ -145,6 +145,19 @@ Gemmalpaca-2B outperforms gemma-2b and gemma-2b-it on Nous' benchmark suite (eva
 | [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) [📄](https://gist.github.com/mlabonne/db0761e74175573292acf497da9e5d95) | 36.1 | 23.76 | 43.6 | 47.64 | 29.41 |
 | [google/gemma-2b](https://huggingface.co/google/gemma-2b) [📄](https://gist.github.com/mlabonne/7df1f238c515a5f63a750c8792cef59e) | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 |
 
+### [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Gemmalpaca-2B)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |45.65|
+|AI2 Reasoning Challenge (25-Shot)|48.72|
+|HellaSwag (10-Shot)              |71.36|
+|MMLU (5-Shot)                    |36.30|
+|TruthfulQA (0-shot)              |41.24|
+|Winogrande (5-shot)              |65.59|
+|GSM8k (5-shot)                   |10.69|
+
 ## 🧩 Configuration
 
 It was trained using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) with the following configuration.
@@ -221,16 +234,3 @@ special_tokens:
 ```
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Gemmalpaca-2B)
-
-| Metric                          |Value|
-|---------------------------------|----:|
-|Avg.                             |45.65|
-|AI2 Reasoning Challenge (25-Shot)|48.72|
-|HellaSwag (10-Shot)              |71.36|
-|MMLU (5-Shot)                    |36.30|
-|TruthfulQA (0-shot)              |41.24|
-|Winogrande (5-shot)              |65.59|
-|GSM8k (5-shot)                   |10.69|
-
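The Avg. row in the leaderboard table is the arithmetic mean of the six benchmark scores. A minimal sketch to verify that figure (metric labels abbreviated here for brevity):

```python
# Scores from the Open LLM Leaderboard table for Gemmalpaca-2B
scores = {
    "ARC (25-shot)": 48.72,
    "HellaSwag (10-shot)": 71.36,
    "MMLU (5-shot)": 36.30,
    "TruthfulQA (0-shot)": 41.24,
    "Winogrande (5-shot)": 65.59,
    "GSM8k (5-shot)": 10.69,
}

# Unweighted mean over the six benchmarks, rounded as on the leaderboard
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 45.65
```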