Adding Evaluation Results #2
opened by leaderboard-pr-bot

README.md CHANGED
@@ -1,8 +1,8 @@
 ---
 license: other
-base_model: KnutJaegersberg/Qwen-1_8B-Llamafied
 tags:
 - generated_from_trainer
+base_model: KnutJaegersberg/Qwen-1_8B-Llamafied
 model-index:
 - name: qwen-1.8b-vi-pt
   results: []
@@ -57,3 +57,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.6
 - Tokenizers 0.14.1
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_qnguyen3__quan-1.8b-base)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |43.35|
+|AI2 Reasoning Challenge (25-Shot)|36.95|
+|HellaSwag (10-Shot)              |58.46|
+|MMLU (5-Shot)                    |45.44|
+|TruthfulQA (0-shot)              |41.60|
+|Winogrande (5-shot)              |57.93|
+|GSM8k (5-shot)                   |19.71|
+
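For reference, the Avg. value in the added table is consistent with a plain unweighted mean of the six benchmark scores. A minimal sketch (scores copied from the diff above, no external dependencies) that reproduces the 43.35 figure:

```python
# Benchmark scores exactly as added to the README in this PR
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 36.95,
    "HellaSwag (10-Shot)": 58.46,
    "MMLU (5-Shot)": 45.44,
    "TruthfulQA (0-shot)": 41.60,
    "Winogrande (5-shot)": 57.93,
    "GSM8k (5-shot)": 19.71,
}

# Unweighted mean, rounded to two decimals as displayed on the leaderboard
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 43.35
```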