Adding Evaluation Results
This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr
The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
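For reference, once a PR like this is merged, the added `model-index` metadata can be read back programmatically. Below is a minimal sketch, assuming the `huggingface_hub` library's `ModelCard` API; the attribute names on the parsed results are those documented for recent `huggingface_hub` versions and should be checked against your installed version.

```python
# Minimal sketch: load the model card and list the evaluation results
# parsed from its `model-index` metadata (assumes huggingface_hub is installed).
from huggingface_hub import ModelCard

card = ModelCard.load("hon9kon9ize/CantoneseLLM-6B-preview202402")

# eval_results is populated from the `model-index` section this PR adds.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```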
README.md CHANGED

```diff
@@ -1,10 +1,113 @@
 ---
+language:
+- yue
 license: other
 license_name: yi-license
 license_link: https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE
-language:
-- yue
 pipeline_tag: text-generation
+model-index:
+- name: CantoneseLLM-6B-preview202402
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 55.63
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 75.8
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 63.07
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 42.26
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 74.11
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 30.71
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hon9kon9ize/CantoneseLLM-6B-preview202402
+      name: Open LLM Leaderboard
 ---
 
 # CantoneseLLM
```
```diff
@@ -50,4 +153,17 @@ output = tokenizer.decode(output[0], skip_special_tokens=True)
 
 The model is intended for Cantonese language understanding and generation tasks and may not be suitable for other Chinese languages. It is trained on a diverse range of Cantonese text, including news, Wikipedia, and textbooks; it may not handle informal or dialectal Cantonese well, and it may contain bias and misinformation, so please use it with caution.
 
-We found the model is not well trained on up-to-date Hong Kong knowledge; this may be because the corpus is not large enough to override the original model's knowledge. We will continue to improve the model and corpus in the future.
+We found the model is not well trained on up-to-date Hong Kong knowledge; this may be because the corpus is not large enough to override the original model's knowledge. We will continue to improve the model and corpus in the future.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hon9kon9ize__CantoneseLLM-6B-preview202402)
+
+| Metric                            | Value |
+|-----------------------------------|------:|
+| Avg.                              | 56.93 |
+| AI2 Reasoning Challenge (25-Shot) | 55.63 |
+| HellaSwag (10-Shot)               | 75.80 |
+| MMLU (5-Shot)                     | 63.07 |
+| TruthfulQA (0-shot)               | 42.26 |
+| Winogrande (5-shot)               | 74.11 |
+| GSM8k (5-shot)                    | 30.71 |
+
```
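As a sanity check, the table's Avg. row is just the arithmetic mean of the six benchmark scores:

```python
# Verify that the leaderboard "Avg." equals the mean of the six benchmarks.
scores = [55.63, 75.80, 63.07, 42.26, 74.11, 30.71]
print(round(sum(scores) / len(scores), 2))  # -> 56.93
```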