ikala-ray committed on
Commit 73f7358 (1 parent: 5dc359f)

Update README.md

Files changed (1): README.md (+3 -6)
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
-license: afl-3.0
+license: other
+license_name: creative-commons-by-nc
 task_categories:
 - question-answering
 language:
@@ -24,8 +25,4 @@ size_categories:
 
 We present TMMLU+ a traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset with 66 subjects from elementary to professional level.
 
-TMMLU+ dataset is 6 times larger and contains more balanced subjects compared to the previous version, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU). We included benchmark results in TMMLU+ from closed-source models and 20 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Benchmark results show Traditional Chinese variants still lag behind those trained on Simplified Chinese major models.
-
-
-
-
+TMMLU+ dataset is 6 times larger and contains more balanced subjects compared to the previous version, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU). We included benchmark results in TMMLU+ from closed-source models and 20 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Benchmark results show Traditional Chinese variants still lag behind those trained on Simplified Chinese major models.
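For readers of this card, the sketch below shows one way to pull a single TMMLU+ subject with the Hugging Face datasets library. This is a minimal sketch, not part of the commit: the repository id "ikala/tmmluplus" and the config name "engineering_math" are assumptions made for illustration, so substitute any of the 66 subjects listed on the dataset card.

```python
from datasets import load_dataset

# Minimal sketch, assuming the dataset is published as "ikala/tmmluplus"
# with one config per subject; "engineering_math" is an illustrative name.
dataset = load_dataset("ikala/tmmluplus", "engineering_math")

# Show the available splits and one multiple-choice record from each.
for split_name, split in dataset.items():
    print(split_name, len(split))
    print(split[0])
```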