---
license: other
license_name: creative-commons-by-nc
task_categories:
- question-answering
language:
- zh
tags:
- traditional chinese
- finance
- medical
- taiwan
- benchmark
- zh-tw
- zh-hant
pretty_name: tmmlu++
size_categories:
- 100K<n<1M
---
# TMMLU+: Large-scale Traditional Chinese Massive Multitask Language Understanding

<p align="center">
<img src="https://huggingface.co/datasets/ikala/tmmluplus/resolve/main/cover.png" alt="A close-up image of a neat paper note with a white background. The text 'TMMLU+' is written horizontally across the center of the note in bold, black. Join us to work in multimodal LLM : https://ikala.ai/recruit/" style="max-width: 400px;" width=400 />
</p>

We present TMMLU+, a traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering benchmark covering 66 subjects, ranging from elementary to professional level.

The TMMLU+ dataset is six times larger and has more balanced subject coverage than its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU). We include benchmark results on TMMLU+ for closed-source models and for 20 open-weight Chinese large language models ranging from 1.8B to 72B parameters. The results show that models trained on Traditional Chinese still lag behind major models trained on Simplified Chinese.
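
For readers who want to try the benchmark directly, the dataset can be loaded per subject with the Hugging Face `datasets` library. The sketch below assumes the repo id `ikala/tmmluplus` (as in the image URL above), an illustrative subject config name `engineering_math`, and a record schema with a `question` field, option fields `A`–`D`, and an `answer` key; check the dataset page for the actual config list and schema.

```python
from datasets import load_dataset

# Load one subject of TMMLU+ from the Hugging Face Hub.
# "engineering_math" is used here purely as an illustrative subject name.
ds = load_dataset("ikala/tmmluplus", "engineering_math")

# Each record is a multiple-choice question; the field names below
# (question text, four options A-D, and the answer letter) are assumed.
sample = ds["test"][0]
print(sample["question"])
for choice in ("A", "B", "C", "D"):
    print(f"{choice}. {sample[choice]}")
print("answer:", sample["answer"])
```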