---
license: other
license_name: creative-commons-by-nc
task_categories:
- question-answering
language:
- zh
tags:
- traditional chinese
- finance
- medical
- taiwan
- benchmark
- zh-tw
- zh-hant
pretty_name: tmmlu++
size_categories:
- 100K<n<1M
---

# TMMLU+: Large-scale Traditional Chinese Massive Multitask Language Understanding

<p align="center">
  <img src="https://huggingface.co/datasets/ikala/tmmluplus/resolve/main/cover.png" alt="A close-up image of a neat paper note with a white background. The text 'TMMLU+' is written horizontally across the center of the note in bold, black. Join us to work in multimodal LLM : https://ikala.ai/recruit/" style="max-width: 400px" width=400 />
</p>

We present TMMLU+, a Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering benchmark covering 66 subjects that range from the elementary to the professional level.
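
To get started, here is a minimal loading sketch using the Hugging Face `datasets` library. The subject config name and split below are illustrative; consult the dataset viewer for the exact subject and split names.

```python
from datasets import load_dataset

# Minimal sketch: load a single TMMLU+ subject config.
# "engineering_math" is an illustrative subject name; see the dataset
# viewer for the full list of 66 subjects.
subject = load_dataset("ikala/tmmluplus", "engineering_math")

print(subject)             # available splits and example counts
print(subject["test"][0])  # one multiple-choice example (split name assumed)
```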

The TMMLU+ dataset is six times larger than its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU), and its subject coverage is more balanced. We include benchmark results on TMMLU+ for closed-source models and for 20 open-weight Chinese large language models ranging from 1.8B to 72B parameters. The results show that Traditional Chinese model variants still lag behind major models trained on Simplified Chinese.
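
As a reference point for how the multiple-choice format can be scored, the sketch below turns one example into a zero-shot prompt and compares a predicted letter against the gold answer. The column names (`question`, `A`–`D`, `answer`), the subject config, and the split are assumptions made for illustration; adapt them to the actual schema.

```python
from datasets import load_dataset

def format_prompt(example: dict) -> str:
    # Assumed schema: a question plus four options stored under "A"-"D".
    options = "\n".join(f"{letter}. {example[letter]}" for letter in "ABCD")
    return f"{example['question']}\n{options}\nAnswer:"

def accuracy(predict, examples) -> float:
    # `predict` is any callable that maps a prompt string to a letter "A"-"D".
    correct = sum(predict(format_prompt(ex)) == ex["answer"] for ex in examples)
    return correct / len(examples)

# Trivial always-"A" baseline; swap in a real model's predictions.
test_set = load_dataset("ikala/tmmluplus", "engineering_math", split="test")
score = accuracy(lambda prompt: "A", test_set)
print(f"always-A baseline accuracy: {score:.3f}")
```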