Update README.md
README.md
CHANGED
@@ -2246,3 +2246,94 @@ configs:
  - split: test
    path: veterinary_pharmacology/test-*
---

# TMMLU+: Large-scale Traditional Chinese Massive Multitask Language Understanding

We present TMMLU+, a Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering benchmark covering 66 subjects, ranging from elementary to professional level.

TMMLU+ is six times larger than its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU), and covers a more balanced mix of subjects. We include benchmark results for closed-source models and for 20 open-weight Chinese large language models ranging from 1.8B to 72B parameters. The results show that models trained primarily on Traditional Chinese still lag behind their major Simplified Chinese counterparts.

## Leaderboard (last updated: 2025/04/05)

| Model | Humanities | Social Sciences | STEM | Others | Average |
|-------|-----------:|----------------:|-----:|-------:|--------:|
| openrouter/quasar-alpha | 70.38 | 83.07 | 87.89 | 76.42 | 79.44 |
| deepseek-v3 | 73.19 | 81.93 | 82.93 | 74.41 | 78.11 |
| Qwen/Qwen2.5-72B-Instruct-Turbo | 67.59 | 79.36 | 82.57 | 72.65 | 75.54 |
| gpt-4o-2024-08-06 | 65.48 | 78.23 | 81.39 | 71.24 | 74.08 |
| claude-3-5-sonnet-20240620 | 73.23 | 78.27 | 68.50 | 69.35 | 72.34 |
| gemini-2.0-flash-001 | 67.81 | 75.24 | 74.79 | 65.92 | 70.94 |
| gemini-2.0-flash-lite-001 | 65.26 | 75.12 | 72.73 | 65.50 | 69.65 |
| gemini-2.0-flash-lite-preview-02-05 | 64.66 | 73.48 | 70.00 | 63.90 | 68.01 |
| meta-llama/llama-3.2-90b-vision-instruct | 68.93 | 75.77 | 67.19 | 59.79 | 67.92 |
| qwen/qwen2.5-vl-32b-instruct | 62.93 | 79.24 | 63.61 | 65.16 | 67.73 |
| Qwen/QwQ-32B-Preview | 57.98 | 70.94 | 72.87 | 63.59 | 66.35 |
| claude-3-opus-20240229 | 60.34 | 70.12 | 67.43 | 62.32 | 65.05 |
| gemini-1.5-pro | 61.84 | 70.29 | 66.18 | 60.30 | 64.65 |
| gpt-4o-mini-2024-07-18 | 55.01 | 67.09 | 73.16 | 61.36 | 64.15 |
| mistralai/Mistral-Small-24B-Instruct-2501 | 54.56 | 68.32 | 73.25 | 59.25 | 63.85 |
| llama-3.1-70b-versatile | 64.94 | 70.14 | 58.63 | 61.33 | 63.76 |
| Qwen/Qwen2.5-7B-Instruct-Turbo | 54.42 | 64.51 | 68.01 | 58.83 | 61.44 |
| yentinglin/Llama-3-Taiwan-8B-Instruct | 61.51 | 67.61 | 52.05 | 58.60 | 59.94 |
| meta-llama/llama-4-scout | 53.86 | 61.83 | 60.76 | 57.78 | 58.56 |
| google/gemma-3-27b-it | 59.75 | 58.32 | 55.65 | 49.21 | 55.73 |
| claude-3-sonnet-20240229 | 52.06 | 59.38 | 49.87 | 51.64 | 53.24 |
| meta-llama/llama-4-maverick | 52.53 | 49.88 | 54.82 | 51.41 | 52.16 |
| Qwen2-7B-Instruct | 55.66 | 66.40 | 27.18 | 55.32 | 51.14 |
| gemma2-9b-it | 45.38 | 55.76 | 49.89 | 48.92 | 49.99 |
| claude-3-haiku-20240307 | 47.48 | 54.48 | 48.47 | 48.77 | 49.80 |
| gemini-1.5-flash | 42.99 | 53.42 | 53.47 | 46.56 | 49.11 |
| reka-flash | 44.07 | 52.68 | 46.04 | 43.43 | 46.56 |
| meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo | 44.03 | 50.95 | 42.75 | 45.19 | 45.73 |
| mixtral-8x7b-32768 | 44.75 | 50.34 | 32.60 | 43.76 | 42.86 |
| meta-llama/Llama-3-70b-chat-hf | 37.50 | 47.02 | 34.44 | 39.51 | 39.62 |
| RekaAI/reka-flash-3 | 34.76 | 40.70 | 42.98 | 35.37 | 38.45 |
| google/gemma-7b-it | 34.00 | 35.70 | 31.89 | 33.79 | 33.84 |
| reka-edge | 31.84 | 39.40 | 30.02 | 32.36 | 33.41 |
| meta-llama/Llama-3-8b-chat-hf | 28.91 | 34.19 | 31.52 | 31.79 | 31.60 |
| taide/Llama3-TAIDE-LX-8B-Chat-Alpha1 | 27.02 | 36.64 | 25.33 | 27.96 | 29.24 |
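
Numbers like those above are typically produced by scoring each subject separately and then averaging within a category. The sketch below is a hedged illustration of that aggregation, not the leaderboard's documented protocol: both the accuracy metric and `category_of` (the subject-to-category mapping, defined by the TMMLU+ maintainers) are assumed inputs.

```python
# Hypothetical aggregation for one leaderboard row: macro-average per-subject
# accuracies within each category. Both the accuracy metric and `category_of`
# (subject -> category) are assumptions, not the leaderboard's stated protocol.
from statistics import mean

def category_scores(subject_acc: dict[str, float],
                    category_of: dict[str, str]) -> dict[str, float]:
    return {
        cat: mean(acc for subj, acc in subject_acc.items() if category_of[subj] == cat)
        for cat in set(category_of.values())
    }
```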

## How to use

```python
from datasets import load_dataset

task_list = [
    'engineering_math', 'dentistry', 'traditional_chinese_medicine_clinical_medicine', 'clinical_psychology', 'technical', 'culinary_skills', 'mechanical', 'logic_reasoning', 'real_estate',
    'general_principles_of_law', 'finance_banking', 'anti_money_laundering', 'ttqav2', 'marketing_management', 'business_management', 'organic_chemistry', 'advance_chemistry',
    'physics', 'secondary_physics', 'human_behavior', 'national_protection', 'jce_humanities', 'politic_science', 'agriculture', 'official_document_management',
    'financial_analysis', 'pharmacy', 'educational_psychology', 'statistics_and_machine_learning', 'management_accounting', 'introduction_to_law', 'computer_science', 'veterinary_pathology',
    'accounting', 'fire_science', 'optometry', 'insurance_studies', 'pharmacology', 'taxation', 'trust_practice', 'geography_of_taiwan', 'physical_education', 'auditing', 'administrative_law',
    'education_(profession_level)', 'economics', 'veterinary_pharmacology', 'nautical_science', 'occupational_therapy_for_psychological_disorders',
    'basic_medical_science', 'macroeconomics', 'trade', 'chinese_language_and_literature', 'tve_design', 'junior_science_exam', 'junior_math_exam', 'junior_chinese_exam',
    'junior_social_studies', 'tve_mathematics', 'tve_chinese_language', 'tve_natural_sciences', 'junior_chemistry', 'music', 'education', 'three_principles_of_people',
    'taiwanese_hokkien',
]

# Each subject is one config with three splits; load it once instead of three times.
# The 'train' split holds the few-shot (dev) examples.
for task in task_list:
    ds = load_dataset('syntaxsynth/tmmluplus', task)
    dev, val, test = ds['train'], ds['validation'], ds['test']
```
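
Once a split is loaded, each row can be rendered into a multiple-choice prompt. The sketch below assumes the `question`, `A`–`D`, and `answer` columns used by the original TMMLU+ release; verify the field names against this repository's features before relying on them.

```python
# Minimal prompt-formatting sketch. Assumed schema: 'question', 'A'-'D', 'answer'.
def format_prompt(example: dict) -> str:
    options = "\n".join(f"{key}. {example[key]}" for key in ("A", "B", "C", "D"))
    return f"問題：{example['question']}\n{options}\n答案："

example = test[0]              # `test` comes from the loop above
print(format_prompt(example))  # compare the model's reply with example['answer']
```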

Statistics across the four categories (STEM, Social Sciences, Humanities, Other):

| Category                        | Test  | Dev | Validation |
|---------------------------------|-------|-----|------------|
| STEM                            | 3458  | 70  | 385        |
| Social Sciences                 | 5958  | 90  | 665        |
| Humanities                      | 1763  | 35  | 197        |
| Other (Business, Health, Misc.) | 8939  | 135 | 995        |
| **Total**                       | 20118 | 330 | 2242       |
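
As a sanity check, the totals row can be re-derived by summing row counts over all 66 subjects; the Dev (train) split works out to five few-shot examples per subject (330 / 66). This sketch reuses `task_list` from the snippet above and relies only on the standard `datasets` API.

```python
# Re-derive the table's totals by counting rows in every subject's three splits.
from datasets import load_dataset

totals = {'train': 0, 'validation': 0, 'test': 0}
for task in task_list:  # `task_list` as defined under "How to use"
    ds = load_dataset('syntaxsynth/tmmluplus', task)
    for split in totals:
        totals[split] += ds[split].num_rows
print(totals)  # expected: {'train': 330, 'validation': 2242, 'test': 20118}
```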

## Citation

```bibtex
@inproceedings{tam2024tmmlu+,
  title={{TMMLU+}: An Improved Traditional Chinese Evaluation Suite for Foundation Models},
  author={Tam, Zhi Rui and Pai, Ya Ting and Lee, Yen-Wei and Shuai, Hong-Han and Chen, Jun-Da and Chu, Wei Min and Cheng, Sega},
  booktitle={First Conference on Language Modeling},
  year={2024}
}
```