
TMMLU+: Large-scale Traditional Chinese Massive Multitask Language Understanding

Join us to work on multimodal LLMs: https://ikala.ai/recruit/

We present TMMLU+, a Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset featuring 66 subjects, ranging from elementary to professional level.

The TMMLU+ dataset is six times larger than its predecessor, TMMLU, and has a more balanced subject distribution. We include benchmark results on TMMLU+ for closed-source models and 20 open-weight Chinese large language models with parameter counts ranging from 1.8B to 72B. The results show that models trained primarily on Traditional Chinese still lag behind their major Simplified Chinese counterparts.

from datasets import load_dataset

task_list = [
    'engineering_math', 'dentistry', 'traditional_chinese_medicine_clinical_medicine', 'clinical_psychology', 'technical', 'culinary_skills', 'mechanical', 'logic_reasoning', 'real_estate',
    'general_principles_of_law', 'finance_banking', 'anti_money_laundering', 'ttqav2', 'marketing_management', 'business_management', 'organic_chemistry', 'advance_chemistry',
    'physics', 'secondary_physics', 'human_behavior', 'national_protection', 'jce_humanities', 'politic_science', 'agriculture', 'official_document_management',
    'financial_analysis', 'pharmacy', 'educational_psychology', 'statistics_and_machine_learning', 'management_accounting', 'introduction_to_law', 'computer_science', 'veterinary_pathology',
    'accounting', 'fire_science', 'optometry', 'insurance_studies', 'pharmacology', 'taxation', 'trust_practice', 'geography_of_taiwan', 'physical_education', 'auditing', 'administrative_law',
    'education_(profession_level)', 'economics', 'veterinary_pharmacology', 'nautical_science', 'occupational_therapy_for_psychological_disorders',
    'basic_medical_science', 'macroeconomics', 'trade', 'chinese_language_and_literature', 'tve_design', 'junior_science_exam', 'junior_math_exam', 'junior_chinese_exam',
    'junior_social_studies', 'tve_mathematics', 'tve_chinese_language', 'tve_natural_sciences', 'junior_chemistry', 'music', 'education', 'three_principles_of_people',
    'taiwanese_hokkien',
    'linear_algebra'
]

for task in task_list:
    # Load each subject once; the repo exposes train (few-shot dev), validation and test splits.
    data = load_dataset('ZoneTwelve/tmmluplus', task)
    dev = data['train']
    val = data['validation']
    test = data['test']

For each dataset split, rows can be inspected as follows:

for row in test:
    print(row)   # each row is a dict with keys 'question', 'A', 'B', 'C', 'D' and 'answer'
    break

# Printing a split shows its schema (num_rows varies by task and split):
print(test)
>> Dataset({
    features: ['question', 'A', 'B', 'C', 'D', 'answer'],
    num_rows: 11
})
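
Since the reported benchmark uses 0-shot direct answering (see below), a row can be rendered into a prompt roughly as follows. This is only a minimal sketch: format_prompt is a hypothetical helper and the exact template used by ievals may differ.

def format_prompt(row: dict) -> str:
    # Hypothetical direct-answer template; the wording used by ievals may differ.
    return (
        f"{row['question']}\n"
        f"A. {row['A']}\n"
        f"B. {row['B']}\n"
        f"C. {row['C']}\n"
        f"D. {row['D']}\n"
        "答案："  # "Answer:" in Traditional Chinese
    )

# Example: build a prompt from the first test row of the current task.
prompt = format_prompt(next(iter(test)))
print(prompt)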

Statistics across the four categories: STEM, Social Sciences, Humanities, Other

| Category | Test | Dev | Validation |
|----------|------|-----|------------|
| STEM | 3458 | 70 | 385 |
| Social Sciences | 5958 | 90 | 665 |
| Humanities | 1763 | 35 | 197 |
| Other (Business, Health, Misc.) | 8939 | 135 | 995 |
| Total | 20118 | 330 | 2242 |
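
The totals above can be sanity-checked by summing split sizes over task_list; a minimal sketch (reproducing the per-category rows would additionally require a subject-to-category mapping, which is not shown here):

totals = {'train': 0, 'validation': 0, 'test': 0}
for task in task_list:
    data = load_dataset('ZoneTwelve/tmmluplus', task)
    for split in totals:
        totals[split] += data[split].num_rows
# Expected to match the Total row: test = 20118, dev (train) = 330, validation = 2242.
print(totals)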

Benchmark on direct prompting

| Model | STEM | Social Science | Humanities | Other | Average |
|-------|------|----------------|------------|-------|---------|
| Qwen/Qwen-72B | 61.12 | 71.65 | 63.00 | 61.31 | 64.27 |
| gpt-4-0613 | 60.36 | 67.36 | 56.03 | 57.62 | 60.34 |
| Qwen/Qwen-72B-Chat | 55.15 | 66.20 | 55.65 | 57.19 | 58.55 |
| Qwen/Qwen-14B | 46.94 | 56.69 | 49.43 | 48.81 | 50.47 |
| Gemini-pro | 45.38 | 57.29 | 48.80 | 48.21 | 49.92 |
| 01-ai/Yi-34B-Chat | 40.24 | 56.77 | 53.99 | 47.58 | 49.64 |
| Qwen/Qwen-14B-Chat | 43.86 | 53.29 | 44.78 | 45.13 | 46.77 |
| 01-ai/Yi-6B-Chat | 39.62 | 50.24 | 44.44 | 44.26 | 44.64 |
| Claude-1.3 | 42.65 | 49.33 | 42.16 | 44.14 | 44.57 |
| gpt-3.5-turbo-0613 | 41.56 | 46.72 | 36.73 | 42.03 | 41.76 |
| CausalLM/14B | 39.83 | 44.50 | 39.61 | 41.97 | 41.48 |
| Skywork/Skywork-13B-base | 36.93 | 47.27 | 41.04 | 40.10 | 41.33 |
| Qwen/Qwen-7B | 37.53 | 45.48 | 38.09 | 38.96 | 40.01 |
| Qwen/Qwen-7B-Chat | 33.32 | 44.64 | 40.27 | 39.89 | 39.53 |
| vivo-ai/BlueLM-7B-Base | 33.94 | 41.52 | 37.38 | 38.74 | 37.90 |
| baichuan-inc/Baichuan2-13B-Chat | 29.64 | 43.73 | 37.36 | 39.88 | 37.65 |
| Qwen/Qwen-1_8B | 32.65 | 38.95 | 38.34 | 35.27 | 36.30 |
| Claude-2 | 39.65 | 39.09 | 28.59 | 37.47 | 36.20 |
| THUDM/chatglm3-6b | 31.05 | 39.31 | 35.64 | 35.60 | 35.40 |
| deepseek-ai/deepseek-llm-7b-chat | 29.82 | 42.29 | 34.24 | 34.31 | 35.17 |
| CausalLM/7B | 31.03 | 38.17 | 35.87 | 35.39 | 35.11 |
| Azure99/blossom-v3_1-mistral-7b | 32.80 | 36.91 | 32.36 | 34.53 | 34.15 |
| microsoft/Orca-2-13b | 24.69 | 39.18 | 33.60 | 31.99 | 32.37 |
| Qwen/Qwen-1_8B-Chat | 26.60 | 36.36 | 31.81 | 31.96 | 31.68 |
| TigerResearch/tigerbot-13b-chat-v3 | 24.73 | 29.63 | 25.72 | 27.22 | 26.82 |
| hongyin/mistral-7b-80k | 24.26 | 23.76 | 22.56 | 24.57 | 23.79 |
| deepseek-ai/deepseek-llm-67b-chat | 19.10 | 26.06 | 21.51 | 21.77 | 22.11 |
| yentinglin/Taiwan-LLM-13B-v2.0-chat | 18.53 | 27.65 | 17.77 | 21.49 | 21.36 |
| GeneZC/MiniChat-3B | 17.66 | 23.35 | 22.71 | 20.34 | 21.02 |
| LinkSoul/Chinese-Llama-2-7b | 16.55 | 18.39 | 12.97 | 16.13 | 16.01 |
| yentinglin/Taiwan-LLM-7B-v2.1-chat | 14.99 | 16.23 | 15.00 | 16.22 | 15.61 |
| Claude-instant-1 | 12.52 | 17.13 | 15.10 | 13.57 | 14.58 |
| FlagAlpha/Atom-7B | 5.60 | 13.57 | 7.71 | 11.84 | 9.68 |

Results via ievals (settings: 0-shot direct answering)
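
The Average column appears to be the unweighted mean of the four category scores, e.g. for Qwen/Qwen-72B: (61.12 + 71.65 + 63.00 + 61.31) / 4 ≈ 64.27. A quick check under that assumption:

# Category scores for Qwen/Qwen-72B, taken from the table above.
scores = {'STEM': 61.12, 'Social Science': 71.65, 'Humanities': 63.00, 'Other': 61.31}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 64.27, matching the Average column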

Citation

@article{ikala2023eval,
  title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
  author={Tam, Zhi-Rui and Pai, Ya-Ting},
  journal={arXiv},
  year={2023}
}

CONTENT WARNING: This is a modification of ikala/tmmluplus, with minor alterations made to facilitate its use with lm-evaluation-harness. More details can be found in the Discussions tab.
