AYueksel committed · verified · Commit fcbe5bb · Parent(s): 4e7cbb8

Update README.md

Files changed (1): README.md (+19 -8)
README.md CHANGED
@@ -114,13 +114,24 @@ Dataset is divided into four categories Natural Sciences, Mathematics, Language,
 ## Citation
 
 ```
-@misc{yüksel2024turkishmmlumeasuringmassivemultitask,
-      title={TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish},
-      author={Arda Yüksel and Abdullatif Köksal and Lütfi Kerem Şenel and Anna Korhonen and Hinrich Schütze},
-      year={2024},
-      eprint={2407.12402},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2407.12402},
+@inproceedings{yuksel-etal-2024-turkishmmlu,
+    title = "{T}urkish{MMLU}: Measuring Massive Multitask Language Understanding in {T}urkish",
+    author = {Y{\"u}ksel, Arda and
+      K{\"o}ksal, Abdullatif and
+      Senel, L{\"u}tfi Kerem and
+      Korhonen, Anna and
+      Schuetze, Hinrich},
+    editor = "Al-Onaizan, Yaser and
+      Bansal, Mohit and
+      Chen, Yun-Nung",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
+    month = nov,
+    year = "2024",
+    address = "Miami, Florida, USA",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.findings-emnlp.413/",
+    doi = "10.18653/v1/2024.findings-emnlp.413",
+    pages = "7035--7055",
+    abstract = "Multiple choice question answering tasks evaluate the reasoning, comprehension, and mathematical abilities of Large Language Models (LLMs). While existing benchmarks employ automatic translation for multilingual evaluation, this approach is error-prone and potentially introduces culturally biased questions, especially in social sciences. We introduce the first multitask, multiple-choice Turkish QA benchmark, TurkishMMLU, to evaluate LLMs' understanding of the Turkish language. TurkishMMLU includes over 10,000 questions, covering 9 different subjects from Turkish high-school education curricula. These questions are written by curriculum experts, suitable for the high-school curricula in Turkey, covering subjects ranging from natural sciences and math questions to more culturally representative topics such as Turkish Literature and the history of the Turkish Republic. We evaluate over 20 LLMs, including multilingual open-source (e.g., Gemma, Llama, MT5), closed-source (GPT 4o, Claude, Gemini), and Turkish-adapted (e.g., Trendyol) models. We provide an extensive evaluation, including zero-shot and few-shot evaluation of LLMs, chain-of-thought reasoning, and question difficulty analysis along with model performance. We provide an in-depth analysis of the Turkish capabilities and limitations of current LLMs to provide insights for future LLMs for the Turkish language. We publicly release our code for the dataset and evaluation: https://github.com/ArdaYueksel/TurkishMMLU"
 }
 ```