Update utils.py
add some dataset details
utils.py CHANGED
@@ -33,6 +33,24 @@ Welcome to the MMLU-Pro leaderboard, showcasing the performance of various advan

The MMLU-Pro dataset consists of approximately 12,000 intricate questions that challenge the comprehension and reasoning abilities of LLMs. Below you can find the accuracies of different models tested on this dataset.

+## 1. What's new about MMLU-Pro
+
+Compared to the original MMLU, there are three major differences:
+
+- The original MMLU dataset contains only 4 options per question; MMLU-Pro increases this to 10. The larger option pool makes the evaluation more realistic and challenging, and random guessing yields a much lower score (roughly a 10% baseline instead of 25%).
+- The original MMLU dataset contains mostly knowledge-driven questions that require little reasoning, so PPL (likelihood-based) results are normally better than CoT (chain-of-thought). In our dataset, we increase the problem difficulty and integrate more reasoning-focused problems; on MMLU-Pro, CoT can be 20% higher than PPL.
+- Due to the increased number of options, we found that model performance becomes more robust. For example, the performance variance of Llama-2-7B on MMLU-Pro is within 1% across several different prompts, whereas the variance on the original MMLU can be as large as 4-5%.
+
+## 2. Dataset Summary
+
+- **Questions and Options:** Each question in the dataset typically has **ten** multiple-choice options, except for some that were reduced during the manual review process to remove unreasonable choices. The increase from the original **four** options per question is designed to enhance complexity and robustness, requiring deeper reasoning to discern the correct answer among a larger pool of potential distractors.
+
+- **Sources:** The dataset consolidates questions from several sources:
+  - **Original MMLU Questions:** Part of the dataset comes from the original MMLU dataset; we remove trivial and ambiguous questions.
+  - **STEM Website:** Hand-picked, high-quality STEM problems from the Internet.
+  - **TheoremQA:** High-quality, human-annotated questions that require theorems to solve.
+  - **Scibench:** Science questions from college exams.
+
For detailed information about the dataset, visit our page on Hugging Face: MMLU-Pro at Hugging Face. If you are interested in replicating these results or wish to evaluate your models using our dataset, access our evaluation scripts available on GitHub: TIGER-AI-Lab/MMLU-Pro.
"""
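For anyone who wants to inspect the data described in this change before running the full evaluation scripts, here is a minimal sketch of loading it with the `datasets` library. The dataset ID `TIGER-Lab/MMLU-Pro`, the `test` split, and the field names (`question`, `options`, `answer`, `category`) are assumptions inferred from the Hugging Face page linked above, not something defined in this commit; check the dataset card for the exact schema.

```python
# Minimal sketch: load MMLU-Pro and look at one record.
# Assumptions (not part of this commit): the Hugging Face dataset ID is
# "TIGER-Lab/MMLU-Pro", it has a "test" split, and each record exposes
# "question", "options", "answer", and "category" fields.
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/MMLU-Pro", split="test")
print(f"{len(dataset)} questions")  # roughly 12,000 per the description above

example = dataset[0]
print("Category:", example["category"])
print("Question:", example["question"])

# Most questions carry ten options (a few were trimmed during manual review),
# so the random-guessing baseline is ~10% rather than the ~25% of 4-option MMLU.
for letter, option in zip("ABCDEFGHIJ", example["options"]):
    print(f"({letter}) {option}")
print("Gold answer:", example["answer"])
```

This only shows the record layout; the full few-shot CoT evaluation pipeline is in the TIGER-AI-Lab/MMLU-Pro scripts referenced above.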