rank | model | accuracy (%) | parameters (B) | extra_training_data | paper | code | result | year | tags |
---|---|---|---|---|---|---|---|---|---|
101 | MetaMath 13B | 22.5 | 13 | Yes | MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | Yes | No | 2023 | ["fine-tuned"] |
102 | davinci-002 175B | 19.1 | 175 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | No | 2022 | [] |
103 | Branch-Train-MiX 4x7B (sampling top-2 experts) | 17.8 | null | No | Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | Yes | No | 2024 | [] |
104 | GAL 120B (5-shot) | 16.6 | 120 | No | Galactica: A Large Language Model for Science | Yes | No | 2022 | [] |
105 | LLaMA 33B-maj1@k | 15.2 | 33 | No | LLaMA: Open and Efficient Foundation Language Models | Yes | No | 2023 | ["majority voting"] |
106 | Minerva 8B | 14.1 | 8 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | No | 2022 | [] |
107 | WizardMath-13B-V1.0 | 14 | 13 | Yes | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | Yes | No | 2023 | [] |
108 | LLaMA 65B | 10.6 | 65 | No | LLaMA: Open and Efficient Foundation Language Models | Yes | No | 2023 | [] |
109 | GAL 30B (5-shot) | 12.7 | 30 | No | Galactica: A Large Language Model for Science | Yes | No | 2022 | [] |
110 | Mistral 7B (maj@4) | 13.1 | 7 | No | Mistral 7B | Yes | No | 2023 | [] |
111 | GAL 30B <work> | 11.4 | 30 | No | Galactica: A Large Language Model for Science | Yes | No | 2022 | [] |
112 | WizardMath-7B-V1.0 | 10.7 | 7 | Yes | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | Yes | No | 2023 | [] |
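
The rows follow a fixed schema (rank, model, accuracy, parameters, extra_training_data, paper, code, result, year, tags). Below is a minimal sketch of loading and filtering the leaderboard with pandas, assuming the table has been exported to a CSV file; the file name `math_leaderboard.csv` is hypothetical and not part of the dataset.

```python
import pandas as pd

# Hypothetical export of the leaderboard table shown above; the path is an assumption.
df = pd.read_csv("math_leaderboard.csv")

# Keep models fine-tuned with extra training data, sorted by accuracy (descending).
finetuned = df[df["extra_training_data"] == "Yes"].sort_values("accuracy", ascending=False)

# Show the most relevant columns.
print(finetuned[["rank", "model", "accuracy", "parameters", "year"]].to_string(index=False))
```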