Update README.md
* Update table with latest numbers and newer models
* Adds Llama-3.3-70B MATH and HE footnote
README.md CHANGED
@@ -97,17 +97,16 @@ To understand the capabilities, we compare `phi-4` with a set of models over Ope
 
 At the high-level overview of the model quality on representative benchmarks. For the table below, higher numbers indicate better performance:
 
-| **Category** | **Benchmark** | … |
+| **Category** | **Benchmark** | **phi-4** (14B) | **phi-3** (14B) | **Qwen 2.5** (14B instruct) | **GPT-4o-mini** | **Llama-3.3** (70B instruct) | **Qwen 2.5** (72B instruct) | **GPT-4o** |
 |------------------------------|---------------|-----------|-----------------|----------------------|----------------------|--------------------|-------------------|-----------------|
-| Popular Aggregated Benchmark | MMLU | … |
-| … |
-| … |
-| … |
-| Factual Knowledge | SimpleQA | … |
-| … |
-| … |
-| … |
-Overall, `phi-4` with only 14B parameters achieves a similar level of science and math capabilities as much larger models. Moreover, the model outperforms bigger models in reasoning capability, performing similarly to Llama-3.1-405B. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much world knowledge, which can be seen for example with low performance on SimpleQA. However, we believe such weakness can be resolved by augmenting `phi-4` with a search engine.
+| Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | **88.1** |
+| Science | GPQA | **56.1** | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 |
+| Math | MGSM<br>MATH | 80.6<br>**80.4** | 53.5<br>44.6 | 79.6<br>75.6 | 86.5<br>73.0 | 89.1<br>66.3* | 87.3<br>80.0 | **90.4**<br>74.6 |
+| Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9* | 80.4 | **90.6** |
+| Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | **39.4** |
+| Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | **90.2** | 76.7 | 80.9 |
+
+\* These scores are lower than those reported by Meta, perhaps because simple-evals has a strict formatting requirement that Llama models have particular trouble following. We use the simple-evals framework because it is reproducible, but Meta reports 77 for MATH and 88 for HumanEval on Llama-3.3-70B.
 
 ## Usage
 
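The footnote in the added lines refers to the strict answer-format requirement of the simple-evals harness. As a rough illustration of why a formatting requirement alone can depress scores, below is a minimal sketch of a strict, template-based grader; the `ANSWER_PATTERN` regex, the helper names, and the `Answer:` template are assumptions chosen for illustration, not simple-evals' actual implementation.

```python
import re

# Illustrative sketch only: a strict grader extracts the final answer from a
# fixed template ("Answer: ...") and gives no credit when the template is
# missing, even if the reasoning contains the correct result.
# The pattern and helper names are assumptions, not simple-evals code.
ANSWER_PATTERN = re.compile(r"(?im)^answer\s*:\s*(.+)$")


def extract_answer(completion: str) -> str | None:
    """Return the text after the last 'Answer:' line, or None if absent."""
    matches = ANSWER_PATTERN.findall(completion)
    return matches[-1].strip() if matches else None


def grade(completion: str, target: str) -> bool:
    """Strict grading: a correct answer in the wrong format scores zero."""
    answer = extract_answer(completion)
    return answer is not None and answer == target


if __name__ == "__main__":
    print(grade("The result is 42.", "42"))          # False: template not followed
    print(grade("Reasoning...\nAnswer: 42", "42"))   # True
```

Under a check like this, a completion that contains the correct result but skips the expected `Answer:` line is scored as wrong, which is the kind of effect the footnote attributes to the lower Llama-3.3 MATH and HumanEval numbers relative to Meta's reported figures.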