Achal Dave committed
Commit · 62f139c
Parent(s): 87d9f45
Update
README.md CHANGED
@@ -16,7 +16,7 @@ DCLM-1B is a 1.4 billion parameter language model trained on the DCLM-Baseline d
 Here are the evaluation results for DCLM-1B on various tasks (using [llm-foundry](https://github.com/mosaicml/llm-foundry) eval suite), compared to recently released small models on key benchmarks.
 As described in the paper, Core accuracy is the average of centered accuracy on
 22 tasks (including HellaSwag and ARC-E), Extended is centered accuracy averaged
-over 53 tasks.
+over 53 tasks.
 
 
 | Model | Params | Tokens | Open dataset? | Core | MMLU | Extended |
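As context for the Core and Extended metrics referenced in this change, here is a minimal sketch of how a centered-accuracy average could be computed. It assumes the DCLM paper's description, where centered accuracy linearly rescales raw accuracy so that random-chance performance maps to 0 and perfect accuracy to 1; the `centered_accuracy` helper, task names, and numbers below are illustrative assumptions, not the actual 22-task Core or 53-task Extended suites.

```python
# Sketch: averaging centered accuracy across tasks (illustrative only).
# Assumption: centered accuracy = (acc - chance) / (1 - chance), so that
# random guessing scores 0.0 and a perfect model scores 1.0.

def centered_accuracy(acc: float, chance: float) -> float:
    """Rescale raw accuracy so chance level -> 0.0 and perfect -> 1.0."""
    return (acc - chance) / (1.0 - chance)

# Hypothetical per-task results: (raw accuracy, random-chance baseline).
results = {
    "hellaswag": (0.72, 0.25),  # 4-way multiple choice -> 25% chance
    "arc_easy": (0.68, 0.25),   # 4-way multiple choice
    # ... the real Core suite averages 22 such tasks
}

core = sum(centered_accuracy(acc, chance)
           for acc, chance in results.values()) / len(results)
print(f"Core (mean centered accuracy): {core:.3f}")
```

The rescaling matters because the tasks have different chance baselines: averaging raw accuracies would overweight tasks where guessing already scores well, while centered accuracy puts every task on a common 0-to-1 scale.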