Tasks: Text Generation
Modalities: Text
Formats: json
Sub-tasks: language-modeling
Languages: English
Size: 10K - 100K
Update README.md
README.md CHANGED
@@ -28,8 +28,8 @@ configs:
 - data/arxiv_math.jsonl
 ---
 
-This is the dataset
-We find that LLMs’ intelligence – reflected by benchmark scores – almost **linearly** correlates with their ability to compress external text corpora. We measure intelligence along three key abilities: knowledge and commonsense, coding, and mathematical reasoning, and provide corresponding
+This is the compression corpora dataset used in the paper "Compression Represents Intelligence Linearly".
+We find that LLMs’ intelligence – reflected by benchmark scores – almost **linearly** correlates with their ability to compress external text corpora. We measure intelligence along three key abilities: knowledge and commonsense, coding, and mathematical reasoning, and provide the corresponding compression corpora here respectively named cc, python, and arxiv_math.
 
 
 ### Load the data
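The body of the "### Load the data" section is not included in this hunk. As a rough sketch only: jsonl corpora like the ones referenced above can typically be read with the Hugging Face `datasets` library. The repository id below is a placeholder (not the dataset's actual Hub id), and only the `data/arxiv_math.jsonl` path and the corpus names cc, python, and arxiv_math are taken from the diff; the README's own loading instructions may differ.

```python
# Rough sketch: loading one of the compression corpora with `datasets`.
# "your-org/compression-corpora" is a placeholder repository id, not the
# dataset's real Hub id; the jsonl path and config name come from the diff.
from datasets import load_dataset

# Load one corpus by config name from the Hub (placeholder repo id).
arxiv_math = load_dataset("your-org/compression-corpora", "arxiv_math")

# Or read a jsonl file directly from a local clone of the repository.
arxiv_math_local = load_dataset(
    "json",
    data_files="data/arxiv_math.jsonl",
    split="train",
)

print(arxiv_math_local[0])
```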