---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  splits:
  - name: train
    num_bytes: 1704309682
    num_examples: 563984
  - name: validation
    num_bytes: 53500741
    num_examples: 17478
  - name: test
    num_bytes: 52482166
    num_examples: 17511
  download_size: 1054128998
  dataset_size: 1810292589
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
language:
- en
---


This dataset contains all Wikipedia (en) documents from the `00.jsonl.zst` partition of The Pile. It was created with the following script:

```python
import json

import zstandard as zstd
from tqdm import tqdm

pile_path = "data/the_pile/train/00.jsonl.zst"

# Stream the zstd-compressed partition line by line and keep only
# the documents whose Pile subset is the English Wikipedia.
with zstd.open(pile_path, "r") as fr:
    with open("/tmp/wiki.jsonl", "w") as fw:
        for line in tqdm(fr):
            doc = json.loads(line)
            source = doc["meta"]["pile_set_name"]
            if source == "Wikipedia (en)":
                fw.write(json.dumps(doc) + "\n")
```
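For illustration, the same filtering logic can be exercised on a small in-memory sample; the records below are made up, but they mirror the Pile's line-delimited JSON layout with a `text` field and a `meta.pile_set_name` field:

```python
import json

# Hypothetical Pile-style records, one JSON object per line.
sample_lines = [
    json.dumps({"text": "An article.", "meta": {"pile_set_name": "Wikipedia (en)"}}),
    json.dumps({"text": "Some code.", "meta": {"pile_set_name": "Github"}}),
    json.dumps({"text": "Another article.", "meta": {"pile_set_name": "Wikipedia (en)"}}),
]

# Keep only documents whose source is the English Wikipedia subset.
wiki_docs = [
    doc for doc in map(json.loads, sample_lines)
    if doc["meta"]["pile_set_name"] == "Wikipedia (en)"
]
print(len(wiki_docs))  # 2
```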

The validation and test splits are the full official Pile validation and test releases, not filtered subsets of a single partition.