---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  splits:
  - name: train
    num_bytes: 1704309682
    num_examples: 563984
  - name: validation
    num_bytes: 53500741
    num_examples: 17478
  - name: test
    num_bytes: 52482166
    num_examples: 17511
  download_size: 1054128998
  dataset_size: 1810292589
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
language:
- en
---
This dataset contains all Wikipedia (en) documents from the `00.jsonl.zst` partition of The Pile's training set. It was created with the following script:
```python
import json

import zstandard as zstd
from tqdm import tqdm

pile_path = "data/the_pile/train/00.jsonl.zst"

# Stream the zstd-compressed Pile shard and keep only the Wikipedia (en) documents.
with zstd.open(pile_path, 'r') as fr:
    with open("/tmp/wiki.jsonl", "w") as fw:
        for line in tqdm(fr):
            doc = json.loads(line)
            source = doc['meta']['pile_set_name']
            if source == "Wikipedia (en)":
                fw.write(json.dumps(doc) + "\n")
```
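
As a quick sanity check (a sketch assuming the output path used above), the number of extracted documents can be counted and compared against the train split size declared in the metadata:

```python
# Count the JSON lines written by the extraction script above.
with open("/tmp/wiki.jsonl") as f:
    n_docs = sum(1 for _ in f)

print(n_docs)  # expected to match num_examples for the train split: 563984
```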
The validation and test splits were built the same way from the full official Pile validation and test releases, rather than from a single partition.
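
The splits defined in the configuration above can be loaded with the `datasets` library; the repository id below is a placeholder and should be replaced with this dataset's actual path on the Hugging Face Hub:

```python
from datasets import load_dataset

# "user/pile-00-wikipedia" is a placeholder id; use this dataset's actual Hub path.
ds = load_dataset("user/pile-00-wikipedia")

print(ds)  # DatasetDict with train, validation, and test splits
example = ds["train"][0]
print(example["meta"]["pile_set_name"])  # "Wikipedia (en)"
print(example["text"][:200])             # first 200 characters of the article
```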