---
dataset_info:
  features:
    - name: tokens
      dtype: int64
  splits:
    - name: train
      num_bytes: 77760409152
      num_examples: 9720051144
  download_size: 31455581823
  dataset_size: 77760409152
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: odc-by
task_categories:
  - fill-mask
  - text-generation
language:
  - en
pretty_name: FineWeb EDU 10BT Tokenized (BERT)
---

# fw-bert-tokenized-flattened

A tokenized and flattened version of the 10 billion token (10BT) sample of https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu, tokenized with the bert-base-uncased tokenizer. In practice it is one long array of token IDs, with each document separated by a [SEP] token.
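
Since each row of the `train` split holds a single int64 token ID, one way to consume the dataset is to stream the flattened array and split it on [SEP]. The sketch below is a minimal example, assuming the repo id is `Hibiki711/fw-bert-tokenized-flattened` (adjust to the actual path); it reconstructs and decodes the first document only.

```python
# Minimal sketch: stream the flattened token array and decode the first document.
# Assumes the repo id Hibiki711/fw-bert-tokenized-flattened; not an official snippet.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("Hibiki711/fw-bert-tokenized-flattened", split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sep_id = tokenizer.sep_token_id  # documents in the flat stream are separated by [SEP]

# Collect token IDs until the first [SEP] marks the end of the first document.
doc_ids = []
for row in ds:
    token_id = row["tokens"]  # each example is one int64 token ID
    if token_id == sep_id:
        break
    doc_ids.append(token_id)

print(tokenizer.decode(doc_ids))
```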