sample #4
opened by KnutJaegersberg

Can you also upload a proper random sample (drawn from across all files)? Something small, just for analysis.

Essential AI org

Hey @KnutJaegersberg ! Once this dataset is done uploading, we are planning on uploading a 1T token random sample that is partitioned by Free Decimal Correspondence level 2. We are happy to upload a 100B token (or smaller) random sample if it would be useful.

Hi @Research-EAI, it would be super beneficial to have samples at 1B, 10B, 100B, and 1T tokens, perhaps as separate datasets alongside this one.

It seems the size of this dataset causes issues with the dataset viewer, etc. :)

I'd like smaller samples as well. 10B is a handy size to peek into.

Thank you @sumuks, the sample datasets are good. Any details on how exactly you sampled them?

hey @codelion - these were randomly sampled across the various Common Crawl snapshots available in Essential Web. they’re roughly evenly distributed across the years. i’ll add the code to the repo when i get the chance

@sumuks thanks for replying. I was asking since I am looking to sample various pretraining datasets at various sizes to compare them (like FineWeb and DCLM), and was looking for the best approach that doesn't require processing the full dataset but still preserves the distribution of instance lengths.

unfortunately, i didn’t take instance length into account, just the temporality, by sampling different Parquet files at random inside each snapshot
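
in the meantime, here's a rough, untested sketch of that kind of snapshot-level shard sampling. the repo id, the one-directory-per-snapshot layout, and the `token_count` column are placeholders/assumptions, not the actual Essential Web schema:

```python
# Rough sketch: randomly pick Parquet shards within each Common Crawl snapshot
# until a per-snapshot token budget is reached. Repo id, path layout, and the
# "token_count" column are assumptions, not the real Essential Web schema.
import random
from collections import defaultdict

import pandas as pd
from huggingface_hub import HfApi

REPO_ID = "EssentialAI/essential-web-v1.0"  # placeholder repo id
TOKENS_PER_SNAPSHOT = 100_000_000           # rough per-snapshot budget

api = HfApi()
parquet_files = [
    f for f in api.list_repo_files(REPO_ID, repo_type="dataset")
    if f.endswith(".parquet")
]

# Group shards by snapshot, assuming paths like "data/CC-MAIN-2023-14/shard-00001.parquet".
shards_by_snapshot = defaultdict(list)
for path in parquet_files:
    snapshot = path.split("/")[1]
    shards_by_snapshot[snapshot].append(path)

frames = []
for snapshot, shards in shards_by_snapshot.items():
    random.shuffle(shards)  # random order of shards within the snapshot
    tokens_so_far = 0
    for shard in shards:
        # pandas can read directly from the Hub via the hf:// filesystem
        df = pd.read_parquet(f"hf://datasets/{REPO_ID}/{shard}")
        frames.append(df)
        tokens_so_far += int(df["token_count"].sum())  # assumed column name
        if tokens_so_far >= TOKENS_PER_SNAPSHOT:
            break

sample = pd.concat(frames, ignore_index=True)
sample.to_parquet("essential-web-random-sample.parquet")
```

note this samples whole shards rather than individual rows, so it keeps the temporal spread but not a per-row length distribution.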

@sumuks I have released the datasets I sampled and added yours as well to a collection here - https://huggingface.co/collections/codelion/pre-training-dataset-samples-686bd760abf1a43b0ce32829
