|
---
configs:
- config_name: default
  data_files:
  - split: train
    path: "*.parquet"
license: odc-by
---
|
# Gemstones Training Dataset - Parallel workers sharded version
|
|
|
This data is a reprocessed version of the first 1B rows of the Dolma v1.7 dataset (https://huggingface.co/datasets/allenai/dolma). |
|
|
|
The data is encoded using the Pythia tokenizer: https://huggingface.co/EleutherAI/pythia-160m |
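
Token ids can be mapped back to text with that tokenizer, e.g. (a minimal sketch; `example_ids` is just a stand-in for real row contents):

```python
from transformers import AutoTokenizer

# The token ids in this dataset were produced with the Pythia tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")

# Decode any sequence of token ids from the parquet rows back into text.
example_ids = [0, 1, 2]  # stand-in for a row's token ids
print(tokenizer.decode(example_ids))
```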
|
|
|
**Disclaimer:** this is an approximation of the dataset used to train the Gemstones model suite.
Due to the randomized and sharded nature of the distributed training code, the only way to perfectly
reproduce the training batches across the GPUs is to run the training code itself.
This repo is the result of an attempt to simulate the way in which the training code loaded the data
and to stream it out to a portable file format for use in downstream analyses of the model suite.
|
|
|
# Loading |
|
|
|
This data should be loadable using `load_dataset` in the standard manner to auto-download the data.
Alternatively, the dataset can be cloned using git to materialize the files locally, and then loaded
using the default `parquet` builder as described here: https://huggingface.co/docs/datasets/en/loading#parquet
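
For example (a minimal sketch; the Hub repo id below is a placeholder for this dataset's actual id):

```python
from datasets import load_dataset

# Download directly from the Hub (replace the placeholder with this repo's dataset id).
ds = load_dataset("<org>/<this-dataset-repo>", split="train")

# Or, after cloning the repo locally (git + git-lfs), point the generic parquet
# builder at the shard files:
ds_local = load_dataset("parquet", data_files="path/to/local/clone/*.parquet", split="train")
```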
|
|
|
# Sharding format: worker parallel |
|
|
|
This version of the dataset approximates the specific subsets of the data that each of the distributed
workers (GPUs) would have individually loaded and passed through its local copy of the model during
data-parallel training. Since the Gemstones suite of models was trained on a variety of topologies
(the 50M models were trained on 8 nodes while the 2B models used 64 nodes), the distributed reading
format was chosen such that different topologies would read the data in similar orders.
|
|
|
Specifically, a round-robin reading order ensured that, while each worker in an 8-node job would be responsible for more
data than each worker in a larger 64-node job, the first files read by the smaller
configuration would be the same as the first files read by the workers in the larger configuration.
E.g., if workers `1` and `2` in a 2-worker job received files `[A,B]` and `[C,D]`, then workers `1`, `2`, `3`, and `4` in a larger 4-worker job would receive files `[A]`, `[B]`, `[C]`, and `[D]` respectively. This way, all models were periodically guaranteed to
have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration: 64 nodes = 512 GPUs, each loading 4 raw files at a time, with each file containing 2048 x 2049 = ~4M tokens, so synchronization occurs every 512 x 4 x 2048 x 2049 = ~8.6B tokens.
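
As a quick check of that arithmetic (a small sketch using only the constants quoted above):

```python
# Sync granularity for the largest training configuration described above.
num_gpus = 64 * 8              # 64 nodes x 8 GPUs per node = 512 workers
files_per_read = 4             # each worker loads 4 raw files at a time
tokens_per_file = 2048 * 2049  # ~4.2M tokens per raw file

print(num_gpus * files_per_read * tokens_per_file)  # 8594128896 -> ~8.6B tokens
```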
|
|
|
This recreation assumes the ~1B Gemstones model sizes, which were trained on 32 nodes * 8 GPUs per node = 256 worker shards
at a microbatch size of 8 over packed sequences of 2048 tokens.
They were trained for 82998 steps at a batch size of ~4M tokens to reach ~350B tokens.
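
The implied token counts (a quick sanity check, not figures taken from the training logs):

```python
num_workers = 32 * 8   # 256 data-parallel worker shards
microbatch = 8         # sequences per worker per step
seq_len = 2048         # packed sequence length in tokens

tokens_per_step = num_workers * microbatch * seq_len
print(tokens_per_step)          # 4194304 -> ~4M tokens per batch
print(tokens_per_step * 82998)  # 348118843392 -> ~350B tokens total
```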
|
|
|
The 256 workers each received a slice of the total dataset represented by a subset of
the thousands of raw training-format files (for reference, this format is defined by the `packed_cycle_dataset.py` file in this repo).
The raw files were first shuffled globally, and then each worker's slice was defined by this round-robin
strided indexing of the shuffled file list: `filenames[shard_id:max_num_files:num_shards]`. Then, each worker
loaded 4 files at a time and shuffled the "blocks" of 2048 tokens each in a temporary buffer so
that the contents of the 4 packed files were not read in the exact order in which the tokens appeared in them.
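
A simplified sketch of that per-worker read pattern (illustrative only; the real logic lives in `packed_cycle_dataset.py`, and the toy inputs below are made up for the example):

```python
import random

def worker_block_order(files, shard_id, num_shards, files_per_buffer=4, block_size=2048, seed=0):
    """Yield token blocks roughly in the order one data-parallel worker would read them.

    `files` stands in for the globally shuffled list of raw files, represented here
    as plain lists of token ids.
    """
    rng = random.Random(seed)
    my_files = files[shard_id:len(files):num_shards]  # round-robin strided slice for this worker
    for start in range(0, len(my_files), files_per_buffer):
        buffer = []
        for tokens in my_files[start:start + files_per_buffer]:
            # Split each packed file into fixed-size blocks of `block_size` tokens.
            buffer.extend(tokens[i:i + block_size] for i in range(0, len(tokens), block_size))
        rng.shuffle(buffer)  # shuffle blocks within the 4-file buffer
        yield from buffer

# Toy usage: 8 "files" of 4 tokens each, 2 workers, blocks of 2 tokens.
toy_files = [list(range(i * 4, (i + 1) * 4)) for i in range(8)]
blocks = list(worker_block_order(toy_files, shard_id=0, num_shards=2, files_per_buffer=4, block_size=2))
```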
|
|
|
The `train_mock_data_order_file.py` script uses a pool of CPU workers
to mimic a distributed set of GPUs, and passes their process ids into the dataset implementation
so that each worker in the pool receives its subset of the data and loads it as it would have during training.
Then, the subsets of data are wrapped in dataloaders and read in microbatches before being written out
to the parquet file format.
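
In outline, that process looks roughly like the following (a minimal sketch, not the actual script; the row schema and the helper `iter_microbatches` are assumptions made for illustration):

```python
import multiprocessing as mp
import pandas as pd

NUM_SHARDS = 256  # 32 nodes x 8 GPUs per node

def iter_microbatches(shard_id, num_shards):
    # Placeholder for the real dataset/dataloader logic: yield microbatches of
    # token-id sequences in the order this worker would have seen them.
    yield [[0] * 2048 for _ in range(8)]

def write_shard(shard_id):
    rows = []
    for step, microbatch in enumerate(iter_microbatches(shard_id, NUM_SHARDS)):
        for seq in microbatch:
            rows.append({"step": step, "input_ids": seq})
    out = f"worker_{shard_id}-of-{NUM_SHARDS}_ordered_dataset.parquet"
    pd.DataFrame(rows).to_parquet(out)

if __name__ == "__main__":
    with mp.Pool(processes=8) as pool:
        pool.map(write_shard, range(NUM_SHARDS))
```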
|
|
|
Each shard, named like `worker_{worker_rank}-of-{total_num_workers}_ordered_dataset.parquet`, represents the ordered microbatches that one of the 256 GPUs would
have drawn and passed through its copy of the model during training.
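
To inspect a single worker's data order, one shard can be read on its own (the concrete filename below assumes zero-indexed worker ranks; adjust it to match the actual files in the repo):

```python
import pandas as pd

# Ordered microbatches drawn by one GPU, following the naming pattern
# worker_{worker_rank}-of-{total_num_workers}_ordered_dataset.parquet.
shard = pd.read_parquet("worker_0-of-256_ordered_dataset.parquet")
print(shard.head())
```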
|
|