configs:
  - split: train
    path: "*.parquet"
---

Gemstones Training Dataset - Linearized version

This data is a reprocessed version of the first 1B rows of the Dolma v1.7 dataset (https://huggingface.co/datasets/allenai/dolma).

**Disclaimer:** this is an approximation of the dataset used to train the Gemstones model suite.
Due to the randomized and sharded nature of the distributed training code, the only way to perfectly
reproduce the training batches across the GPUs is/was to run the training code.
This repo is the result of an attempt to simulate the way in which the training code loaded the data and
to stream it out to a portable file format for use in downstream analyses of the model suite.

# Sharding format: worker parallel

This version of the dataset approximates the order of the dataset _as if_ a model was being trained
on a single GPU without data parallelism. In reality, specific subsets of the data were loaded by the distributed
workers (GPUs) and passed through a local copy of the model during
data-parallel training.
Since the Gemstones suite of models was trained on a variety of topologies
(the 50M models were trained on 8 nodes while the 2B models used 64 nodes), the distributed reading
format was chosen such that different topologies would read the data in similar orders.

Specifically, a round-robin reading order ensured that while the workers in an 8-node job would each be responsible for more
data than individual workers in a larger 64-node configuration, the first files read by the smaller
configuration would be the same as the first files read by the workers in the larger configuration.
E.g. if workers `1` and `2` in a 2-worker job got files `[A,B]` and `[C,D]`, then workers `1`, `2`, `3`, and `4` in a larger 4-worker job would receive files `[A]`, `[B]`, `[C]`, and `[D]` respectively. This way, periodically, all models would be guaranteed to
have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration: 64 nodes = 512 GPUs, each loading 4 raw files at a time with each file containing 2048*2049 ≈ 4M tokens, means synchronization every 512*4*2048*2049 ≈ 8.6B tokens.
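
To make the assignment pattern concrete, here is a minimal, illustrative sketch (the actual logic lives in the distributed loader of the training code; the file names, helper name, and worker counts are made up for the example):

```python
# Illustrative sketch only -- the real assignment is performed by the
# distributed data loader in the Gemstones training code.

def assign_files(files, num_workers):
    """Split the globally shuffled file list into contiguous chunks, one per worker."""
    per_worker = len(files) // num_workers
    return [files[w * per_worker:(w + 1) * per_worker] for w in range(num_workers)]

files = ["A", "B", "C", "D"]
print(assign_files(files, 2))  # [['A', 'B'], ['C', 'D']]     (2-worker job)
print(assign_files(files, 4))  # [['A'], ['B'], ['C'], ['D']]  (4-worker job)

# Once each worker in the 2-worker job has read 2 files, both jobs have consumed
# the same set {A, B, C, D}; the sync granularity is set by the largest configuration:
print(512 * 4 * 2048 * 2049)  # 8,594,128,896 tokens, i.e. ~8.6B per sync interval
```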

This linearized recreation assumes a single worker is reading every row of the dataset, and so at a microbatch size of 8 over packed sequences of 2048 tokens, 21,247,488 steps worth of "training" are required to reach ~350B tokens.
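
That token count can be sanity-checked directly:

```python
microbatch_size = 8      # packed sequences per microbatch step
sequence_length = 2048   # tokens per packed sequence
steps = 21_247_488
print(steps * microbatch_size * sequence_length)  # 348,118,843,392 ≈ 350B tokens
```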

At runtime, the single worker receives the total dataset represented by the thousands of raw training-format files (for reference, this format is defined by the `packed_cycle_dataset.py` file in this repo).
The raw files were first shuffled globally, and then the single worker
loaded 4 files at a time and shuffled the "blocks" of 2048 tokens each in a temporary buffer, so
that the contents of the 4 packed files were not read in the exact order in which the tokens appeared in them.
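
A rough sketch of that buffer-and-shuffle step (simplified; `read_blocks` is a hypothetical helper standing in for the raw-file reader, which is actually defined in `packed_cycle_dataset.py`):

```python
import random

FILES_PER_LOAD = 4  # raw files buffered together before their blocks are shuffled

def linearized_block_order(shuffled_files, read_blocks, seed=0):
    """Yield 2048-token blocks in the order the simulated single worker sees them.

    `shuffled_files` is the globally shuffled list of raw file paths;
    `read_blocks(path)` (hypothetical helper) returns a file's blocks in on-disk order.
    """
    rng = random.Random(seed)
    for start in range(0, len(shuffled_files), FILES_PER_LOAD):
        buffer = []
        for path in shuffled_files[start:start + FILES_PER_LOAD]:
            buffer.extend(read_blocks(path))
        rng.shuffle(buffer)  # re-order blocks within this 4-file group
        yield from buffer
```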

**Note**: the fact that a single worker receives all files in this version means that the sets of 4 files loaded at
a time, whose contents (blocks of tokens) are read in a shuffled order, do not exactly match those seen by any one of the
Gemstones models. However, the key is that the synchronization argument above still holds, and so analyses at a coarser granularity than ~8.6B tokens should be sound.

The `train_mock_data_order_file.py` script performs these operations and writes the resulting data order out to files named like
`ordered_dataset_shard_{shard}-of-{total_shards}.parquet`, where the total number of shards is arbitrary but chosen to be 256 for
portability.
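
As a rough illustration of that output step (the column name and the chunking are assumptions made for the example, not necessarily the script's actual schema):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Illustrative only: "input_ids" and equal-sized chunks are assumptions,
# not necessarily what train_mock_data_order_file.py actually writes.
def write_shards(blocks, total_shards=256):
    """Write a linearized stream of token blocks out as `total_shards` parquet files."""
    per_shard = (len(blocks) + total_shards - 1) // total_shards
    for shard in range(total_shards):
        chunk = blocks[shard * per_shard:(shard + 1) * per_shard]
        table = pa.table({"input_ids": chunk})
        pq.write_table(table, f"ordered_dataset_shard_{shard}-of-{total_shards}.parquet")
```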

# Loading

This data should be loadable using `load_dataset` in the standard manner to auto-download the data.
Alternatively, the dataset can be cloned using git to materialize the files locally and then loaded
using the default `parquet` builder as described here: https://huggingface.co/docs/datasets/en/loading#parquet
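
For example (`<repo_id>` is a placeholder for this dataset's Hub id, and the local path for wherever the repo was cloned):

```python
from datasets import load_dataset

# Download/stream the data directly from the Hub.
ds = load_dataset("<repo_id>", split="train")

# Or point the generic parquet builder at a local git clone of this repo.
ds_local = load_dataset("parquet", data_files="path/to/local/clone/*.parquet", split="train")

print(ds_local)
```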