Datasets
Formats: parquet
Libraries: Datasets, Dask
jwkirchenbauer committed (verified) · Commit c1aa036 · 1 Parent(s): 2216ead

Upload README.md with huggingface_hub

Files changed (1): README.md (+1 -1)

README.md CHANGED
@@ -31,7 +31,7 @@ Specifically, a round-robin reading order ensured that while an 8 node set of wo
  data than individual workers in a larger 64 node configuration, the first files read by the smaller
  configuration would be the same as the first files read by the workers in the larger configuration.
  Eg. if workers `1` and `2` in a 2 worker job got files `[A,B]` and `[C,D]`, then workers `1`, `2`, `3`, and `4` in a larger 4 worker job would receive files `[A]`, `[B]`, `[C]`, `[D]` respectively. This way, periodically, all models would be guaranteed to
- have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration, so 64 nodes = 512 gpus, loading 4 raw files at a time each containing 2048*2049=~4M tokens, means synchronization every 512*4*2048*2049 = ~8.6B tokens.
+ have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration, so 64 nodes = 512 gpus, loading 4 raw files at a time each containing 2048 x 2049 = ~4M tokens, means synchronization every 512 x 4 x 2048 x 2049 = ~8.6B tokens.
 
  This linearized recreation assumes a single worker is reading every row of the dataset and so at a microbatch size of 8 over packed sequences of 2048 tokens, 21247488 steps worth of "training" is required to reach ~350B tokens.
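
The quoted README text describes a per-round file assignment and two pieces of token arithmetic. Below is a minimal Python sketch (not part of this commit or the repo's loader) that reproduces the worked `[A,B]`/`[C,D]` example and checks the quoted numbers; the function and variable names are hypothetical, introduced only for illustration.

```python
# Hypothetical sketch of the assignment described in the README diff: each
# round of files is split into contiguous per-worker slices, so once a round
# is fully consumed, every worker configuration has seen the same files.

def assign_round(files_in_round, num_workers):
    """Split one round of files into contiguous per-worker slices."""
    per_worker = len(files_in_round) // num_workers
    return [files_in_round[w * per_worker:(w + 1) * per_worker]
            for w in range(num_workers)]

round_files = ["A", "B", "C", "D"]
print(assign_round(round_files, num_workers=2))  # [['A', 'B'], ['C', 'D']]
print(assign_round(round_files, num_workers=4))  # [['A'], ['B'], ['C'], ['D']]

# Arithmetic quoted in the diff: 64 nodes = 512 GPUs, 4 raw files per GPU per
# round, each file holding 2048 * 2049 tokens.
tokens_per_file = 2048 * 2049            # ~4.2M tokens per raw file
sync_tokens = 512 * 4 * tokens_per_file  # ~8.6B tokens between sync points
steps_to_350B = 21_247_488               # step count from the README
tokens_seen = steps_to_350B * 8 * 2048   # microbatch 8 x 2048-token sequences
print(f"{sync_tokens / 1e9:.2f}B tokens per sync, "
      f"{tokens_seen / 1e9:.1f}B tokens total")
```

Running the sketch prints roughly 8.59B tokens per sync round and about 348B tokens total, matching the ~8.6B and ~350B figures in the diff.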