Formats: parquet · Libraries: Datasets, Dask
jwkirchenbauer committed · verified · Commit d4a29ba · 1 Parent(s): a5e4388

Upload README.md with huggingface_hub

Files changed (1): README.md (+2 -2)

README.md CHANGED
@@ -5,7 +5,7 @@ configs:
 - split: train
   path: "*.parquet"
 ---
-Gemstones Training Dataset - Linearized version
+Gemstones Training Dataset - Sequential version
 
 This data is a reprocessed version of the first 1B rows of the Dolma v1.7 dataset (https://huggingface.co/datasets/allenai/dolma).
 
@@ -15,7 +15,7 @@ reproduce the training batches across the gpus is/was to run the training code.
 This repo is the result of an attempt to simulate the way in which the training code loaded the data and
 stream it out to a portable file format for use in downstream analyses of the model suite.
 
-# Sharding format: worker parallel
+# Sharding format: sequential
 
 This version of the dataset approximates the order of the dataset _as if_ a model was being trained
 on a single gpu without data parallelism. In reality, specific subsets of the data were loaded by the distributed
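
For context on how these shards are consumed downstream: the `configs` block in the README maps the `train` split to every `*.parquet` file, so the standard `datasets` loader can read (or stream) the shards as one split. A minimal sketch, assuming a hypothetical repo id (substitute this dataset's actual id):

```python
from datasets import load_dataset

# Hypothetical repo id for illustration; replace with this dataset's actual id.
REPO_ID = "tomg-group-umd/gemstones-training-data"

# The README config maps the "train" split to "*.parquet", so all shards
# load as one split; streaming avoids downloading every row up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

# Peek at the first few rows without materializing the dataset.
for row in ds.take(3):
    print(row)
```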
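To make the "sharding format" terminology concrete, the sketch below (an illustrative assumption, not the actual Gemstones training loader) shows how a simple rank-interleaved distributed loader partitions rows across workers, versus the single-gpu order that this sequential version approximates:

```python
def worker_view(rows, rank, world_size):
    """Rows one data-parallel worker reads under simple rank-interleaved sharding.

    This slicing scheme is an assumption for illustration; the real training
    code may partition differently.
    """
    return rows[rank::world_size]

rows = list(range(12))          # stand-in for global row indices
print(worker_view(rows, 0, 1))  # single gpu / sequential: [0, 1, 2, ..., 11]
print(worker_view(rows, 0, 4))  # worker 0 of 4: [0, 4, 8]
print(worker_view(rows, 2, 4))  # worker 2 of 4: [2, 6, 10]
```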