ljb121002 committed 784f282 (1 parent: dff553e): Update README.md

Files changed (1): README.md (+2 -2)
This release integrates the entire data sequence utilized in the CrystalCoder training.

## Stage 1
During this initial stage, half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is utilized, equivalent to approximately 345 billion tokens.

## Stage 2
In the second stage, the remaining half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is employed, along with two epochs of [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). For the StarCoder data, we apply [FIM augmentation](https://arxiv.org/abs/2207.14255) with an FIM rate of 0.9 and an SPM rate of 0.5. The total token count for this stage is 0.5 × 690B + 2 × 291B, i.e., 927 billion tokens.
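Document-level FIM augmentation with an FIM rate and an SPM rate can be sketched as below. This is an illustrative assumption, not the exact CrystalCoder pipeline: real implementations operate on token IDs with dedicated sentinel tokens, and the sentinel names and the precise SPM layout vary between implementations.

```python
import random

FIM_RATE = 0.9   # fraction of documents receiving FIM augmentation (Stage 2)
SPM_RATE = 0.5   # among FIM'd documents, fraction using the SPM layout

def fim_augment(doc: str, rng: random.Random) -> str:
    """Character-level sketch of fill-in-the-middle augmentation.

    Splits a document into (prefix, middle, suffix) and rearranges it so
    the model learns to predict the middle from bidirectional context.
    Sentinel strings here are illustrative placeholders.
    """
    if rng.random() >= FIM_RATE or len(doc) < 3:
        return doc  # leave the document in ordinary left-to-right order
    # Pick two cut points, splitting the document into prefix/middle/suffix.
    i, j = sorted(rng.sample(range(1, len(doc)), 2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    if rng.random() < SPM_RATE:
        # SPM layout: the suffix is presented before the prefix.
        return f"<fim_prefix><fim_suffix>{suffix}<fim_middle>{prefix}{middle}"
    # PSM layout: prefix, then suffix, then the middle to be predicted.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"

rng = random.Random(0)
print(fim_augment("def add(a, b):\n    return a + b\n", rng))
```

With an FIM rate of 0.9, roughly nine in ten documents are rearranged this way; the SPM rate of 0.5 then splits those evenly between the two layouts.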
## Stage 3
The third stage reuses Python and web-related data from the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata), including HTML, CSS, and JavaScript. This data is trained on for three epochs, with FIM applied at a rate of 0.3 alongside an SPM rate of 0.5. The total token count for this stage is 100 billion. Additionally, a small portion of the SlimPajama dataset, excluding the GitHub part, is also reused, contributing around 10 billion tokens.

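The per-stage budgets quoted above can be tallied with a few lines of arithmetic (figures in billions of tokens, taken directly from the text; the variable names are ours):

```python
SLIMPAJAMA_B = 690  # approximate token count of the full SlimPajama corpus
STARCODER_B = 291   # approximate token count of one StarCoder epoch

stage1 = 0.5 * SLIMPAJAMA_B                    # half of SlimPajama
stage2 = 0.5 * SLIMPAJAMA_B + 2 * STARCODER_B  # other half + two StarCoder epochs
stage3 = 100 + 10                              # reused StarCoder subset + SlimPajama slice

print(stage1, stage2, stage3)  # 345.0 927.0 110
```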
  ### Instruction tuning