hunterhector committed
Commit d0d232a · 1 Parent(s): e703a42

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ In the second stage, the remaining half of the [SlimPajama data](https://hugging
 ## Stage 3
 The third stage involves reusing Python and web-related data from the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata), including HTML, CSS, and JavaScript. This data is utilized for training over three epochs, with the application of FIM at a rate of 0.3 alongside an SPM rate of 0.5. The total token count for this stage is 100 billion. Additionally, a small portion of the SlimPajama dataset, excluding the Github part, is also reused, contributing around 10 billion tokens.
 
-### Instruction tuning
+### Instruction tuning (Stage 3a)
 
 To enhance the model's proficiency in real chat scenarios, we utilize a diverse set of instruction tuning datasets, totaling approximately 1 billion tokens. Specifically, our data include [OASST1-guanaco](https://huggingface.co/datasets/openaccess-ai-collective/oasst1-guanaco-extended-sharegpt), [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), [ShareGPT_V4.3](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered), [Evol-ShareGPT](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k), [CodeAlpaca](https://huggingface.co/datasets/lucasmccabe-lmi/CodeAlpaca-20k), [Rosetta Code](https://github.com/sahil280114/codealpaca/blob/master/data/rosetta_alpaca.json), [Evol-CodeAlpaca 1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), [Evol-CodeAlpaca 2](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1), and a self-generated dataset centered on website creation through the [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) pipeline. We will release the full dataset soon.
 
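
For readers unfamiliar with the Stage 3 preprocessing described in the diff above: applying FIM at a rate of 0.3 with an SPM rate of 0.5 means roughly 30% of documents are rearranged for fill-in-the-middle training, and about half of those use the suffix-prefix-middle ordering instead of prefix-suffix-middle. The sketch below is illustrative only; the sentinel strings, the `apply_fim` helper, the character-level splitting, and the exact SPM ordering are assumptions rather than the project's actual pipeline, which would operate on token sequences.

```python
import random

# Placeholder sentinels; real training uses dedicated special tokens (assumption).
FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX = "<fim_prefix>", "<fim_middle>", "<fim_suffix>"


def apply_fim(doc: str, fim_rate: float = 0.3, spm_rate: float = 0.5) -> str:
    """With probability `fim_rate`, split `doc` into (prefix, middle, suffix)
    at two random cut points and rearrange it for fill-in-the-middle training.
    Among transformed documents, use SPM ordering with probability `spm_rate`,
    otherwise PSM (prefix-suffix-middle)."""
    if len(doc) < 3 or random.random() >= fim_rate:
        return doc  # keep the document in plain left-to-right order

    i, j = sorted(random.sample(range(1, len(doc)), 2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]

    if random.random() < spm_rate:
        # SPM variant: both sentinels lead, suffix comes before prefix and middle.
        return f"{FIM_PREFIX}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{prefix}{middle}"
    # PSM: prefix, then suffix, then the middle segment to be predicted.
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"
```

The 0.3 and 0.5 defaults mirror the rates quoted in the README; in a real pipeline the transformation would be applied per training example before sequence packing.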
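The instruction tuning mixture above combines Alpaca-style records (e.g. CodeAlpaca, the Evol-CodeAlpaca sets) with ShareGPT-style conversations (e.g. ShareGPT_V4.3, SlimOrca). A minimal sketch of normalizing both shapes into prompt/response pairs follows; the field names assume the conventional Alpaca (`instruction`/`input`/`output`) and ShareGPT (`conversations` with `from`/`value`) schemas, and the helper functions are hypothetical, so individual datasets may differ.

```python
from typing import Dict, Iterable, Tuple


def alpaca_to_pairs(record: Dict) -> Iterable[Tuple[str, str]]:
    """Alpaca-style record: "instruction", optional "input", "output" (assumed schema)."""
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n\n" + record["input"]
    yield prompt, record["output"]


def sharegpt_to_pairs(record: Dict) -> Iterable[Tuple[str, str]]:
    """ShareGPT-style record: {"conversations": [{"from": ..., "value": ...}]} (assumed schema)."""
    turns = [t for t in record["conversations"] if t.get("from") != "system"]
    # Pair consecutive (human, assistant) turns.
    for user, reply in zip(turns[::2], turns[1::2]):
        if user.get("from") in ("human", "user") and reply.get("from") in ("gpt", "assistant"):
            yield user["value"], reply["value"]


# Example with an in-memory record; real data would come from the linked datasets.
pairs = list(alpaca_to_pairs({
    "instruction": "Write a function that reverses a string.",
    "input": "",
    "output": "def rev(s):\n    return s[::-1]",
}))
```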