Update README.md
README.md
CHANGED
@@ -1,245 +1,61 @@
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---

# MPT-30B

MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).

### How is this model different?

MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens, like [LLaMA](https://arxiv.org/abs/2302.13971), vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)).
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry).

### Models finetuned off MPT-30B:

The following models are finetuned on MPT-30B:

* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for short-form instruction following.
Built by finetuning MPT-30B on several carefully curated datasets.
  * License: _CC-By-NC-SA-3.0_

* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai), [GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot), and some generated datasets.
  * License: _CC-By-NC-SA-4.0_
  * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)

## Model Date

June 22, 2023

## Model License

Apache-2.0

## Documentation

* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-30b',
    trust_remote_code=True
)
```

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:

```python
import torch
import transformers

name = 'mosaicml/mpt-30b'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # change this to use triton-based FlashAttention
config.init_device = 'cuda:0'  # for fast initialization directly on GPU

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # load model weights in bfloat16
    trust_remote_code=True
)
```

The model was trained initially with a sequence length of 4096, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-30b'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # (input + output) tokens can now be up to 16384

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True
)
```

This model was trained with the MPT-30B tokenizer, which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```

The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch
from transformers import pipeline

# model and tokenizer are loaded as shown above
with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |

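These values can be read back from the checkpoint's config; a minimal sketch, assuming the attribute names used by the MPT config code that ships with the checkpoint (`n_layers`, `n_heads`, `d_model`, `max_seq_len`):

```python
import transformers

# Inspect the architecture hyperparameters from the hosted config.
config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-30b', trust_remote_code=True)
print(config.n_layers, config.n_heads, config.d_model, config.vocab_size, config.max_seq_len)
# expected to match the table above: 48 64 7168 50432 8192
```
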
## Training Data

### Streaming Datasets

Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.

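For illustration, a minimal StreamingDataset sketch; the bucket path and cache directory below are placeholders, and the data is assumed to already be converted to MDS shards:

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Shards are fetched lazily from object storage and cached locally, so training
# can start immediately and resume deterministically from any sample.
dataset = StreamingDataset(
    remote='s3://my-bucket/my-mds-dataset',  # placeholder bucket path
    local='/tmp/mds-cache',
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=8)
```
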
### Data Mix

The model was trained for 1T tokens on the following data mix:

| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B | 0.68 |

Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.

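A rough sketch of that batch-construction logic (the sequence stream below is a stand-in for the real shuffled datasets, and the weights mirror a few of the proportions above):

```python
import random

# Stand-in mixing weights for a few of the sources listed above.
weights = {'mc4_en': 0.335, 'c4_semdedup': 0.299, 'rpj_cc': 0.085, 'the_stack': 0.100}

def draw_sequence(source):
    # Stand-in: pretend this returns the next tokenized sequence from `source`.
    return [random.randrange(50432) for _ in range(random.randint(200, 1200))]

def next_example(seq_len=2048):
    # Pick one source with the specified probability, then concatenate as many
    # of its sequences as needed to fill the full sequence length.
    source = random.choices(list(weights), weights=list(weights.values()))[0]
    tokens = []
    while len(tokens) < seq_len:
        tokens.extend(draw_sequence(source))
    return tokens[:seq_len]
```
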
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile).
(2) It applies consistent space delimitation, unlike the GPT-2 tokenizer, which tokenizes inconsistently depending on the presence of prefix spaces.
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.

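Point (3) is easy to check directly (the exact tokens printed are illustrative):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
# The 8-space indent collapses into a single multi-space token rather than
# eight separate single-space tokens.
print(tok.tokenize('        return x'))
```
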
The model vocabulary size of 50432 was set to be a multiple of 128 (50432 = 394 × 128), as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053).

### Training Configuration

The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GB GPUs with a batch size of 1760.
(ii) Then, on 216 A100-40GB GPUs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GB GPUs with a batch size of 512 and an 8k context length for 50B tokens.
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.

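Not the actual training code, but a minimal FSDP sketch for orientation, assuming a distributed environment launched with `torchrun` and using a toy module in place of MPT-30B:

```python
import torch
import torch.distributed as dist
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Each rank holds only a shard of the parameters, gradients, and optimizer state.
dist.init_process_group('nccl')
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
model = FSDP(nn.Linear(7168, 7168).cuda())
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # stand-in; the run above used LION
```
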
## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-30B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.

MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-30b},
    note      = {Accessed: 2023-06-22},
    urldate   = {2023-06-22}
}
```

Slightly modified mpt-30b with some updates (e.g., gradient checkpointing) to make it compatible with qlora training code.

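As a hedged sketch of what that enables (assuming the modified modeling code wires the flag through the standard Hugging Face toggle; `./mpt-30b` is a local clone of this repo, matching the tuning command below):

```python
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained('./mpt-30b', trust_remote_code=True)
model.gradient_checkpointing_enable()  # recompute activations in backward to save memory
model.config.use_cache = False  # the KV cache is incompatible with checkpointed training
```
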
Original model: https://huggingface.co/mosaicml/mpt-30b

My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora

Differences in the qlora scripts:

- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response" fields (see the example record after this list)

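A hypothetical example record in the expected JSONL format (one JSON object per line; the content here is made up):

```python
import json

record = {
    "instruction": "Explain what ALiBi does in one sentence.",
    "response": "ALiBi biases attention scores linearly with token distance instead of using positional embeddings.",
}
with open('instructions.jsonl', 'a') as f:
    f.write(json.dumps(record) + '\n')
```
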
Full example of tuning (used for airoboros-mpt-30b-gpt4-1.4):

```
source /workspace/venv/bin/activate
export PYTHONPATH=./mpt-30b
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=airoboros-mpt-30b-gpt4-1.4

python qlora.py \
    --model_name_or_path ./mpt-30b \
    --output_dir ./$WANDB_PROJECT-checkpoints \
    --num_train_epochs 4 \
    --logging_steps 1 \
    --save_strategy steps \
    --data_seed 11422 \
    --save_steps 75 \
    --save_total_limit 3 \
    --evaluation_strategy "no" \
    --eval_dataset_size 2 \
    --max_new_tokens 8192 \
    --dataloader_num_workers 3 \
    --logging_strategy steps \
    --remove_unused_columns False \
    --do_train \
    --lora_r 64 \
    --lora_alpha 16 \
    --lora_modules all \
    --double_quant \
    --quant_type nf4 \
    --bf16 \
    --bits 4 \
    --warmup_ratio 0.03 \
    --lr_scheduler_type constant \
    --dataset ./instructions.jsonl \
    --dataset_format airoboros \
    --model_max_len 8192 \
    --gradient_checkpointing \
    --per_device_train_batch_size 6 \
    --gradient_accumulation_steps 16 \
    --learning_rate 0.0001 \
    --adam_beta2 0.999 \
    --max_grad_norm 0.3 \
    --lora_dropout 0.05 \
    --weight_decay 0.0 \
    --seed 11422 \
    --trust_remote_code \
    --mpt True \
    --report_to wandb
```