Commit 21714c3 (verified), 1 parent: aa948e7
Committed by danielhanchen

Upload 4 files

Alpaca_+_Llama_7b_full_example.ipynb CHANGED
@@ -35,9 +35,7 @@
35
  "else:\n",
36
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
37
  " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
38
- "pass\n",
39
- "\n",
40
- "!pip install \"git+https://github.com/huggingface/transformers.git\" # Native 4bit loading works!"
41
  ]
42
  },
43
  {
@@ -280,11 +278,12 @@
280
  "# 4bit pre quantized models we support for 4x faster downloading + no OOMs.\n",
281
  "fourbit_models = [\n",
282
  " \"unsloth/mistral-7b-bnb-4bit\",\n",
 
283
  " \"unsloth/llama-2-7b-bnb-4bit\",\n",
284
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
285
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
286
  " \"unsloth/tinyllama-bnb-4bit\",\n",
287
- "]\n",
288
  "\n",
289
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",
290
  " model_name = \"unsloth/llama-2-7b-bnb-4bit\", # Choose ANY! eg mistralai/Mistral-7B-Instruct-v0.2\n",
@@ -348,7 +347,11 @@
348
  "\n",
349
  "**[NOTE]** To train only on completions (ignoring the user's input) read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only).\n",
350
  "\n",
351
- "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output!! Otherwise you'll get infinite generations!"
352
  ],
353
  "metadata": {
354
  "id": "vITh0KVJ10qX"
@@ -656,6 +659,7 @@
656
  "\n",
657
  "trainer = SFTTrainer(\n",
658
  " model = model,\n",
 
659
  " train_dataset = dataset,\n",
660
  " dataset_text_field = \"text\",\n",
661
  " max_seq_length = max_seq_length,\n",
@@ -1054,7 +1058,7 @@
1054
  "cell_type": "code",
1055
  "source": [
1056
  "# alpaca_prompt = Copied from above\n",
1057
- "\n",
1058
  "inputs = tokenizer(\n",
1059
  "[\n",
1060
  " alpaca_prompt.format(\n",
@@ -1062,7 +1066,7 @@
1062
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1063
  " \"\", # output - leave this blank for generation!\n",
1064
  " )\n",
1065
- "]*1, return_tensors = \"pt\").to(\"cuda\")\n",
1066
  "\n",
1067
  "outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
1068
  "tokenizer.batch_decode(outputs)"
@@ -1101,7 +1105,7 @@
1101
  "cell_type": "code",
1102
  "source": [
1103
  "# alpaca_prompt = Copied from above\n",
1104
- "\n",
1105
  "inputs = tokenizer(\n",
1106
  "[\n",
1107
  " alpaca_prompt.format(\n",
@@ -1109,7 +1113,7 @@
1109
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1110
  " \"\", # output - leave this blank for generation!\n",
1111
  " )\n",
1112
- "]*1, return_tensors = \"pt\").to(\"cuda\")\n",
1113
  "\n",
1114
  "from transformers import TextStreamer\n",
1115
  "text_streamer = TextStreamer(tokenizer)\n",
@@ -1187,6 +1191,7 @@
1187
  " dtype = dtype,\n",
1188
  " load_in_4bit = load_in_4bit,\n",
1189
  " )\n",
 
1190
  "\n",
1191
  "# alpaca_prompt = You MUST copy from above!\n",
1192
  "\n",
@@ -1197,7 +1202,7 @@
1197
  " \"\", # input\n",
1198
  " \"\", # output - leave this blank for generation!\n",
1199
  " )\n",
1200
- "]*1, return_tensors = \"pt\").to(\"cuda\")\n",
1201
  "\n",
1202
  "from transformers import TextStreamer\n",
1203
  "text_streamer = TextStreamer(tokenizer)\n",
@@ -1227,7 +1232,7 @@
1227
  {
1228
  "cell_type": "markdown",
1229
  "source": [
1230
- "You can also use Hugging Face's `AutoModelForPeftCausalLM`"
1231
  ],
1232
  "metadata": {
1233
  "id": "TGKU509CuMmq"
@@ -1237,6 +1242,7 @@
1237
  "cell_type": "code",
1238
  "source": [
1239
  "if False:\n",
 
1240
  " from peft import AutoModelForPeftCausalLM\n",
1241
  " from transformers import AutoTokenizer\n",
1242
  " model = AutoModelForPeftCausalLM.from_pretrained(\n",
@@ -1256,7 +1262,7 @@
1256
  "source": [
1257
  "### Saving to float16 for VLLM\n",
1258
  "\n",
1259
- "We also support saving to `float16` directly. Select `merged_16bit` for float16 or `merged_4bit` for int4. We also allow `lora` adapters as a fallback. Use `push_to_hub_merged` to upload to your Hugging Face account!"
1260
  ],
1261
  "metadata": {
1262
  "id": "-xp0YDnKuN98"
@@ -1266,16 +1272,16 @@
1266
  "cell_type": "code",
1267
  "source": [
1268
  "# Merge to 16bit\n",
1269
- "if False: model.save_pretrained_merged(\"x\", tokenizer, save_method = \"merged_16bit\",)\n",
1270
- "if False: model.push_to_hub_merged(\"hf_user/x\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
1271
  "\n",
1272
  "# Merge to 4bit\n",
1273
- "if False: model.save_pretrained_merged(\"x\", tokenizer, save_method = \"merged_4bit\",)\n",
1274
- "if False: model.push_to_hub_merged(\"hf_user/x\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
1275
  "\n",
1276
  "# Just LoRA adapters\n",
1277
- "if False: model.save_pretrained_merged(\"x\", tokenizer, save_method = \"lora\",)\n",
1278
- "if False: model.push_to_hub_merged(\"hf_user/x\", tokenizer, save_method = \"lora\", token = \"\")"
1279
  ],
1280
  "metadata": {
1281
  "id": "vnFt-4ymuPM1"
@@ -1287,7 +1293,12 @@
1287
  "cell_type": "markdown",
1288
  "source": [
1289
  "### GGUF / llama.cpp Conversion\n",
1290
- "To save to `GGUF` / `llama.cpp`, we support it natively now! We clone `llama.cpp` and we default save it to `q8_0`. We allow all methods like `q4_k_m`. Use `save_pretrained_gguf` for local saving and `push_to_hub_gguf` for uploading to HF."
1291
  ],
1292
  "metadata": {
1293
  "id": "8xg8B-N7uQcE"
@@ -1297,16 +1308,16 @@
1297
  "cell_type": "code",
1298
  "source": [
1299
  "# Save to 8bit Q8_0\n",
1300
- "if False: model.save_pretrained_gguf(\"x\", tokenizer,)\n",
1301
- "if False: model.push_to_hub_gguf(\"hf_user/x\", tokenizer, token = \"\")\n",
1302
  "\n",
1303
  "# Save to 16bit GGUF\n",
1304
- "if False: model.save_pretrained_gguf(\"x\", tokenizer, quantization_method = \"f16\")\n",
1305
- "if False: model.push_to_hub_gguf(\"hf_user/x\", tokenizer, quantization_method = \"f16\", token = \"\")\n",
1306
  "\n",
1307
  "# Save to q4_k_m GGUF\n",
1308
- "if False: model.save_pretrained_gguf(\"x\", tokenizer, quantization_method = \"q4_k_m\")\n",
1309
- "if False: model.push_to_hub_gguf(\"hf_user/x\", tokenizer, quantization_method = \"q4_k_m\", token = \"\")"
1310
  ],
1311
  "metadata": {
1312
  "id": "8T822D9fuR0g"
@@ -1317,7 +1328,7 @@
1317
  {
1318
  "cell_type": "markdown",
1319
  "source": [
1320
- "Now, use the `x.gguf` file or `x-unsloth-Q4_K_M.gguf` file in `llama.cpp` or a UI based system like `GPT4All`. You can install GPT4All by going [here](https://gpt4all.io/index.html)."
1321
  ],
1322
  "metadata": {
1323
  "id": "RiRcv_rquUq0"
@@ -1333,8 +1344,10 @@
1333
  "2. Mistral 7b 2x faster [free Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing)\n",
1334
  "3. TinyLlama 4x faster full Alpaca 52K in 1 hour [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
1335
  "4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
1336
- "5. Llama 7b [free Kaggle](https://www.kaggle.com/danielhanchen/unsloth-alpaca-t4-ddp)\n",
1337
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
 
 
1338
  "\n",
1339
  "<div class=\"align-center\">\n",
1340
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
 
35
  "else:\n",
36
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
37
  " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
38
+ "pass"
 
 
39
  ]
40
  },
41
  {
 
278
  "# 4bit pre quantized models we support for 4x faster downloading + no OOMs.\n",
279
  "fourbit_models = [\n",
280
  " \"unsloth/mistral-7b-bnb-4bit\",\n",
281
+ " \"unsloth/mistral-7b-instruct-v0.2-bnb-4bit\",\n",
282
  " \"unsloth/llama-2-7b-bnb-4bit\",\n",
283
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
284
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
285
  " \"unsloth/tinyllama-bnb-4bit\",\n",
286
+ "] # More models at https://huggingface.co/unsloth\n",
287
  "\n",
288
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",
289
  " model_name = \"unsloth/llama-2-7b-bnb-4bit\", # Choose ANY! eg mistralai/Mistral-7B-Instruct-v0.2\n",
 
347
  "\n",
348
  "**[NOTE]** To train only on completions (ignoring the user's input) read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only).\n",
349
  "\n",
350
+ "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output!! Otherwise you'll get infinite generations!\n",
351
+ "\n",
352
+ "If you want to use the `ChatML` template for ShareGPT datasets, try our conversational [notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing).\n",
353
+ "\n",
354
+ "For text completions like novel writing, try this [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)."
355
  ],
356
  "metadata": {
357
  "id": "vITh0KVJ10qX"
 
659
  "\n",
660
  "trainer = SFTTrainer(\n",
661
  " model = model,\n",
662
+ " tokenizer = tokenizer,\n",
663
  " train_dataset = dataset,\n",
664
  " dataset_text_field = \"text\",\n",
665
  " max_seq_length = max_seq_length,\n",
 
1058
  "cell_type": "code",
1059
  "source": [
1060
  "# alpaca_prompt = Copied from above\n",
1061
+ "FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
1062
  "inputs = tokenizer(\n",
1063
  "[\n",
1064
  " alpaca_prompt.format(\n",
 
1066
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1067
  " \"\", # output - leave this blank for generation!\n",
1068
  " )\n",
1069
+ "], return_tensors = \"pt\").to(\"cuda\")\n",
1070
  "\n",
1071
  "outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
1072
  "tokenizer.batch_decode(outputs)"
 
1105
  "cell_type": "code",
1106
  "source": [
1107
  "# alpaca_prompt = Copied from above\n",
1108
+ "FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
1109
  "inputs = tokenizer(\n",
1110
  "[\n",
1111
  " alpaca_prompt.format(\n",
 
1113
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1114
  " \"\", # output - leave this blank for generation!\n",
1115
  " )\n",
1116
+ "], return_tensors = \"pt\").to(\"cuda\")\n",
1117
  "\n",
1118
  "from transformers import TextStreamer\n",
1119
  "text_streamer = TextStreamer(tokenizer)\n",
 
1191
  " dtype = dtype,\n",
1192
  " load_in_4bit = load_in_4bit,\n",
1193
  " )\n",
1194
+ " FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
1195
  "\n",
1196
  "# alpaca_prompt = You MUST copy from above!\n",
1197
  "\n",
 
1202
  " \"\", # input\n",
1203
  " \"\", # output - leave this blank for generation!\n",
1204
  " )\n",
1205
+ "], return_tensors = \"pt\").to(\"cuda\")\n",
1206
  "\n",
1207
  "from transformers import TextStreamer\n",
1208
  "text_streamer = TextStreamer(tokenizer)\n",
 
1232
  {
1233
  "cell_type": "markdown",
1234
  "source": [
1235
+ "You can also use Hugging Face's `AutoModelForPeftCausalLM`. Only use this if you do not have `unsloth` installed. It can be hopelessly slow, since `4bit` model downloading is not supported, and Unsloth's **inference is 2x faster**."
1236
  ],
1237
  "metadata": {
1238
  "id": "TGKU509CuMmq"
 
1242
  "cell_type": "code",
1243
  "source": [
1244
  "if False:\n",
1245
+ " # I highly do NOT suggest - use Unsloth if possible\n",
1246
  " from peft import AutoModelForPeftCausalLM\n",
1247
  " from transformers import AutoTokenizer\n",
1248
  " model = AutoModelForPeftCausalLM.from_pretrained(\n",
 
1262
  "source": [
1263
  "### Saving to float16 for VLLM\n",
1264
  "\n",
1265
+ "We also support saving to `float16` directly. Select `merged_16bit` for float16 or `merged_4bit` for int4. We also allow `lora` adapters as a fallback. Use `push_to_hub_merged` to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens."
1266
  ],
1267
  "metadata": {
1268
  "id": "-xp0YDnKuN98"
 
1272
  "cell_type": "code",
1273
  "source": [
1274
  "# Merge to 16bit\n",
1275
+ "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_16bit\",)\n",
1276
+ "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
1277
  "\n",
1278
  "# Merge to 4bit\n",
1279
+ "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_4bit\",)\n",
1280
+ "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
1281
  "\n",
1282
  "# Just LoRA adapters\n",
1283
+ "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"lora\",)\n",
1284
+ "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"lora\", token = \"\")"
1285
  ],
1286
  "metadata": {
1287
  "id": "vnFt-4ymuPM1"
 
1293
  "cell_type": "markdown",
1294
  "source": [
1295
  "### GGUF / llama.cpp Conversion\n",
1296
+ "To save to `GGUF` / `llama.cpp`, we support it natively now! We clone `llama.cpp` and we default save it to `q8_0`. We allow all methods like `q4_k_m`. Use `save_pretrained_gguf` for local saving and `push_to_hub_gguf` for uploading to HF.\n",
1297
+ "\n",
1298
+ "Some supported quant methods (full list on our [Wiki page](https://github.com/unslothai/unsloth/wiki#gguf-quantization-options)):\n",
1299
+ "* `q8_0` - Fast conversion. High resource use, but generally acceptable.\n",
1300
+ "* `q4_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.\n",
1301
+ "* `q5_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K."
1302
  ],
1303
  "metadata": {
1304
  "id": "8xg8B-N7uQcE"
 
1308
  "cell_type": "code",
1309
  "source": [
1310
  "# Save to 8bit Q8_0\n",
1311
+ "if False: model.save_pretrained_gguf(\"model\", tokenizer,)\n",
1312
+ "if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, token = \"\")\n",
1313
  "\n",
1314
  "# Save to 16bit GGUF\n",
1315
+ "if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"f16\")\n",
1316
+ "if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"f16\", token = \"\")\n",
1317
  "\n",
1318
  "# Save to q4_k_m GGUF\n",
1319
+ "if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"q4_k_m\")\n",
1320
+ "if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"q4_k_m\", token = \"\")"
1321
  ],
1322
  "metadata": {
1323
  "id": "8T822D9fuR0g"
 
1328
  {
1329
  "cell_type": "markdown",
1330
  "source": [
1331
+ "Now, use the `model-unsloth.gguf` file or `model-unsloth-Q4_K_M.gguf` file in `llama.cpp` or a UI based system like `GPT4All`. You can install GPT4All by going [here](https://gpt4all.io/index.html)."
1332
  ],
1333
  "metadata": {
1334
  "id": "RiRcv_rquUq0"
 
1344
  "2. Mistral 7b 2x faster [free Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing)\n",
1345
  "3. TinyLlama 4x faster full Alpaca 52K in 1 hour [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
1346
  "4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
1347
+ "5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
1348
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
1349
+ "7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
1350
+ "8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
1351
  "\n",
1352
  "<div class=\"align-center\">\n",
1353
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
Alpaca_+_Mistral_7b_full_example.ipynb CHANGED
@@ -35,9 +35,7 @@
35
  "else:\n",
36
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
37
  " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
38
- "pass\n",
39
- "\n",
40
- "!pip install \"git+https://github.com/huggingface/transformers.git\" # Native 4bit loading works!"
41
  ]
42
  },
43
  {
@@ -280,14 +278,15 @@
280
  "# 4bit pre quantized models we support for 4x faster downloading + no OOMs.\n",
281
  "fourbit_models = [\n",
282
  " \"unsloth/mistral-7b-bnb-4bit\",\n",
 
283
  " \"unsloth/llama-2-7b-bnb-4bit\",\n",
284
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
285
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
286
  " \"unsloth/tinyllama-bnb-4bit\",\n",
287
- "]\n",
288
  "\n",
289
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",
290
- " model_name = \"unsloth/mistral-7b-bnb-4bit\", # Choose ANY! eg mistralai/Mistral-7B-Instruct-v0.2\n",
291
  " max_seq_length = max_seq_length,\n",
292
  " dtype = dtype,\n",
293
  " load_in_4bit = load_in_4bit,\n",
@@ -348,7 +347,11 @@
348
  "\n",
349
  "**[NOTE]** To train only on completions (ignoring the user's input) read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only).\n",
350
  "\n",
351
- "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output!! Otherwise you'll get infinite generations!"
352
  ],
353
  "metadata": {
354
  "id": "vITh0KVJ10qX"
@@ -656,6 +659,7 @@
656
  "\n",
657
  "trainer = SFTTrainer(\n",
658
  " model = model,\n",
 
659
  " train_dataset = dataset,\n",
660
  " dataset_text_field = \"text\",\n",
661
  " max_seq_length = max_seq_length,\n",
@@ -1054,7 +1058,7 @@
1054
  "cell_type": "code",
1055
  "source": [
1056
  "# alpaca_prompt = Copied from above\n",
1057
- "\n",
1058
  "inputs = tokenizer(\n",
1059
  "[\n",
1060
  " alpaca_prompt.format(\n",
@@ -1062,7 +1066,7 @@
1062
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1063
  " \"\", # output - leave this blank for generation!\n",
1064
  " )\n",
1065
- "]*1, return_tensors = \"pt\").to(\"cuda\")\n",
1066
  "\n",
1067
  "outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
1068
  "tokenizer.batch_decode(outputs)"
@@ -1108,7 +1112,7 @@
1108
  "cell_type": "code",
1109
  "source": [
1110
  "# alpaca_prompt = Copied from above\n",
1111
- "\n",
1112
  "inputs = tokenizer(\n",
1113
  "[\n",
1114
  " alpaca_prompt.format(\n",
@@ -1116,7 +1120,7 @@
1116
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1117
  " \"\", # output - leave this blank for generation!\n",
1118
  " )\n",
1119
- "]*1, return_tensors = \"pt\").to(\"cuda\")\n",
1120
  "\n",
1121
  "from transformers import TextStreamer\n",
1122
  "text_streamer = TextStreamer(tokenizer)\n",
@@ -1201,17 +1205,9 @@
1201
  " dtype = dtype,\n",
1202
  " load_in_4bit = load_in_4bit,\n",
1203
  " )\n",
 
1204
  "\n",
1205
- "alpaca_prompt = \"\"\"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n",
1206
- "\n",
1207
- "### Instruction:\n",
1208
- "{}\n",
1209
- "\n",
1210
- "### Input:\n",
1211
- "{}\n",
1212
- "\n",
1213
- "### Response:\n",
1214
- "{}\"\"\"\n",
1215
  "\n",
1216
  "inputs = tokenizer(\n",
1217
  "[\n",
@@ -1220,7 +1216,7 @@
1220
  " \"\", # input\n",
1221
  " \"\", # output - leave this blank for generation!\n",
1222
  " )\n",
1223
- "]*1, return_tensors = \"pt\").to(\"cuda\")\n",
1224
  "\n",
1225
  "outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
1226
  "tokenizer.batch_decode(outputs)"
@@ -1256,7 +1252,7 @@
1256
  {
1257
  "cell_type": "markdown",
1258
  "source": [
1259
- "You can also use Hugging Face's `AutoModelForPeftCausalLM`"
1260
  ],
1261
  "metadata": {
1262
  "id": "QQMjaNrjsU5_"
@@ -1266,6 +1262,7 @@
1266
  "cell_type": "code",
1267
  "source": [
1268
  "if False:\n",
 
1269
  " from peft import AutoModelForPeftCausalLM\n",
1270
  " from transformers import AutoTokenizer\n",
1271
  " model = AutoModelForPeftCausalLM.from_pretrained(\n",
@@ -1285,7 +1282,7 @@
1285
  "source": [
1286
  "### Saving to float16 for VLLM\n",
1287
  "\n",
1288
- "We also support saving to `float16` directly. Select `merged_16bit` for float16 or `merged_4bit` for int4. We also allow `lora` adapters as a fallback. Use `push_to_hub_merged` to upload to your Hugging Face account!"
1289
  ],
1290
  "metadata": {
1291
  "id": "f422JgM9sdVT"
@@ -1295,16 +1292,16 @@
1295
  "cell_type": "code",
1296
  "source": [
1297
  "# Merge to 16bit\n",
1298
- "if False: model.save_pretrained_merged(\"x\", tokenizer, save_method = \"merged_16bit\",)\n",
1299
- "if False: model.push_to_hub_merged(\"hf_user/x\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
1300
  "\n",
1301
  "# Merge to 4bit\n",
1302
- "if False: model.save_pretrained_merged(\"x\", tokenizer, save_method = \"merged_4bit\",)\n",
1303
- "if False: model.push_to_hub_merged(\"hf_user/x\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
1304
  "\n",
1305
  "# Just LoRA adapters\n",
1306
- "if False: model.save_pretrained_merged(\"x\", tokenizer, save_method = \"lora\",)\n",
1307
- "if False: model.push_to_hub_merged(\"hf_user/x\", tokenizer, save_method = \"lora\", token = \"\")"
1308
  ],
1309
  "metadata": {
1310
  "id": "iHjt_SMYsd3P"
@@ -1316,7 +1313,12 @@
1316
  "cell_type": "markdown",
1317
  "source": [
1318
  "### GGUF / llama.cpp Conversion\n",
1319
- "To save to `GGUF` / `llama.cpp`, we support it natively now! We clone `llama.cpp` and we default save it to `q8_0`. We allow all methods like `q4_k_m`. Use `save_pretrained_gguf` for local saving and `push_to_hub_gguf` for uploading to HF."
1320
  ],
1321
  "metadata": {
1322
  "id": "TCv4vXHd61i7"
@@ -1326,16 +1328,16 @@
1326
  "cell_type": "code",
1327
  "source": [
1328
  "# Save to 8bit Q8_0\n",
1329
- "if False: model.save_pretrained_gguf(\"x\", tokenizer,)\n",
1330
- "if False: model.push_to_hub_gguf(\"hf_user/x\", tokenizer, token = \"\")\n",
1331
  "\n",
1332
  "# Save to 16bit GGUF\n",
1333
- "if False: model.save_pretrained_gguf(\"x\", tokenizer, quantization_method = \"f16\")\n",
1334
- "if False: model.push_to_hub_gguf(\"hf_user/x\", tokenizer, quantization_method = \"f16\", token = \"\")\n",
1335
  "\n",
1336
  "# Save to q4_k_m GGUF\n",
1337
- "if False: model.save_pretrained_gguf(\"x\", tokenizer, quantization_method = \"q4_k_m\")\n",
1338
- "if False: model.push_to_hub_gguf(\"hf_user/x\", tokenizer, quantization_method = \"q4_k_m\", token = \"\")"
1339
  ],
1340
  "metadata": {
1341
  "id": "FqfebeAdT073"
@@ -1346,7 +1348,7 @@
1346
  {
1347
  "cell_type": "markdown",
1348
  "source": [
1349
- "Now, use the `x.gguf` file or `x-unsloth-Q4_K_M.gguf` file in `llama.cpp` or a UI based system like `GPT4All`. You can install GPT4All by going [here](https://gpt4all.io/index.html)."
1350
  ],
1351
  "metadata": {
1352
  "id": "bDp0zNpwe6U_"
@@ -1362,8 +1364,10 @@
1362
  "2. Llama 7b 2x faster [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
1363
  "3. TinyLlama 4x faster full Alpaca 52K in 1 hour [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
1364
  "4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
1365
- "5. Llama 7b [free Kaggle](https://www.kaggle.com/danielhanchen/unsloth-alpaca-t4-ddp)\n",
1366
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
 
 
1367
  "\n",
1368
  "<div class=\"align-center\">\n",
1369
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
 
35
  "else:\n",
36
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
37
  " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
38
+ "pass"
 
 
39
  ]
40
  },
41
  {
 
278
  "# 4bit pre quantized models we support for 4x faster downloading + no OOMs.\n",
279
  "fourbit_models = [\n",
280
  " \"unsloth/mistral-7b-bnb-4bit\",\n",
281
+ " \"unsloth/mistral-7b-instruct-v0.2-bnb-4bit\",\n",
282
  " \"unsloth/llama-2-7b-bnb-4bit\",\n",
283
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
284
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
285
  " \"unsloth/tinyllama-bnb-4bit\",\n",
286
+ "] # More models at https://huggingface.co/unsloth\n",
287
  "\n",
288
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",
289
+ " model_name = \"unsloth/mistral-7b-bnb-4bit\", # Choose ANY! eg teknium/OpenHermes-2.5-Mistral-7B\n",
290
  " max_seq_length = max_seq_length,\n",
291
  " dtype = dtype,\n",
292
  " load_in_4bit = load_in_4bit,\n",
 
347
  "\n",
348
  "**[NOTE]** To train only on completions (ignoring the user's input) read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only).\n",
349
  "\n",
350
+ "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output!! Otherwise you'll get infinite generations!\n",
351
+ "\n",
352
+ "If you want to use the `ChatML` template for ShareGPT datasets, try our conversational [notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing).\n",
353
+ "\n",
354
+ "For text completions like novel writing, try this [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)."
355
  ],
356
  "metadata": {
357
  "id": "vITh0KVJ10qX"
 
659
  "\n",
660
  "trainer = SFTTrainer(\n",
661
  " model = model,\n",
662
+ " tokenizer = tokenizer,\n",
663
  " train_dataset = dataset,\n",
664
  " dataset_text_field = \"text\",\n",
665
  " max_seq_length = max_seq_length,\n",
 
1058
  "cell_type": "code",
1059
  "source": [
1060
  "# alpaca_prompt = Copied from above\n",
1061
+ "FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
1062
  "inputs = tokenizer(\n",
1063
  "[\n",
1064
  " alpaca_prompt.format(\n",
 
1066
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1067
  " \"\", # output - leave this blank for generation!\n",
1068
  " )\n",
1069
+ "], return_tensors = \"pt\").to(\"cuda\")\n",
1070
  "\n",
1071
  "outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
1072
  "tokenizer.batch_decode(outputs)"
 
1112
  "cell_type": "code",
1113
  "source": [
1114
  "# alpaca_prompt = Copied from above\n",
1115
+ "FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
1116
  "inputs = tokenizer(\n",
1117
  "[\n",
1118
  " alpaca_prompt.format(\n",
 
1120
  " \"1, 1, 2, 3, 5, 8\", # input\n",
1121
  " \"\", # output - leave this blank for generation!\n",
1122
  " )\n",
1123
+ "], return_tensors = \"pt\").to(\"cuda\")\n",
1124
  "\n",
1125
  "from transformers import TextStreamer\n",
1126
  "text_streamer = TextStreamer(tokenizer)\n",
 
1205
  " dtype = dtype,\n",
1206
  " load_in_4bit = load_in_4bit,\n",
1207
  " )\n",
1208
+ " FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
1209
  "\n",
1210
+ "# alpaca_prompt = You MUST copy from above!\n",
1211
  "\n",
1212
  "inputs = tokenizer(\n",
1213
  "[\n",
 
1216
  " \"\", # input\n",
1217
  " \"\", # output - leave this blank for generation!\n",
1218
  " )\n",
1219
+ "], return_tensors = \"pt\").to(\"cuda\")\n",
1220
  "\n",
1221
  "outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
1222
  "tokenizer.batch_decode(outputs)"
 
1252
  {
1253
  "cell_type": "markdown",
1254
  "source": [
1255
+ "You can also use Hugging Face's `AutoModelForPeftCausalLM`. Only use this if you do not have `unsloth` installed. It can be hopelessly slow, since `4bit` model downloading is not supported, and Unsloth's **inference is 2x faster**."
1256
  ],
1257
  "metadata": {
1258
  "id": "QQMjaNrjsU5_"
 
1262
  "cell_type": "code",
1263
  "source": [
1264
  "if False:\n",
1265
+ " # I highly do NOT suggest - use Unsloth if possible\n",
1266
  " from peft import AutoModelForPeftCausalLM\n",
1267
  " from transformers import AutoTokenizer\n",
1268
  " model = AutoModelForPeftCausalLM.from_pretrained(\n",
 
1282
  "source": [
1283
  "### Saving to float16 for VLLM\n",
1284
  "\n",
1285
+ "We also support saving to `float16` directly. Select `merged_16bit` for float16 or `merged_4bit` for int4. We also allow `lora` adapters as a fallback. Use `push_to_hub_merged` to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens."
1286
  ],
1287
  "metadata": {
1288
  "id": "f422JgM9sdVT"
 
1292
  "cell_type": "code",
1293
  "source": [
1294
  "# Merge to 16bit\n",
1295
+ "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_16bit\",)\n",
1296
+ "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
1297
  "\n",
1298
  "# Merge to 4bit\n",
1299
+ "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_4bit\",)\n",
1300
+ "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
1301
  "\n",
1302
  "# Just LoRA adapters\n",
1303
+ "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"lora\",)\n",
1304
+ "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"lora\", token = \"\")"
1305
  ],
1306
  "metadata": {
1307
  "id": "iHjt_SMYsd3P"
 
1313
  "cell_type": "markdown",
1314
  "source": [
1315
  "### GGUF / llama.cpp Conversion\n",
1316
+ "To save to `GGUF` / `llama.cpp`, we support it natively now! We clone `llama.cpp` and we default save it to `q8_0`. We allow all methods like `q4_k_m`. Use `save_pretrained_gguf` for local saving and `push_to_hub_gguf` for uploading to HF.\n",
1317
+ "\n",
1318
+ "Some supported quant methods (full list on our [Wiki page](https://github.com/unslothai/unsloth/wiki#gguf-quantization-options)):\n",
1319
+ "* `q8_0` - Fast conversion. High resource use, but generally acceptable.\n",
1320
+ "* `q4_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.\n",
1321
+ "* `q5_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K."
1322
  ],
1323
  "metadata": {
1324
  "id": "TCv4vXHd61i7"
 
1328
  "cell_type": "code",
1329
  "source": [
1330
  "# Save to 8bit Q8_0\n",
1331
+ "if False: model.save_pretrained_gguf(\"model\", tokenizer,)\n",
1332
+ "if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, token = \"\")\n",
1333
  "\n",
1334
  "# Save to 16bit GGUF\n",
1335
+ "if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"f16\")\n",
1336
+ "if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"f16\", token = \"\")\n",
1337
  "\n",
1338
  "# Save to q4_k_m GGUF\n",
1339
+ "if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"q4_k_m\")\n",
1340
+ "if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"q4_k_m\", token = \"\")"
1341
  ],
1342
  "metadata": {
1343
  "id": "FqfebeAdT073"
 
1348
  {
1349
  "cell_type": "markdown",
1350
  "source": [
1351
+ "Now, use the `model-unsloth.gguf` file or `model-unsloth-Q4_K_M.gguf` file in `llama.cpp` or a UI based system like `GPT4All`. You can install GPT4All by going [here](https://gpt4all.io/index.html)."
1352
  ],
1353
  "metadata": {
1354
  "id": "bDp0zNpwe6U_"
 
1364
  "2. Llama 7b 2x faster [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
1365
  "3. TinyLlama 4x faster full Alpaca 52K in 1 hour [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
1366
  "4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
1367
+ "5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
1368
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
1369
+ "7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
1370
+ "8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
1371
  "\n",
1372
  "<div class=\"align-center\">\n",
1373
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
Alpaca_+_TinyLlama_+_RoPE_Scaling_full_example.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
DPO_Zephyr_Unsloth_Example.ipynb CHANGED
The diff for this file is too large to render. See raw diff