---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/SmolLM2-CoT-360M
datasets:
- prithivMLmods/Deepthink-Reasoning
library_name: transformers
tags:
- smolLM
- llama
- CoT
- Thinker
- text-generation-inference
pipeline_tag: text-generation
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/bMONeEIzYGnh7b7oppgBN.png)
# **SMOLLM CoT 360M GGUF ON CUSTOM SYNTHETIC DATA**
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. Fine-tuning a language model like SmolLM2 involves several steps, from setting up the environment to training the model and saving the results. Below is a detailed, step-by-step guide based on the provided notebook.
# How to use with `Transformers`
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "prithivMLmods/SmolLM2-CoT-360M"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### **Step 1: Setting Up the Environment**
Before diving into fine-tuning, you need to set up your environment with the necessary libraries and tools.
1. **Install Required Libraries**:
- Install the necessary Python libraries using `pip`. These include `transformers`, `datasets`, `trl`, `torch`, `accelerate`, `bitsandbytes`, and `wandb`.
- These libraries are essential for working with Hugging Face models, datasets, and training loops.
```python
!pip install transformers datasets trl torch accelerate bitsandbytes wandb
```
2. **Import Necessary Modules**:
- Import the required modules from the installed libraries. These include `AutoModelForCausalLM`, `AutoTokenizer`, `TrainingArguments`, `pipeline`, `load_dataset`, and `SFTTrainer`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, pipeline
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, setup_chat_format
import torch
import os
```
3. **Detect Device (GPU, MPS, or CPU)**:
- Detect the available hardware (GPU, MPS, or CPU) to ensure the model runs on the most efficient device.
```python
device = (
"cuda"
if torch.cuda.is_available()
else "mps" if torch.backends.mps.is_available() else "cpu"
)
```
---
### **Step 2: Load the Pre-trained Model and Tokenizer**
Next, load the pre-trained SmolLM model and its corresponding tokenizer.
1. **Load the Model and Tokenizer**:
- Use `AutoModelForCausalLM` and `AutoTokenizer` to load the SmolLM model and tokenizer from Hugging Face.
```python
model_name = "HuggingFaceTB/SmolLM2-360M"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)
```
2. **Set Up Chat Format**:
- Use the `setup_chat_format` function to prepare the model and tokenizer for chat-based tasks.
```python
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)
```
3. **Test the Base Model**:
- Test the base model with a simple prompt to ensure it’s working correctly.
```python
prompt = "Explain AGI ?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0 if device == "cuda" else -1)
print(pipe(prompt, max_new_tokens=200))
```
4. **If You Hit a Chat Template Error**:
- The message "Chat template is already added to the tokenizer" means the tokenizer already ships with a predefined chat template, which prevents `setup_chat_format()` from modifying it again. In that case, clear the existing template before calling `setup_chat_format()`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer.chat_template = None
from trl.models.utils import setup_chat_format
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)
prompt = "Explain AGI?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print(pipe(prompt, max_new_tokens=200))
```
*📍 Otherwise, skip this workaround and continue with the next step.*
---
### **Step 3: Load and Prepare the Dataset**
Fine-tuning requires a dataset. In this case, we’re using a custom dataset called `Deepthink-Reasoning`.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/JIwUAT-NpqpN18zUdo6uW.png)
1. **Load the Dataset**:
- Use the `load_dataset` function to load the dataset from Hugging Face.
```python
ds = load_dataset("prithivMLmods/Deepthink-Reasoning")
```
2. **Tokenize the Dataset**:
- Define a tokenization function that processes the dataset in batches. This function applies the chat template to each prompt-response pair and tokenizes the text.
```python
def tokenize_function(examples):
prompts = [p.strip() for p in examples["prompt"]]
responses = [r.strip() for r in examples["response"]]
texts = [
tokenizer.apply_chat_template(
[{"role": "user", "content": p}, {"role": "assistant", "content": r}],
tokenize=False
)
for p, r in zip(prompts, responses)
]
return tokenizer(texts, truncation=True, padding="max_length", max_length=512)
```
3. **Apply Tokenization**:
- Apply the tokenization function to the dataset.
```python
ds = ds.map(tokenize_function, batched=True)
```
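As a quick sanity check (not part of the original notebook), you can decode one tokenized example to confirm the chat template was applied as expected:
```python
# Optional sanity check: decode the first training example back to text
# to verify the chat template wrapped the prompt/response pair correctly.
sample = ds["train"][0]
print(tokenizer.decode(sample["input_ids"], skip_special_tokens=False))
```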
---
### **Step 4: Configure Training Arguments**
Set up the training arguments to control the fine-tuning process.
1. **Define Training Arguments**:
- Use `TrainingArguments` to specify parameters like batch size, learning rate, number of steps, and optimization settings.
```python
use_bf16 = torch.cuda.is_bf16_supported()
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
warmup_steps=5,
max_steps=60,
learning_rate=2e-4,
fp16=not use_bf16,
bf16=use_bf16,
logging_steps=1,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="linear",
seed=3407,
output_dir="outputs",
report_to="wandb",
)
```
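Because `report_to="wandb"`, training logs are sent to Weights & Biases, which requires authentication the first time. A minimal sketch, assuming you have a W&B account; if you prefer not to log externally, set `report_to="none"` instead:
```python
import wandb

# Authenticate with Weights & Biases; in Colab this prompts for an API key.
# The Trainer starts a run automatically because report_to="wandb".
wandb.login()
```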
---
### **Step 5: Initialize the Trainer**
Initialize the `SFTTrainer` with the model, tokenizer, dataset, and training arguments.
```python
trainer = SFTTrainer(
model=model,
processing_class=tokenizer,
train_dataset=ds["train"],
args=training_args,
)
```
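Note that the keyword for passing the tokenizer has changed across TRL releases: recent versions accept `processing_class=tokenizer` as shown above, while older versions expect `tokenizer=tokenizer`. If the call above raises an unexpected-argument error, try the older keyword.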
---
### **Step 6: Start Training**
Begin the fine-tuning process by calling the `train` method on the trainer.
```python
trainer.train()
```
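Before saving, you can optionally spot-check the fine-tuned model with a quick generation pass. This is a minimal sketch; the prompt is just an example:
```python
# Quick check of the fine-tuned model on a sample reasoning prompt.
messages = [{"role": "user", "content": "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```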
---
### **Step 7: Save the Fine-Tuned Model**
After training, save the fine-tuned model and tokenizer to a local directory.
1. **Save Model and Tokenizer**:
- Use the `save_pretrained` method to save the model and tokenizer.
```python
save_directory = "/content/my_model"
model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
```
2. **Zip and Download the Model**:
- Zip the saved directory and download it for future use.
```python
import shutil
shutil.make_archive(save_directory, 'zip', save_directory)
from google.colab import files
files.download(f"{save_directory}.zip")
```
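Since this card also ships a GGUF quantization, you may want to convert the fine-tuned checkpoint to GGUF yourself. One common route (not covered in the notebook) is llama.cpp's conversion script; the script name, flags, and output filename below are assumptions that depend on your llama.cpp checkout, so treat this as a sketch:
```python
# Sketch: convert the saved Hugging Face checkpoint to GGUF with llama.cpp.
# Recent llama.cpp versions ship convert_hf_to_gguf.py; the output filename is an example.
!git clone https://github.com/ggerganov/llama.cpp
!pip install -r llama.cpp/requirements.txt
!python llama.cpp/convert_hf_to_gguf.py /content/my_model --outfile SmolLM2-CoT-360M.F16.gguf --outtype f16
```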
---
### **Run with Ollama**
Ollama makes running machine learning models simple and efficient. Follow these steps to set up and run your GGUF models quickly.
## Quick Start: Step-by-Step Guide
1. **Install Ollama 🦙**
- Download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your system.
2. **Create Your Model File**
- Create a file named after your model, e.g., `metallama`.
- Add the following line to specify the base model, and ensure the base model file is in the same directory:
```bash
FROM Llama-3.2-1B.F16.gguf
```
3. **Create and Verify the Model**
- Run the following commands to create and verify your model:
```bash
ollama create metallama -f ./metallama
ollama list
```
4. **Run the Model**
- Use the following command to start your model:
```bash
ollama run metallama
```
5. **Interact with the Model**
- Once the model is running, interact with it:
```plaintext
>>> Tell me about Space X.
Space X, the private aerospace company founded by Elon Musk, is revolutionizing space exploration...
```
### **Model & Quant**
| **Item** | **Link** |
|----------|----------|
| **Model** | [SmolLM2-CoT-360M](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M) |
| **Quantized Version** | [SmolLM2-CoT-360M-GGUF](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M-GGUF) |
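If you want to run the quantized GGUF directly from Python rather than through Ollama, `llama-cpp-python` is one option. A minimal sketch; the model path is an example, so point it at whichever quant file you download from the GGUF repo:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load a GGUF file downloaded from the quantized repo; the path is an example.
llm = Llama(model_path="SmolLM2-CoT-360M.F16.gguf", n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["message"]["content"])
```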
### **Conclusion**
Fine-tuning SmolLM involves setting up the environment, loading the model and dataset, configuring training parameters, and running the training loop. By following these steps, you can adapt SmolLM to your specific use case, whether it’s for reasoning tasks, chat-based applications, or other NLP tasks.
This process is highly customizable, so feel free to experiment with different datasets, hyperparameters, and training strategies to achieve the best results for your project.
---