prithivMLmods committed · verified · Commit 1cc3302 · 1 Parent(s): 4cf974e

Update README.md

Files changed (1):
  1. README.md +246 -0
README.md CHANGED
@@ -16,3 +16,249 @@ tags:
 pipeline_tag: text-generation
 ---
 ![smollmv1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/7XZC42GuwcehV28lUI-8g.png)

# **SMOLLM CoT 360M GGUF ON CUSTOM SYNTHETIC DATA**

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. Fine-tuning a language model like SmolLM involves several steps, from setting up the environment to training the model and saving the results. Below is a detailed step-by-step guide based on the notebook linked here:

| **Notebook** | **Link** |
|--------------|----------|
| SmolLM-FT-360M | [SmolLM-FT-360M.ipynb](https://huggingface.co/datasets/prithivMLmods/FinetuneRT-Colab/blob/main/SmolLM-FT/SmolLM-FT-360M.ipynb) |

---

### How to use

### Transformers
```bash
pip install transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "prithivMLmods/SmolLM2-CoT-360M"

device = "cuda"  # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs, install accelerate and use `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

### **Step 1: Setting Up the Environment**
Before diving into fine-tuning, you need to set up your environment with the necessary libraries and tools.

1. **Install Required Libraries**:
   - Install the necessary Python libraries using `pip`. These include `transformers`, `datasets`, `trl`, `torch`, `accelerate`, `bitsandbytes`, and `wandb`.
   - These libraries are essential for working with Hugging Face models, datasets, and training loops.

```python
!pip install transformers datasets trl torch accelerate bitsandbytes wandb
```

2. **Import Necessary Modules**:
   - Import the required modules from the installed libraries. These include `AutoModelForCausalLM`, `AutoTokenizer`, `TrainingArguments`, `pipeline`, `load_dataset`, and `SFTTrainer`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, pipeline
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, setup_chat_format
import torch
import os
```

3. **Detect Device (GPU, MPS, or CPU)**:
   - Detect the available hardware (GPU, MPS, or CPU) so the model runs on the most efficient device.

```python
device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available() else "cpu"
)
```
---

### **Step 2: Load the Pre-trained Model and Tokenizer**
Next, load the pre-trained SmolLM model and its corresponding tokenizer.

1. **Load the Model and Tokenizer**:
   - Use `AutoModelForCausalLM` and `AutoTokenizer` to load the SmolLM model and tokenizer from Hugging Face.

```python
model_name = "HuggingFaceTB/SmolLM2-360M"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)
```

2. **Set Up Chat Format**:
   - Use the `setup_chat_format` function to prepare the model and tokenizer for chat-based tasks.

```python
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)
```

3. **Test the Base Model**:
   - Test the base model with a simple prompt to make sure it is working correctly.

```python
prompt = "Explain AGI?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0 if device == "cuda" else -1)
print(pipe(prompt, max_new_tokens=200))
```
4. **If You Encounter "Chat template is already added to the tokenizer"**:
   - This message indicates that the tokenizer already has a predefined chat template, which prevents `setup_chat_format()` from modifying it again. Clear the existing template before calling it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)

# Remove the existing chat template so setup_chat_format can apply its own
tokenizer.chat_template = None

from trl.models.utils import setup_chat_format
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)

prompt = "Explain AGI?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print(pipe(prompt, max_new_tokens=200))
```
*📍 Otherwise, skip this part (item 4) and continue with the next step.*

---

### **Step 3: Load and Prepare the Dataset**
Fine-tuning requires a dataset. In this case, we’re using a custom dataset called `Deepthink-Reasoning`.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/JIwUAT-NpqpN18zUdo6uW.png)

1. **Load the Dataset**:
   - Use the `load_dataset` function to load the dataset from Hugging Face.

```python
ds = load_dataset("prithivMLmods/Deepthink-Reasoning")
```
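
The tokenization step below assumes the dataset exposes `prompt` and `response` columns in its `train` split. An optional quick check before mapping (a minimal sketch; the printed contents will depend on the dataset):

```python
# Inspect the available splits, column names, and one example record
print(ds)                          # splits and column names
print(ds["train"].column_names)    # expected: ['prompt', 'response']
print(ds["train"][0]["prompt"][:200])
```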

2. **Tokenize the Dataset**:
   - Define a tokenization function that processes the dataset in batches. This function applies the chat template to each prompt-response pair and tokenizes the text.

```python
def tokenize_function(examples):
    prompts = [p.strip() for p in examples["prompt"]]
    responses = [r.strip() for r in examples["response"]]
    texts = [
        tokenizer.apply_chat_template(
            [{"role": "user", "content": p}, {"role": "assistant", "content": r}],
            tokenize=False
        )
        for p, r in zip(prompts, responses)
    ]
    return tokenizer(texts, truncation=True, padding="max_length", max_length=512)
```

3. **Apply Tokenization**:
   - Apply the tokenization function to the dataset.

```python
ds = ds.map(tokenize_function, batched=True)
```

---

### **Step 4: Configure Training Arguments**
Set up the training arguments to control the fine-tuning process.

1. **Define Training Arguments**:
   - Use `TrainingArguments` to specify parameters like batch size, learning rate, number of steps, and optimization settings.

```python
use_bf16 = torch.cuda.is_bf16_supported()
training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    max_steps=60,
    learning_rate=2e-4,
    fp16=not use_bf16,
    bf16=use_bf16,
    logging_steps=1,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",
    report_to="wandb",
)
```
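
Since `report_to="wandb"` is set, the trainer will try to log metrics to Weights & Biases. In a fresh environment (for example a new Colab runtime), authenticate once before training; a minimal sketch, assuming you have a W&B account and API key:

```python
import wandb

# Prompts for (or reads) your W&B API key; run once per session before trainer.train()
wandb.login()
```

If you prefer not to log anywhere, set `report_to="none"` in `TrainingArguments` instead.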

---

### **Step 5: Initialize the Trainer**
Initialize the `SFTTrainer` with the model, tokenizer, dataset, and training arguments.

```python
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=ds["train"],
    args=training_args,
)
```

---

### **Step 6: Start Training**
Begin the fine-tuning process by calling the `train` method on the trainer.

```python
trainer.train()
```

---

### **Step 7: Save the Fine-Tuned Model**
After training, save the fine-tuned model and tokenizer to a local directory.

1. **Save Model and Tokenizer**:
   - Use the `save_pretrained` method to save the model and tokenizer.

```python
save_directory = "/content/my_model"
model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
```

2. **Zip and Download the Model**:
   - Zip the saved directory and download it for future use.

```python
import shutil
shutil.make_archive(save_directory, 'zip', save_directory)

from google.colab import files
files.download(f"{save_directory}.zip")
```
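
Optionally, before downloading, you can reload the saved checkpoint and run a quick generation to confirm that the fine-tuned weights and chat template were written correctly. A minimal sketch (the variable names and test prompt here are only illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Reload the fine-tuned checkpoint from the local save directory
ft_tokenizer = AutoTokenizer.from_pretrained(save_directory)
ft_model = AutoModelForCausalLM.from_pretrained(save_directory)

ft_pipe = pipeline("text-generation", model=ft_model, tokenizer=ft_tokenizer, device=0 if device == "cuda" else -1)
messages = [{"role": "user", "content": "Solve step by step: what is 17 * 24?"}]
prompt = ft_tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(ft_pipe(prompt, max_new_tokens=200)[0]["generated_text"])
```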

---
### **Model & Quant**

| **Item** | **Link** |
|----------|----------|
| **Model** | [SmolLM2-CoT-360M](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M) |
| **Quantized Version** | [SmolLM2-CoT-360M-GGUF](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M-GGUF) |
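
To run the GGUF build locally without PyTorch, you can load it through `llama-cpp-python`. A minimal sketch, assuming the repository contains a Q4_K_M quant (check the repo's file list and substitute the actual filename):

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/SmolLM2-CoT-360M-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant name; pick a file that exists in the repo
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain chain-of-thought reasoning in one paragraph."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```

The same GGUF file also works with other GGUF runtimes such as the llama.cpp CLI.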
256
+
257
+ ### **Conclusion**
258
+
259
+ Fine-tuning SmolLM involves setting up the environment, loading the model and dataset, configuring training parameters, and running the training loop. By following these steps, you can adapt SmolLM to your specific use case, whether it’s for reasoning tasks, chat-based applications, or other NLP tasks.
260
+
261
+ This process is highly customizable, so feel free to experiment with different datasets, hyperparameters, and training strategies to achieve the best results for your project.
262
+
263
+ ---
264
+