jncraton committed · verified
Commit 9be3b82 · 1 Parent(s): 83b1658

Upload folder using huggingface_hub

Files changed (2):
  1. README.md +117 -10
  2. tokenizer_config.json +1 -1
README.md CHANGED
@@ -3,6 +3,13 @@ library_name: transformers
 license: apache-2.0
 language:
 - en
+pipeline_tag: text-generation
+tags:
+- safetensors
+- onnx
+- transformers.js
+base_model:
+- HuggingFaceTB/SmolLM2-1.7B
 ---
 
 
@@ -22,15 +29,18 @@ language:
 
 ## Model Summary
 
-SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
+SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. More details in our paper: https://arxiv.org/abs/2502.02737v1
 
 The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
 
 The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
+You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smoltalk.
+
+For more details, refer to https://github.com/huggingface/smollm, where you will find pre-training, post-training, evaluation and local inference code.
 
 ### How to use
 
-### Transformers
+#### Transformers
 ```bash
 pip install transformers
 ```
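The README's basic Transformers usage snippet (old lines 37–51) is unchanged by this commit and therefore elided from the diff; only its last line, `print(tokenizer.decode(outputs[0]))`, surfaces as context in the next hunk header. For reference, a minimal sketch of that elided snippet, assuming the standard `transformers` chat-template API and the same generation settings used in the prompt examples later in this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda"  # or "cpu" when no GPU is available
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Build a chat-formatted prompt and generate a reply (illustrative message).
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```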
@@ -52,13 +62,40 @@ print(tokenizer.decode(outputs[0]))
 ```
 
 
-### Chat in TRL
+#### Chat in TRL
 You can also use the TRL CLI to chat with the model from the terminal:
 ```bash
 pip install trl
 trl chat --model_name_or_path HuggingFaceTB/SmolLM2-1.7B-Instruct --device cpu
 ```
 
+#### Transformers.js
+
+```bash
+npm i @huggingface/transformers
+```
+
+```js
+import { pipeline } from "@huggingface/transformers";
+
+// Create a text generation pipeline
+const generator = await pipeline(
+  "text-generation",
+  "HuggingFaceTB/SmolLM2-1.7B-Instruct",
+);
+
+// Define the list of messages
+const messages = [
+  { role: "system", content: "You are a helpful assistant." },
+  { role: "user", content: "Tell me a joke." },
+];
+
+// Generate a response
+const output = await generator(messages, { max_new_tokens: 128 });
+console.log(output[0].generated_text.at(-1).content);
+// "Why don't scientists trust atoms?\n\nBecause they make up everything!"
+```
+
 ## Evaluation
 
 In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
@@ -100,7 +137,7 @@ Below are some system and instruct prompts that work well for special tasks
 ```python
 system_prompt_rewrite = "You are an AI writing assistant. Your task is to rewrite the user's email to make it more professional and approachable while maintaining its main points and key message. Do not return any text other than the rewritten message."
 user_prompt_rewrite = "Rewrite the message below to make it more friendly and approachable while maintaining its main points and key message. Do not add any new information or return any text other than the rewritten message\nThe message:"
-messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content":f"{user_prompt_rewrite} The CI is failing after your last commit!}"]
+messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content":f"{user_prompt_rewrite} The CI is failing after your last commit!"}]
 input_text=tokenizer.apply_chat_template(messages, tokenize=False)
 inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
 outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
@@ -114,7 +151,7 @@ Hey there! I noticed that the CI isn't passing after your latest commit. Could y
 
 ```python
 system_prompt_summarize = "Provide a concise, objective summary of the input text in up to three sentences, focusing on key actions and intentions without using second or third person pronouns."
-messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content": INSERT_LONG_EMAIL]
+messages = [{"role": "system", "content": system_prompt_summarize}, {"role": "user", "content": INSERT_LONG_EMAIL}]
 input_text=tokenizer.apply_chat_template(messages, tokenize=False)
 inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
 outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
@@ -192,7 +229,73 @@ def parse_response(text: str) -> str | dict[str, any]:
     if matches:
         return json.loads(matches[0])
     return text
+
+
+model_name_smollm = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
+model = AutoModelForCausalLM.from_pretrained(model_name_smollm, device_map="auto", torch_dtype="auto", trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained(model_name_smollm)
+
+from datetime import datetime
+import random
+
+def get_current_time() -> str:
+    """Returns the current time in 24-hour format.
+
+    Returns:
+        str: Current time in HH:MM:SS format.
+    """
+    return datetime.now().strftime("%H:%M:%S")
+
+
+def get_random_number_between(min: int, max: int) -> int:
+    """
+    Gets a random number between min and max.
+
+    Args:
+        min: The minimum number.
+        max: The maximum number.
+
+    Returns:
+        A random number between min and max.
+    """
+    return random.randint(min, max)
+
+
+tools = [get_json_schema(get_random_number_between), get_json_schema(get_current_time)]
+
+toolbox = {"get_random_number_between": get_random_number_between, "get_current_time": get_current_time}
+
+query = "Give me a number between 1 and 300"
+
+messages = prepare_messages(query, tools=tools)
+
+inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+result = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
+
+tool_calls = parse_response(result)
+# [{'name': 'get_random_number_between', 'arguments': {'min': 1, 'max': 300}}]
+
+# Get tool responses
+tool_responses = [toolbox.get(tc["name"])(*tc["arguments"].values()) for tc in tool_calls]
+# [63]
+
+# For the second turn, rebuild the history of messages:
+history = messages.copy()
+# Add the "parsed response"
+history.append({"role": "assistant", "content": result})
+query = "Can you give me the hour?"
+history.append({"role": "user", "content": query})
+
+inputs = tokenizer.apply_chat_template(history, add_generation_prompt=True, return_tensors="pt").to(model.device)
+outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+result = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
+
+tool_calls = parse_response(result)
+tool_responses = [toolbox.get(tc["name"])(*tc["arguments"].values()) for tc in tool_calls]
+# ['07:57:25']
 ```
+More details, such as parallel function calls and handling tools that are not available, can be found [here](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct/blob/main/instructions_function_calling.md)
 
 ## Limitations
 
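The function-calling example added above relies on two helpers defined earlier in the README and elided from this diff: `get_json_schema` (the schema utility shipped in `transformers.utils`) and `prepare_messages`. Below is a hypothetical sketch of `prepare_messages`, inferred only from its call sites here; the published card defines the authoritative version:

```python
import json

def prepare_messages(query: str, tools: list[dict], history: list[dict] | None = None) -> list[dict]:
    """Hypothetical reconstruction: embed the tool JSON schemas in a system
    prompt and return a chat-format message list ending with the user query."""
    system = (
        "You are an assistant with access to the following tools. "
        "Reply with a JSON list of tool calls when appropriate:\n"
        + json.dumps(tools)
    )
    messages = [{"role": "system", "content": system}]
    if history:  # prior turns, as in the second-turn example above
        messages.extend(history)
    messages.append({"role": "user", "content": query})
    return messages
```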
 
@@ -213,7 +316,7 @@ SmolLM2 models primarily understand and generate content in English. They can pr
 ### Software
 
 - **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
-- **Alignement Handbook** [alignement-handbook](https://github.com/huggingface/alignment-handbook/)
+- **Alignment Handbook:** [alignment-handbook](https://github.com/huggingface/alignment-handbook/)
 
 ## License
 
@@ -221,9 +324,13 @@ SmolLM2 models primarily understand and generate content in English. They can pr
 
 ## Citation
 ```bash
-@misc{allal2024SmolLM2,
-      title={SmolLM2 - with great data, comes great performance},
-      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
-      year={2024},
+@misc{allal2025smollm2smolgoesbig,
+      title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
+      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
+      year={2025},
+      eprint={2502.02737},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2502.02737},
 }
 ```
 
tokenizer_config.json CHANGED
@@ -146,7 +146,7 @@
   "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful AI assistant named SmolLM, trained by Hugging Face<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": "<|im_end|>",
-  "model_max_length": 2048,
+  "model_max_length": 8192,
   "pad_token": "<|im_end|>",
   "tokenizer_class": "GPT2Tokenizer",
   "unk_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>",