eliebak (HF staff) committed
Commit 2f3b256
1 Parent(s): 1717970

Update README.md

Files changed (1): README.md (+105 -68)
README.md CHANGED
@@ -1,87 +1,124 @@
  ---
- base_model: loubnabnl/smollm2-360M-8k-lc100k-mix1-ep2
- tags:
- - alignment-handbook
- - trl
- - dpo
- - generated_from_trainer
- - trl
- - dpo
- - generated_from_trainer
- datasets:
- - HuggingFaceH4/ultrafeedback_binarized
- model-index:
- - name: smollm2-360M-8k-lc100k-dpo-ultaf-ep2
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/loubnabnl/huggingface/runs/3nedrmyg)
- # smollm2-360M-8k-lc100k-dpo-ultaf-ep2

- This model is a fine-tuned version of [loubnabnl/smollm2-360M-8k-lc100k-mix1-ep2](https://huggingface.co/loubnabnl/smollm2-360M-8k-lc100k-mix1-ep2) on the HuggingFaceH4/ultrafeedback_binarized dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.6348
- - Rewards/chosen: -0.0342
- - Rewards/rejected: -0.3910
- - Rewards/accuracies: 0.6190
- - Rewards/margins: 0.3568
- - Logps/rejected: -323.7198
- - Logps/chosen: -375.6464
- - Logits/rejected: -1.6969
- - Logits/chosen: -1.6408

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1e-06
- - train_batch_size: 2
- - eval_batch_size: 4
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 128
- - total_eval_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 2

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
- |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.7098 | 0.2094 | 100 | 0.7162 | -0.0109 | -0.0675 | 0.5278 | 0.0566 | -323.0727 | -375.5997 | -1.6983 | -1.6387 |
- | 0.6825 | 0.4187 | 200 | 0.6842 | -0.0010 | -0.1880 | 0.5794 | 0.1870 | -323.3139 | -375.5800 | -1.6938 | -1.6358 |
- | 0.663 | 0.6281 | 300 | 0.6617 | 0.0225 | -0.2389 | 0.6032 | 0.2614 | -323.4156 | -375.5330 | -1.6893 | -1.6317 |
- | 0.6547 | 0.8375 | 400 | 0.6591 | 0.0001 | -0.3516 | 0.6389 | 0.3517 | -323.6410 | -375.5778 | -1.6980 | -1.6414 |
- | 0.6456 | 1.0468 | 500 | 0.6430 | 0.0133 | -0.3566 | 0.6667 | 0.3699 | -323.6510 | -375.5514 | -1.6931 | -1.6365 |
- | 0.6054 | 1.2562 | 600 | 0.6423 | -0.0329 | -0.3895 | 0.6349 | 0.3566 | -323.7167 | -375.6438 | -1.6991 | -1.6431 |
- | 0.6129 | 1.4656 | 700 | 0.6431 | -0.0449 | -0.4183 | 0.6349 | 0.3735 | -323.7745 | -375.6677 | -1.6979 | -1.6414 |
- | 0.5972 | 1.6750 | 800 | 0.6384 | -0.0695 | -0.4139 | 0.6429 | 0.3444 | -323.7656 | -375.7169 | -1.6965 | -1.6399 |
- | 0.6207 | 1.8843 | 900 | 0.6362 | -0.0627 | -0.4222 | 0.6786 | 0.3595 | -323.7822 | -375.7033 | -1.6976 | -1.6407 |

- ### Framework versions

- - Transformers 4.42.3
- - Pytorch 2.1.2
- - Datasets 2.20.0
- - Tokenizers 0.19.1
  ---
+ library_name: transformers
+ license: apache-2.0
+ language:
+ - en
  ---
+
+ # SmolLM2
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/7IzejwZJ62MfRwvDYvQXY.png)
+
+ ## Table of Contents
+
+ 1. [Model Summary](#model-summary)
+ 2. [Evaluation](#evaluation)
+ 3. [Limitations](#limitations)
+ 4. [Training](#training)
+ 5. [License](#license)
+ 6. [Citation](#citation)
+
+ ## Model Summary
+
+ SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
+
+ SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new filtered datasets we curated and will release soon.
+
+ We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets, designed to enhance instruction following, rewriting, and summarization capabilities. We then applied Direct Preference Optimization (DPO) using a mix of UltraFeedback and DPO-ORPO.
+
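+ As a rough illustration of this DPO stage, here is a minimal sketch using TRL's `DPOTrainer` on the UltraFeedback preference pairs; the starting checkpoint, split, and hyperparameters below are illustrative assumptions, not the exact training setup:
+
+ ```python
+ # Illustrative DPO sketch, not the exact SmolLM2 recipe.
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import DPOConfig, DPOTrainer
+
+ base = "HuggingFaceTB/SmolLM2-360M"  # assumed SFT starting checkpoint
+ model = AutoModelForCausalLM.from_pretrained(base)
+ tokenizer = AutoTokenizer.from_pretrained(base)
+
+ # Preference pairs with "chosen" and "rejected" completions
+ dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
+
+ args = DPOConfig(output_dir="smollm2-dpo", beta=0.1, num_train_epochs=2)  # illustrative values
+ # Older TRL versions take `tokenizer=` instead of `processing_class=`
+ trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
+ trainer.train()
+ ```
+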
+ ### How to use
+
+ ### Transformers
+ ```bash
+ pip install transformers
+ ```
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
+
+ device = "cuda"  # for GPU usage or "cpu" for CPU usage
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
+ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
+
+ messages = [{"role": "user", "content": "What is the capital of France?"}]
+ input_text = tokenizer.apply_chat_template(messages, tokenize=False)
+ print(input_text)
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
+ outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
+ print(tokenizer.decode(outputs[0]))
+ ```
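+
+ Depending on the chat template, you may also want the formatted prompt to end with the assistant turn marker; `add_generation_prompt` is a standard `transformers` argument for this:
+
+ ```python
+ # Variant: append the assistant generation prompt to the formatted chat
+ input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ ```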
+
+ ### Chat in TRL
+ You can also use the TRL CLI to chat with the model from the terminal:
+ ```bash
+ pip install trl
+ trl chat --model_name_or_path HuggingFaceTB/SmolLM2-360M-Instruct --device cpu
+ ```
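+
+ If you prefer to stay in Python, the `transformers` pipeline API offers a comparable quick check; this is a minimal sketch, the generation settings are illustrative, and passing chat messages directly requires a recent `transformers` release:
+
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-360M-Instruct")
+ messages = [{"role": "user", "content": "What is the capital of France?"}]
+ # The pipeline applies the chat template before generating
+ print(pipe(messages, max_new_tokens=50)[0]["generated_text"])
+ ```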
+
+ ## Evaluation
+
+ In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them.
+
+ ## Base Pre-Trained Model
+
+ | Metrics            | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
+ |:-------------------|:------------:|:------------:|:-----------:|
+ | HellaSwag          | **54.5**     | 51.2         | 51.8        |
+ | ARC (Average)      | **53.0**     | 45.4         | 50.1        |
+ | PIQA               | **71.7**     | 69.9         | 71.6        |
+ | MMLU (cloze)       | **35.8**     | 33.7         | 34.4        |
+ | CommonsenseQA      | **38.0**     | 31.6         | 35.3        |
+ | TriviaQA           | **16.9**     | 4.3          | 9.1         |
+ | Winogrande         | 52.5         | **54.1**     | 52.8        |
+ | OpenBookQA         | **37.4**     | **37.4**     | 37.2        |
+ | GSM8K (5-shot)     | 3.2          | **33.4**     | 1.6         |
+
+ ## Instruction Model
+
+ | Metric                       | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
+ |:-----------------------------|:---------------------:|:---------------------:|:--------------------:|
+ | IFEval (Average prompt/inst) | **41.0**              | 31.6                  | 19.8                 |
+ | MT-Bench                     | 3.66                  | **4.16**              | 3.37                 |
+ | HellaSwag                    | **52.1**              | 48.0                  | 47.9                 |
+ | ARC (Average)                | **43.7**              | 37.3                  | 38.8                 |
+ | PIQA                         | **70.8**              | 67.2                  | 69.4                 |
+ | MMLU (cloze)                 | **32.8**              | 31.7                  | 30.6                 |
+ | BBH (3-shot)                 | 27.3                  | **30.7**              | 24.4                 |
+ | GSM8K (5-shot)               | 7.43                  | **26.8**              | 1.36                 |
+
+ ## Limitations
+
+ SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
+
+ ## Training
+
+ ### Model
+
+ - **Architecture:** Transformer decoder
+ - **Pretraining tokens:** 4T
+ - **Precision:** bfloat16 (see the loading sketch below)
+
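+ Since the checkpoint is trained and published in bfloat16, you can load it in the same precision to roughly halve memory use; a minimal sketch using standard `transformers` and `torch` arguments:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ # Load the weights in bfloat16, matching the training precision
+ model = AutoModelForCausalLM.from_pretrained(
+     "HuggingFaceTB/SmolLM2-360M-Instruct", torch_dtype=torch.bfloat16
+ )
+ ```
+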
+ ### Hardware
+
+ - **GPUs:** 64 H100
+
+ ### Software
+
+ - **Training Framework:** [nanotron](https://github.com/huggingface/nanotron/tree/main)
+
+ ## License
+
+ [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
+ ## Citation
+ ```bibtex
+ @misc{allal2024SmolLM2,
+       title={SmolLM2 - with great data, comes great performance},
+       author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
+       year={2024},
+ }
+ ```