habout632 committed
Commit bc31e20 (verified)
1 Parent(s): 35a2c5f

End of training

Files changed (2)
  1. README.md +229 -0
  2. adapter_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,229 @@
---
license: llama2
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: EvolCodeLlama-7b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: codellama/CodeLlama-7b-hf
base_model_config: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
hub_model_id: EvolCodeLlama-7b

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: mlabonne/Evol-Instruct-Python-1k
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 2048
sample_packing: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details><br>

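With Axolotl, this file is typically passed straight to the trainer (for example `accelerate launch -m axolotl.cli.train config.yml`). For readers who want to see how the quantization and LoRA settings map onto plain `transformers`/`peft` objects, here is a minimal sketch. It assumes `bitsandbytes` is installed, and it is only a rough equivalent: `target_modules="all-linear"` (available in PEFT 0.8+) approximates what Axolotl derives from `lora_target_linear: true`, rather than reproducing it exactly.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit base model, mirroring load_in_4bit: true and bf16: true above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA hyperparameters copied from the config (lora_r, lora_alpha, lora_dropout)
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",  # approximation of lora_target_linear: true
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```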
# EvolCodeLlama-7b

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the [mlabonne/Evol-Instruct-Python-1k](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3796
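
The usage sections below are still empty, so here is a minimal inference sketch with `peft`. The repo id `habout632/EvolCodeLlama-7b` is an assumption based on the committer name and the `hub_model_id` in the config, and the prompt is purely illustrative.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "habout632/EvolCodeLlama-7b"  # assumed repo id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
# Loads the CodeLlama-7b base model and applies this QLoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the training data is in alpaca format (`type: alpaca` in the config), wrapping the prompt in the Alpaca instruction template will likely give better results than a bare string.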

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8 (see the note after this list)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 (paged_adamw_32bit in the axolotl config)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

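A quick sanity check on the batch-size arithmetic: `total_train_batch_size` = `train_batch_size` × `gradient_accumulation_steps` = 2 × 4 = 8, which implies the run used a single device.
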
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3178 | 0.01 | 1 | 0.5311 |
| 0.3147 | 0.03 | 4 | 0.5312 |
| 0.3626 | 0.07 | 8 | 0.5310 |
| 0.6265 | 0.1 | 12 | 0.5296 |
| 0.429 | 0.14 | 16 | 0.5270 |
| 0.5086 | 0.17 | 20 | 0.5205 |
| 0.4335 | 0.21 | 24 | 0.5067 |
| 0.3383 | 0.24 | 28 | 0.4842 |
| 0.3688 | 0.28 | 32 | 0.4603 |
| 0.2528 | 0.31 | 36 | 0.4403 |
| 0.3105 | 0.35 | 40 | 0.4251 |
| 0.4936 | 0.38 | 44 | 0.4162 |
| 0.4146 | 0.42 | 48 | 0.4086 |
| 0.3327 | 0.45 | 52 | 0.4024 |
| 0.3429 | 0.48 | 56 | 0.3971 |
| 0.3328 | 0.52 | 60 | 0.3937 |
| 0.1844 | 0.55 | 64 | 0.3901 |
| 0.3001 | 0.59 | 68 | 0.3887 |
| 0.3632 | 0.62 | 72 | 0.3872 |
| 0.1997 | 0.66 | 76 | 0.3847 |
| 0.2461 | 0.69 | 80 | 0.3823 |
| 0.2865 | 0.73 | 84 | 0.3812 |
| 0.26 | 0.76 | 88 | 0.3805 |
| 0.3191 | 0.8 | 92 | 0.3792 |
| 0.4642 | 0.83 | 96 | 0.3763 |
| 0.2649 | 0.87 | 100 | 0.3750 |
| 0.2095 | 0.9 | 104 | 0.3727 |
| 0.2738 | 0.94 | 108 | 0.3737 |
| 0.4274 | 0.97 | 112 | 0.3730 |
| 0.2722 | 1.0 | 116 | 0.3724 |
| 0.2164 | 1.02 | 120 | 0.3705 |
| 0.1549 | 1.05 | 124 | 0.3726 |
| 0.3051 | 1.08 | 128 | 0.3725 |
| 0.1873 | 1.12 | 132 | 0.3730 |
| 0.3388 | 1.15 | 136 | 0.3738 |
| 0.2504 | 1.19 | 140 | 0.3741 |
| 0.2851 | 1.22 | 144 | 0.3714 |
| 0.2365 | 1.26 | 148 | 0.3690 |
| 0.3986 | 1.29 | 152 | 0.3699 |
| 0.1913 | 1.33 | 156 | 0.3720 |
| 0.1963 | 1.36 | 160 | 0.3698 |
| 0.1824 | 1.4 | 164 | 0.3679 |
| 0.1453 | 1.43 | 168 | 0.3685 |
| 0.3073 | 1.47 | 172 | 0.3702 |
| 0.1501 | 1.5 | 176 | 0.3692 |
| 0.2167 | 1.53 | 180 | 0.3662 |
| 0.3007 | 1.57 | 184 | 0.3660 |
| 0.2203 | 1.6 | 188 | 0.3666 |
| 0.3978 | 1.64 | 192 | 0.3669 |
| 0.2397 | 1.67 | 196 | 0.3663 |
| 0.2161 | 1.71 | 200 | 0.3656 |
| 0.2593 | 1.74 | 204 | 0.3651 |
| 0.2113 | 1.78 | 208 | 0.3658 |
| 0.2435 | 1.81 | 212 | 0.3657 |
| 0.2625 | 1.85 | 216 | 0.3639 |
| 0.302 | 1.88 | 220 | 0.3624 |
| 0.2556 | 1.92 | 224 | 0.3611 |
| 0.2063 | 1.95 | 228 | 0.3609 |
| 0.1994 | 1.98 | 232 | 0.3612 |
| 0.2229 | 2.02 | 236 | 0.3613 |
| 0.1983 | 2.03 | 240 | 0.3634 |
| 0.1925 | 2.06 | 244 | 0.3725 |
| 0.1778 | 2.1 | 248 | 0.3832 |
| 0.1293 | 2.13 | 252 | 0.3834 |
| 0.2166 | 2.16 | 256 | 0.3789 |
| 0.2082 | 2.2 | 260 | 0.3760 |
| 0.1858 | 2.23 | 264 | 0.3761 |
| 0.1862 | 2.27 | 268 | 0.3763 |
| 0.1619 | 2.3 | 272 | 0.3783 |
| 0.174 | 2.34 | 276 | 0.3786 |
| 0.2414 | 2.37 | 280 | 0.3790 |
| 0.1977 | 2.41 | 284 | 0.3783 |
| 0.1678 | 2.44 | 288 | 0.3784 |
| 0.2263 | 2.48 | 292 | 0.3786 |
| 0.082 | 2.51 | 296 | 0.3783 |
| 0.2621 | 2.55 | 300 | 0.3784 |
| 0.1754 | 2.58 | 304 | 0.3795 |
| 0.1957 | 2.61 | 308 | 0.3802 |
| 0.1203 | 2.65 | 312 | 0.3803 |
| 0.1388 | 2.68 | 316 | 0.3796 |
| 0.1699 | 2.72 | 320 | 0.3796 |
| 0.161 | 2.75 | 324 | 0.3796 |
| 0.2394 | 2.79 | 328 | 0.3792 |
| 0.1465 | 2.82 | 332 | 0.3795 |
| 0.1746 | 2.86 | 336 | 0.3794 |
| 0.1839 | 2.89 | 340 | 0.3795 |
| 0.1581 | 2.93 | 344 | 0.3796 |

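Validation loss bottoms out near 0.361 late in epoch 2 (steps 224–232) and then plateaus slightly higher through epoch 3, consistent with the final evaluation loss of 0.3796 reported above.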

### Framework versions

- PEFT 0.8.2
- Transformers 4.39.0.dev0
- PyTorch 2.0.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.0
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:35f2d4fdce661818236104d8da1041306f840e960d1d627ff24bb8cc516e8beb
size 319977229
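As the `version https://git-lfs.github.com/spec/v1` header indicates, this is a Git LFS pointer file: the repository tracks only the SHA-256 object id and byte size (about 320 MB), while the adapter weights themselves are stored in LFS.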