UsernameJustAnother committed
Commit 0819953 · verified · 1 Parent(s): 7329143

Update README.md

Files changed (1): README.md (+36 -1)
README.md CHANGED
@@ -1,4 +1,3 @@
- ---
  base_model: unsloth/Mistral-Nemo-Instruct-2407
  language:
  - en
@@ -17,6 +16,42 @@ tags:
  - **License:** apache-2.0
  - **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407
 
+ Experimental RP finetune on a secret-sauce dataset with rsLoRA, r = 256, trained on a Colab A100 instance. 36 GB of VRAM used; 2 epochs took ~3.5 hrs of training.
+
+ This is for A/B testing against Marlin v1, to see what difference rank 256 (v2) makes compared to rank 64 (v1).
+
+ ```
+ ==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
+    \\   /|    Num examples = 8,160 | Num Epochs = 2
+ O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
+ \        /    Total batch size = 8 | Total steps = 2,040
+  "-____-"     Number of trainable parameters = 912,261,120
+
+ r = 256,
+ target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
+                   "gate_proj", "up_proj", "down_proj",],
+ lora_alpha = 16,
+ lora_dropout = 0,  # Supports any, but = 0 is optimized
+ bias = "none",     # Supports any, but = "none" is optimized
+ use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
+ random_state = 3407,
+ use_rslora = True,  # lora_alpha --> 16
+ loftq_config = None,
+
+ per_device_train_batch_size = 2,
+ gradient_accumulation_steps = 4,
+ warmup_steps = 5,
+ num_train_epochs = 2,
+ learning_rate = 2e-5,  # down from 2e-4; could go down to 5e-5, then 1e-5
+ fp16 = not is_bfloat16_supported(),
+ bf16 = is_bfloat16_supported(),
+ logging_steps = 1,
+ optim = "adamw_8bit",
+ weight_decay = 0.01,
+ lr_scheduler_type = "linear",
+ seed = 3407,
+ ```
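A quick note on `use_rslora = True`: rank-stabilized LoRA scales the adapter update by `lora_alpha / sqrt(r)` rather than vanilla LoRA's `lora_alpha / r`, which is presumably what the `# lora_alpha --> 16` comment is shorthand for (sqrt(256) = 16 becomes the divisor). A minimal sanity check of the effective scales at the two ranks being A/B-tested:

```python
import math

# Effective adapter scale: vanilla LoRA uses lora_alpha / r,
# rsLoRA (rank-stabilized LoRA) uses lora_alpha / sqrt(r).
lora_alpha = 16
for r in (64, 256):  # Marlin v1 vs v2 ranks
    print(f"r={r}: vanilla {lora_alpha / r:.4f}, rsLoRA {lora_alpha / math.sqrt(r):.4f}")
# r=64: vanilla 0.2500, rsLoRA 2.0000
# r=256: vanilla 0.0625, rsLoRA 1.0000
```

So at rank 256 the update keeps a scale of 1.0 instead of shrinking to 0.0625, which is what lets a rank this high train without cranking `lora_alpha`.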
  This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
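For context, a minimal sketch of how the hyperparameter dump above slots into Unsloth's `FastLanguageModel` and TRL's `SFTTrainer`, in the style of the Unsloth Colab notebooks. The `max_seq_length`, `load_in_4bit`, dataset name, and text column are assumed placeholders; only the values shown in the dump come from this README. The step count also checks out: 8,160 examples / (2 × 4 = 8 effective batch) = 1,020 steps per epoch, i.e. 2,040 total steps over 2 epochs, matching the banner.

```python
# Hypothetical reconstruction: placeholder dataset/model-load details, real hyperparameters.
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 8192  # assumption: not stated in the README

# Load the base model (4-bit load is an assumption, typical for a Colab A100 run).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Mistral-Nemo-Instruct-2407",
    max_seq_length = max_seq_length,
    load_in_4bit = True,
)

# Attach the LoRA adapter exactly as dumped: rsLoRA, r = 256, alpha = 16.
model = FastLanguageModel.get_peft_model(
    model,
    r = 256,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    use_rslora = True,
    loftq_config = None,
)

dataset = load_dataset("your-username/secret-sauce-rp", split = "train")  # placeholder name

# TRL 0.8-era SFTTrainer signature, as used in the Unsloth notebooks of the time.
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",  # assumption: a pre-templated text column
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        num_train_epochs = 2,
        learning_rate = 2e-5,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer.train()
```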