ncbateman committed
Commit d382303 · verified · 1 Parent(s): 475c641

End of training

Files changed (2)
  1. README.md +12 -21
  2. adapter_model.bin +2 -2
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: peft
 license: llama3.2
- base_model: unsloth/Llama-3.2-3B-Instruct
+ base_model: unsloth/Llama-3.2-1B-Instruct
 tags:
 - axolotl
 - generated_from_trainer
@@ -19,22 +19,13 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.4.1`
 ```yaml
 adapter: lora
- base_model: unsloth/Llama-3.2-3B-Instruct
+ base_model: unsloth/Llama-3.2-1B-Instruct
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: null
 datasets:
- - data_files:
-   - alpaca_2k_test_train_data.json
-   ds_type: json
-   path: /workspace/input_data/alpaca_2k_test_train_data.json
-   type:
-     field_input: input
-     field_instruction: instruction
-     field_output: output
-     field_system: text
-     system_format: '{system}'
-     system_prompt: ''
+ - path: mhenrichsen/alpaca_2k_test
+   type: alpaca
 debug: null
 deepspeed: null
 early_stopping_patience: null
@@ -65,7 +56,7 @@ lora_target_linear: true
 lr_scheduler: cosine
 max_steps: 10
 micro_batch_size: 2
- mlflow_experiment_name: /tmp/alpaca_2k_test_train_data.json
+ mlflow_experiment_name: mhenrichsen/alpaca_2k_test
 model_type: LlamaForCausalLM
 num_epochs: 1
 optimizer: adamw_bnb_8bit
@@ -86,7 +77,7 @@ wandb_entity: breakfasthut
 wandb_mode: online
 wandb_project: tuning-miner
 wandb_run: miner
- wandb_runid: 38a84026-8f15-419b-afa8-a97fbd07e799
+ wandb_runid: 383a850e-bb15-45a2-8f4b-fc96eb001a74
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -97,9 +88,9 @@ xformers_attention: null

 # tuning-miner-output

- This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) on the None dataset.
+ This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0912
+ - Loss: 1.2168

 ## Model description

@@ -133,10 +124,10 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
- | 0.2093        | 0.0047 | 1    | 0.3442          |
- | 0.0777        | 0.0140 | 3    | 0.3320          |
- | 0.0852        | 0.0281 | 6    | 0.2251          |
- | 0.0449        | 0.0421 | 9    | 0.0912          |
+ | 1.3218        | 0.0042 | 1    | 1.2625          |
+ | 1.3028        | 0.0126 | 3    | 1.2629          |
+ | 1.4831        | 0.0253 | 6    | 1.2133          |
+ | 1.2899        | 0.0379 | 9    | 1.2168          |


 ### Framework versions
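In short, the `datasets` change above swaps the local JSON file (with its explicit `field_*` mapping) for the Hub dataset `mhenrichsen/alpaca_2k_test` using axolotl's built-in `alpaca` prompt format. A minimal sketch to inspect the columns that format expects, assuming the `datasets` library is installed and the Hub dataset is publicly accessible:

```python
# Sketch: peek at the alpaca-format columns the new config relies on.
# Assumes mhenrichsen/alpaca_2k_test is public and has a "train" split.
from datasets import load_dataset

ds = load_dataset("mhenrichsen/alpaca_2k_test", split="train")
print(ds.column_names)        # alpaca format: instruction / input / output (+ text)
print(ds[0]["instruction"])   # first training example's instruction
```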
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:82adfb42eb8867327adba9f02bed01a776266e6d21f03b8dae1c8b298347e6ce
- size 97396522
+ oid sha256:e11e87d9f4c05d9e49cabe8c727e90dd550085dc314ff9ba6e0633297e846e2e
+ size 45169354
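For completeness, a minimal sketch of loading the updated LoRA adapter on top of the new 1B base model with PEFT. The adapter repo id `ncbateman/tuning-miner-output` is an assumption inferred from the author and the README title, not confirmed by this page:

```python
# Sketch: attach the LoRA adapter from this commit to the 1B base model.
# "ncbateman/tuning-miner-output" is a guessed repo id; substitute the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "ncbateman/tuning-miner-output")
```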