chauhoang committed on
Commit ea44962 · verified · 1 Parent(s): 6ffd0ae

End of training

Files changed (2)
  1. README.md +11 -4
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -65,7 +65,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 1
+max_steps: 50
 micro_batch_size: 2
 mlflow_experiment_name: /tmp/c2630d242d385457_train_data.json
 model_type: AutoModelForCausalLM
@@ -90,7 +90,7 @@ wandb_name: ae63dcd0-ac71-4f17-a221-7c6244b5c6eb
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: ae63dcd0-ac71-4f17-a221-7c6244b5c6eb
-warmup_steps: 1
+warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -101,6 +101,8 @@ xformers_attention: null
 # 9b76e7fd-f058-05f6-3b5a-5a8bf960edb9
 
 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.6375
 
 ## Model description
@@ -127,14 +129,19 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 2
-- training_steps: 1
+- lr_scheduler_warmup_steps: 10
+- training_steps: 50
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | No log | 0.0000 | 1 | 0.7466 |
+| 0.7084 | 0.0004 | 10 | 0.7225 |
+| 0.6795 | 0.0009 | 20 | 0.6730 |
+| 0.6638 | 0.0013 | 30 | 0.6490 |
+| 0.629 | 0.0017 | 40 | 0.6393 |
+| 0.6377 | 0.0022 | 50 | 0.6375 |
 
 
 ### Framework versions
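For reference, the changed scheduler values (warmup_steps: 10, max_steps: 50, cosine schedule) describe a learning rate that ramps up linearly for the first 10 steps and then decays along a cosine curve. Below is a minimal sketch of that schedule, assuming the standard transformers helper is what the trainer uses under the hood; the parameter, optimizer, and learning rate are illustrative placeholders, not values taken from this diff.

```python
# Sketch of the cosine-with-warmup schedule configured by the new values
# (warmup_steps: 10, max_steps: 50). The parameter and lr are placeholders.
import torch
from transformers import get_cosine_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))       # placeholder parameter
optimizer = torch.optim.AdamW([param], lr=2e-4)  # lr is illustrative only
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,    # warmup_steps from the config
    num_training_steps=50,  # max_steps / training_steps from the config
)

for step in range(50):
    optimizer.step()
    scheduler.step()
    if (step + 1) % 10 == 0:
        # LR rises linearly for 10 steps, then decays along a cosine curve
        print(step + 1, scheduler.get_last_lr()[0])
```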
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d79a41a57126ec30aacae1f169dbebe94cfba042299b1187aa3baaa49a7a587a
+oid sha256:136736bf1aa24cbe656ed3cbe23aca79977ef3f548819cd3921f4fdb81745d93
 size 25342042
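The adapter_model.bin change only swaps the Git LFS pointer's sha256 oid; the payload size is unchanged. A short sketch for checking a downloaded copy against the pointer above, assuming the file sits at a hypothetical local path:

```python
# Verify a downloaded adapter_model.bin against the LFS pointer in this
# diff. Expected oid/size come from the pointer; the path is an assumption.
import hashlib
import os

EXPECTED_OID = "136736bf1aa24cbe656ed3cbe23aca79977ef3f548819cd3921f4fdb81745d93"
EXPECTED_SIZE = 25342042
PATH = "adapter_model.bin"  # hypothetical local download location

assert os.path.getsize(PATH) == EXPECTED_SIZE, "size does not match pointer"

sha = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_OID, "sha256 does not match pointer"
print("adapter_model.bin matches its LFS pointer")
```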