PaulD committed on
Commit aae3347
1 parent: d94f4df

End of training

Files changed (3)
  1. README.md +15 -15
  2. adapter_model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -14,18 +14,18 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/pauld/huggingface/runs/em482maw)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/pauld/huggingface/runs/y7di7l44)
 # kto-aligned-model-lora
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5001
-- Eval/rewards/chosen: 0.1739
-- Eval/logps/chosen: -0.4029
-- Eval/rewards/rejected: 0.1787
-- Eval/logps/rejected: -0.0087
-- Eval/rewards/margins: -0.0048
-- Eval/kl: 1.7305
+- Loss: 0.4990
+- Eval/rewards/chosen: 0.1561
+- Eval/logps/chosen: -0.6624
+- Eval/rewards/rejected: 0.1281
+- Eval/logps/rejected: -1.9415
+- Eval/rewards/margins: 0.0281
+- Eval/kl: 1.5643
 
 ## Model description
 
@@ -58,13 +58,13 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Eval/kl |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.5 | 1.0 | 9 | 0.5000 | 1.2364 |
-| 0.4994 | 2.0 | 18 | 0.5002 | 1.7169 |
-| 0.4985 | 3.0 | 27 | 0.5003 | 1.7311 |
-| 0.4981 | 4.0 | 36 | 0.5002 | 1.7306 |
-| 0.4976 | 5.0 | 45 | 0.5001 | 1.7305 |
+| Training Loss | Epoch | Step | Validation Loss | Eval/kl |
+|:-------------:|:------:|:----:|:---------------:|:-------:|
+| 0.4994 | 0.9057 | 8 | 0.4997 | 0.8856 |
+| 0.5 | 1.9245 | 17 | 0.4994 | 1.5546 |
+| 0.501 | 2.9434 | 26 | 0.4992 | 1.5634 |
+| 0.5004 | 3.9623 | 35 | 0.4991 | 1.5675 |
+| 0.4999 | 4.5283 | 40 | 0.4990 | 1.5643 |
 
 
 ### Framework versions
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:51cf6a46415880ad28aa06798a0e1c911e1c554721e670aaa19708322019f84d
+oid sha256:e9edc78ac5314459cade5002a9b7fbea45d0f906f16037086f352c5e657d6730
 size 8397184
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e9122972af136c3835ff9b69597ab9835cc8f727c57b9a37afdb1bff7816ddd4
+oid sha256:b09854525ea44d0ece38e709aa75664028ad8e7e57216b0f5a61dfec8fa1bb4c
 size 5496
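
For reference, a minimal usage sketch that is not part of this commit: the updated adapter_model.safetensors holds LoRA weights trained with KTO, which would typically be loaded on top of the meta-llama/Meta-Llama-3-8B-Instruct base model via transformers and peft. The adapter repository id below is a placeholder for this repo's actual Hub path.

```python
# Sketch only: load the KTO-trained LoRA adapter onto the frozen base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "your-username/kto-aligned-model-lora"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter (adapter_model.safetensors) to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain what KTO alignment does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```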