PaulD committed
Commit 8839820 · verified · 1 Parent(s): e4829c4

End of training

README.md CHANGED
@@ -18,13 +18,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1781
-- Eval/rewards/chosen: -0.0396
-- Eval/logps/chosen: -200.9254
-- Eval/rewards/rejected: -0.0766
-- Eval/logps/rejected: -241.0806
-- Eval/rewards/margins: 0.0370
-- Eval/kl: 0.0308
+- Loss: 0.7208
+- Eval/rewards/chosen: 5.9950
+- Eval/logps/chosen: -150.6513
+- Eval/rewards/rejected: 5.3806
+- Eval/logps/rejected: -180.0595
+- Eval/rewards/margins: 0.6145
+- Eval/kl: 52.5969
 
 ## Model description
 
@@ -43,10 +43,10 @@ More information needed
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 0.001
 - train_batch_size: 1
 - eval_batch_size: 2
-- seed: 5678
+- seed: 9012
 - gradient_accumulation_steps: 8
 - total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -56,10 +56,11 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Eval/kl |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.7403 | 0.96 | 12 | 1.2009 | 0.1233 |
-| 0.961 | 2.0 | 25 | 1.1781 | 0.0308 |
+| Training Loss | Epoch | Step | Validation Loss | Eval/kl |
+|:-------------:|:------:|:----:|:---------------:|:-------:|
+| 0.4454 | 0.9412 | 12 | 0.6884 | 36.2845 |
+| 0.3037 | 1.9608 | 25 | 0.7410 | 37.1582 |
+| 0.129 | 2.9804 | 38 | 0.7208 | 52.5969 |
 
 
 ### Framework versions
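The metric names in this README (rewards/chosen, rewards/rejected, kl) match those logged by TRL's KTO trainer, and the ~27 MB adapter_model.safetensors in this commit is a LoRA adapter rather than full model weights. A minimal sketch of loading the adapter on top of the base model; the adapter repo id below is hypothetical, since the actual repo id is not shown on this page:

```python
# Sketch: load the LoRA adapter on top of the base model named in the README.
# "PaulD/llama3-8b-kto-lora" is a hypothetical repo id -- substitute the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # base model from the README
adapter_id = "PaulD/llama3-8b-kto-lora"          # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the ~27 MB adapter
model.eval()
```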
adapter_config.json CHANGED
@@ -20,9 +20,9 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "k_proj",
     "q_proj",
     "v_proj",
+    "k_proj",
     "o_proj"
   ],
   "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fd6946db558e74bfb454bad307b0bdc33704e39f257b512a5d6be81e58304d46
+oid sha256:7bc45d7319b9241c7d7a54616050d81ee34483405c3742434e459d9e266f94fc
 size 27297544
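adapter_model.safetensors is tracked with Git LFS, so the diff shows only the pointer file: the sha256 oid changes with the new weights while the byte size stays 27297544. A small sketch that checks a downloaded file against the pointer's oid and size:

```python
# Verify a downloaded file against its Git LFS pointer (sha256 oid + byte size).
import hashlib
from pathlib import Path

def verify_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    data = Path(path).read_bytes()
    return len(data) == expected_size and hashlib.sha256(data).hexdigest() == expected_oid

# New pointer values from the hunk above:
print(verify_lfs_pointer(
    "adapter_model.safetensors",
    "7bc45d7319b9241c7d7a54616050d81ee34483405c3742434e459d9e266f94fc",
    27297544,
))
```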
metrics.jsonl CHANGED
@@ -22,3 +22,6 @@
 {"epoch": 2.0, "precision": 0.6444444430123457, "recall": 0.9666666634444444, "fold": 0}
 {"epoch": 0.96, "precision": 0.34939758994048487, "recall": 0.9666666634444444, "fold": 0}
 {"epoch": 2.0, "precision": 0.3717948713182117, "recall": 0.9666666634444444, "fold": 0}
+{"epoch": 0.9411764705882353, "precision": 0.4999999916666668, "recall": 0.0749999998125, "fold": 0}
+{"epoch": 1.9607843137254903, "precision": 0.6326530599333611, "recall": 0.7749999980625, "fold": 0}
+{"epoch": 2.980392156862745, "precision": 0.7419354814776274, "recall": 0.5749999985625, "fold": 0}
metrics_epoch_0.9411764705882353_fold_0_lr_0.001_seed_9012_weight_2.0.json ADDED
@@ -0,0 +1 @@
+{"epoch": 0.9411764705882353, "precision": 0.4999999916666668, "recall": 0.0749999998125, "fold": 0}
metrics_epoch_1.9607843137254903_fold_0_lr_0.001_seed_9012_weight_2.0.json ADDED
@@ -0,0 +1 @@
+{"epoch": 1.9607843137254903, "precision": 0.6326530599333611, "recall": 0.7749999980625, "fold": 0}
metrics_epoch_2.980392156862745_fold_0_lr_0.001_seed_9012_weight_2.0.json ADDED
@@ -0,0 +1 @@
+{"epoch": 2.980392156862745, "precision": 0.7419354814776274, "recall": 0.5749999985625, "fold": 0}
results_epoch_0.9411764705882353_fold_0_lr_0.001_seed_9012_weight_2.0.json ADDED
The diff for this file is too large to render. See raw diff
 
results_epoch_1.9607843137254903_fold_0_lr_0.001_seed_9012_weight_2.0.json ADDED
The diff for this file is too large to render. See raw diff
 
results_epoch_2.980392156862745_fold_0_lr_0.001_seed_9012_weight_2.0.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5783f3b3aa05b56860198428463dd5f7822af851e3ab42375ceb413fc156243e
+oid sha256:155482eea6cfc9856e03cc92d1779c2047923e077d880b4dbb6e4d3a605f29f7
 size 5688
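training_args.bin is also an LFS pointer; the underlying file is the pickled transformers TrainingArguments saved by the Trainer. With the actual file downloaded locally, a sketch of inspecting it (recent PyTorch defaults torch.load to weights_only=True, so unpickling requires opting out; only do this for files you trust):

```python
# Inspect the pickled TrainingArguments stored in training_args.bin.
import torch

args = torch.load("training_args.bin", weights_only=False)  # trusted file only
print(args.learning_rate, args.seed, args.per_device_train_batch_size)
```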