CharlesLi committed
Commit a71f911 · verified · 1 Parent(s): ecc7fc7

Model save
README.md CHANGED
@@ -3,6 +3,7 @@ library_name: transformers
 tags:
 - trl
 - dpo
+- alignment-handbook
 - generated_from_trainer
 model-index:
 - name: OpenELM-1_1B-DPO-full-most-similar
@@ -16,15 +17,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9338
-- Rewards/chosen: -2.9219
-- Rewards/rejected: -3.1562
-- Rewards/accuracies: 0.5078
-- Rewards/margins: 0.2363
-- Logps/rejected: -604.0
-- Logps/chosen: -612.0
-- Logits/rejected: -12.0
-- Logits/chosen: -12.1875
+- Loss: 1.2107
+- Rewards/chosen: -6.5312
+- Rewards/rejected: -6.9375
+- Rewards/accuracies: 0.5176
+- Rewards/margins: 0.3906
+- Logps/rejected: -980.0
+- Logps/chosen: -972.0
+- Logits/rejected: -3.5781
+- Logits/chosen: -4.9688
 
 ## Model description
 
@@ -61,39 +62,39 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 0.2793 | 0.1047 | 100 | 0.7108 | -0.9570 | -1.125 | 0.5371 | 0.1699 | -402.0 | -414.0 | -11.9375 | -12.3125 |
-| 0.232 | 0.2093 | 200 | 0.7277 | -0.9414 | -1.0391 | 0.4980 | 0.1011 | -392.0 | -412.0 | -12.875 | -13.1875 |
-| 0.2635 | 0.3140 | 300 | 0.7188 | -0.7539 | -0.7852 | 0.4980 | 0.0309 | -366.0 | -394.0 | -13.9375 | -14.125 |
-| 0.2575 | 0.4186 | 400 | 0.7315 | -0.8555 | -0.9336 | 0.4902 | 0.0786 | -382.0 | -404.0 | -14.5 | -14.75 |
-| 0.2332 | 0.5233 | 500 | 0.7356 | -1.0547 | -1.0938 | 0.4824 | 0.0398 | -398.0 | -424.0 | -12.5625 | -12.875 |
-| 0.2629 | 0.6279 | 600 | 0.7679 | -1.0859 | -1.1016 | 0.4980 | 0.0173 | -400.0 | -428.0 | -15.0625 | -15.3125 |
-| 0.2748 | 0.7326 | 700 | 0.7595 | -1.1797 | -1.25 | 0.5039 | 0.0654 | -414.0 | -436.0 | -15.875 | -15.875 |
-| 0.1994 | 0.8373 | 800 | 0.8100 | -1.5469 | -1.5938 | 0.4961 | 0.0457 | -448.0 | -472.0 | -14.0625 | -14.375 |
-| 0.2198 | 0.9419 | 900 | 0.8008 | -1.5156 | -1.4922 | 0.4941 | -0.0189 | -438.0 | -470.0 | -13.75 | -13.9375 |
-| 0.0915 | 1.0466 | 1000 | 0.7759 | -1.3359 | -1.3984 | 0.5117 | 0.0554 | -428.0 | -452.0 | -14.4375 | -14.625 |
-| 0.0627 | 1.1512 | 1100 | 0.8235 | -1.8672 | -1.9922 | 0.5195 | 0.1245 | -488.0 | -506.0 | -14.375 | -14.5625 |
-| 0.0434 | 1.2559 | 1200 | 0.8238 | -1.9062 | -2.0312 | 0.5137 | 0.1221 | -492.0 | -508.0 | -15.625 | -15.625 |
-| 0.0312 | 1.3605 | 1300 | 0.8236 | -2.625 | -2.8438 | 0.5137 | 0.2070 | -572.0 | -580.0 | -10.125 | -10.6875 |
-| 0.0475 | 1.4652 | 1400 | 0.8034 | -1.9922 | -2.0781 | 0.4980 | 0.0884 | -496.0 | -516.0 | -13.5 | -13.625 |
-| 0.0452 | 1.5699 | 1500 | 0.8494 | -2.3125 | -2.4531 | 0.4961 | 0.1357 | -532.0 | -552.0 | -13.5 | -13.5625 |
-| 0.0508 | 1.6745 | 1600 | 0.8670 | -2.2031 | -2.2969 | 0.5 | 0.1050 | -520.0 | -540.0 | -14.3125 | -14.4375 |
-| 0.0344 | 1.7792 | 1700 | 0.8703 | -2.25 | -2.375 | 0.5078 | 0.1318 | -528.0 | -544.0 | -14.0 | -14.1875 |
-| 0.0454 | 1.8838 | 1800 | 0.8531 | -2.6875 | -2.8438 | 0.5020 | 0.1562 | -572.0 | -588.0 | -9.625 | -10.1875 |
-| 0.0445 | 1.9885 | 1900 | 0.8538 | -2.4062 | -2.5625 | 0.5078 | 0.1602 | -544.0 | -560.0 | -12.3125 | -12.5 |
-| 0.0041 | 2.0931 | 2000 | 0.9060 | -2.6406 | -2.8438 | 0.5098 | 0.2021 | -572.0 | -584.0 | -12.8125 | -13.0 |
-| 0.0041 | 2.1978 | 2100 | 0.9190 | -2.6875 | -2.9062 | 0.5039 | 0.2148 | -580.0 | -588.0 | -13.0 | -13.1875 |
-| 0.0018 | 2.3025 | 2200 | 0.9240 | -2.7188 | -2.9531 | 0.5039 | 0.2217 | -584.0 | -592.0 | -12.9375 | -13.125 |
-| 0.0032 | 2.4071 | 2300 | 0.9312 | -2.7969 | -3.0156 | 0.5039 | 0.2256 | -592.0 | -596.0 | -12.6875 | -12.875 |
-| 0.0026 | 2.5118 | 2400 | 0.9400 | -2.8906 | -3.125 | 0.5020 | 0.2344 | -600.0 | -608.0 | -12.375 | -12.625 |
-| 0.0015 | 2.6164 | 2500 | 0.9388 | -2.9219 | -3.1719 | 0.5059 | 0.2344 | -604.0 | -612.0 | -12.0 | -12.25 |
-| 0.0013 | 2.7211 | 2600 | 0.9397 | -2.9375 | -3.1719 | 0.5039 | 0.2354 | -604.0 | -612.0 | -12.1875 | -12.375 |
-| 0.0017 | 2.8257 | 2700 | 0.9347 | -2.9219 | -3.1562 | 0.5059 | 0.2344 | -604.0 | -612.0 | -12.0 | -12.25 |
-| 0.0018 | 2.9304 | 2800 | 0.9338 | -2.9219 | -3.1562 | 0.5078 | 0.2363 | -604.0 | -612.0 | -12.0 | -12.1875 |
+| 0.6244 | 0.1047 | 100 | 0.6786 | -0.5742 | -0.6484 | 0.5547 | 0.0737 | -354.0 | -376.0 | -12.3125 | -12.5625 |
+| 0.6189 | 0.2093 | 200 | 0.6845 | -0.5234 | -0.6055 | 0.5664 | 0.0825 | -350.0 | -370.0 | -10.375 | -10.875 |
+| 0.615 | 0.3140 | 300 | 0.7258 | -0.8516 | -0.9492 | 0.5020 | 0.0996 | -384.0 | -404.0 | -9.0 | -9.5 |
+| 0.6363 | 0.4186 | 400 | 0.7065 | -1.7891 | -1.9375 | 0.5352 | 0.1436 | -482.0 | -498.0 | -10.9375 | -11.625 |
+| 0.6154 | 0.5233 | 500 | 0.7352 | -1.8125 | -1.9062 | 0.4961 | 0.0942 | -480.0 | -500.0 | -9.4375 | -10.25 |
+| 0.6475 | 0.6279 | 600 | 0.7276 | -1.1328 | -1.1406 | 0.5020 | 0.0104 | -402.0 | -432.0 | -14.25 | -14.5 |
+| 0.6127 | 0.7326 | 700 | 0.7310 | -1.25 | -1.2578 | 0.4883 | 0.0139 | -414.0 | -442.0 | -14.0 | -14.125 |
+| 0.6244 | 0.8373 | 800 | 0.6975 | -1.4141 | -1.5391 | 0.5449 | 0.1245 | -442.0 | -460.0 | -13.4375 | -13.6875 |
+| 0.5981 | 0.9419 | 900 | 0.7064 | -1.7266 | -1.8828 | 0.5293 | 0.1572 | -476.0 | -490.0 | -13.25 | -13.6875 |
+| 0.1623 | 1.0466 | 1000 | 0.8445 | -3.3125 | -3.5625 | 0.5234 | 0.2383 | -644.0 | -648.0 | -8.75 | -9.9375 |
+| 0.174 | 1.1512 | 1100 | 0.9356 | -3.4688 | -3.6719 | 0.5312 | 0.1973 | -656.0 | -664.0 | -9.5625 | -10.5625 |
+| 0.1635 | 1.2559 | 1200 | 0.8848 | -3.4219 | -3.7188 | 0.5430 | 0.2969 | -660.0 | -660.0 | -7.5312 | -8.6875 |
+| 0.1524 | 1.3605 | 1300 | 0.8919 | -3.4219 | -3.7188 | 0.5449 | 0.2988 | -660.0 | -660.0 | -6.0938 | -7.4688 |
+| 0.1368 | 1.4652 | 1400 | 0.9149 | -3.4844 | -3.6875 | 0.5352 | 0.2021 | -660.0 | -668.0 | -6.7188 | -8.0625 |
+| 0.1442 | 1.5699 | 1500 | 0.9040 | -3.7812 | -4.0 | 0.5371 | 0.2275 | -688.0 | -696.0 | -7.0625 | -8.1875 |
+| 0.1444 | 1.6745 | 1600 | 0.8945 | -3.6406 | -3.8906 | 0.5449 | 0.25 | -680.0 | -684.0 | -6.4375 | -7.6562 |
+| 0.1499 | 1.7792 | 1700 | 0.8198 | -4.125 | -4.4375 | 0.5449 | 0.3086 | -732.0 | -732.0 | -5.5938 | -6.8125 |
+| 0.132 | 1.8838 | 1800 | 0.9142 | -4.1875 | -4.5 | 0.5371 | 0.3008 | -740.0 | -740.0 | -5.625 | -6.9062 |
+| 0.1262 | 1.9885 | 1900 | 0.8738 | -4.1875 | -4.5 | 0.5371 | 0.3184 | -740.0 | -736.0 | -5.3438 | -6.7188 |
+| 0.0202 | 2.0931 | 2000 | 1.0958 | -5.5938 | -5.9062 | 0.5234 | 0.3242 | -880.0 | -880.0 | -4.3125 | -5.6875 |
+| 0.0169 | 2.1978 | 2100 | 1.1553 | -6.125 | -6.5 | 0.5273 | 0.3770 | -940.0 | -932.0 | -4.1562 | -5.5312 |
+| 0.0181 | 2.3025 | 2200 | 1.1497 | -6.125 | -6.5 | 0.5293 | 0.3887 | -940.0 | -932.0 | -4.3438 | -5.7188 |
+| 0.0154 | 2.4071 | 2300 | 1.2094 | -6.3125 | -6.6875 | 0.5215 | 0.3730 | -956.0 | -948.0 | -3.9531 | -5.3438 |
+| 0.0165 | 2.5118 | 2400 | 1.2153 | -6.6875 | -7.0625 | 0.5352 | 0.3848 | -996.0 | -984.0 | -3.3594 | -4.75 |
+| 0.0165 | 2.6164 | 2500 | 1.2408 | -6.4375 | -6.8125 | 0.5215 | 0.3691 | -968.0 | -960.0 | -3.8594 | -5.25 |
+| 0.0143 | 2.7211 | 2600 | 1.2348 | -6.6875 | -7.0625 | 0.5176 | 0.3906 | -996.0 | -984.0 | -3.625 | -5.0 |
+| 0.0111 | 2.8257 | 2700 | 1.2130 | -6.5625 | -6.9375 | 0.5195 | 0.3965 | -984.0 | -972.0 | -3.5938 | -5.0 |
+| 0.0222 | 2.9304 | 2800 | 1.2107 | -6.5312 | -6.9375 | 0.5176 | 0.3906 | -980.0 | -972.0 | -3.5781 | -4.9688 |
 
 
 ### Framework versions
 
 - Transformers 4.44.2
 - Pytorch 2.3.0
-- Datasets 2.21.0
+- Datasets 3.0.0
 - Tokenizers 0.19.1
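
The Rewards/* columns in the README diff above are the quantities TRL's DPO trainer logs: each reward is the DPO temperature beta times the policy-minus-reference log-probability of the whole response, the margin is chosen minus rejected, and the accuracy is the fraction of pairs where the chosen reward wins. A minimal pure-Python sketch of those definitions (beta=0.1 is an assumed value, not taken from this card; the function name is ours):

```python
import math

def dpo_stats(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Recompute DPO logging quantities from summed per-sequence log-probs.

    Each list holds log p(response | prompt) under the trained policy (pi_*)
    or the frozen reference model (ref_*).
    """
    chosen_r = [beta * (p - r) for p, r in zip(pi_chosen, ref_chosen)]        # Rewards/chosen
    rejected_r = [beta * (p - r) for p, r in zip(pi_rejected, ref_rejected)]  # Rewards/rejected
    margins = [c - r for c, r in zip(chosen_r, rejected_r)]                   # Rewards/margins
    # DPO loss per pair: -log sigmoid(margin), written stably as log1p(exp(-margin)).
    losses = [math.log1p(math.exp(-m)) for m in margins]
    n = len(margins)
    return {
        "loss": sum(losses) / n,
        "rewards/chosen": sum(chosen_r) / n,
        "rewards/rejected": sum(rejected_r) / n,
        "rewards/margins": sum(margins) / n,
        "rewards/accuracies": sum(m > 0 for m in margins) / n,
    }
```

Under this reading, the large negative Logps and Rewards at the end of training say the policy has drifted far from the reference on both chosen and rejected responses, while the ~0.52 accuracy says the margin only barely favors the chosen one.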
all_results.json CHANGED
@@ -1,9 +1,22 @@
 {
     "epoch": 2.998430141287284,
+    "eval_logits/chosen": -12.1875,
+    "eval_logits/rejected": -12.0,
+    "eval_logps/chosen": -612.0,
+    "eval_logps/rejected": -604.0,
+    "eval_loss": 0.9336875081062317,
+    "eval_rewards/accuracies": 0.5078125,
+    "eval_rewards/chosen": -2.921875,
+    "eval_rewards/margins": 0.234375,
+    "eval_rewards/rejected": -3.15625,
+    "eval_runtime": 46.4634,
+    "eval_samples": 2000,
+    "eval_samples_per_second": 43.045,
+    "eval_steps_per_second": 0.689,
     "total_flos": 0.0,
-    "train_loss": 0.10811326470304065,
-    "train_runtime": 12380.2108,
-    "train_samples": 61121,
-    "train_samples_per_second": 14.811,
-    "train_steps_per_second": 0.231
+    "train_loss": 0.2659621652300237,
+    "train_runtime": 12443.8108,
+    "train_samples": 61122,
+    "train_samples_per_second": 14.736,
+    "train_steps_per_second": 0.23
 }
eval_results.json ADDED
@@ -0,0 +1,16 @@
+{
+    "epoch": 2.998430141287284,
+    "eval_logits/chosen": -12.1875,
+    "eval_logits/rejected": -12.0,
+    "eval_logps/chosen": -612.0,
+    "eval_logps/rejected": -604.0,
+    "eval_loss": 0.9336875081062317,
+    "eval_rewards/accuracies": 0.5078125,
+    "eval_rewards/chosen": -2.921875,
+    "eval_rewards/margins": 0.234375,
+    "eval_rewards/rejected": -3.15625,
+    "eval_runtime": 46.4634,
+    "eval_samples": 2000,
+    "eval_samples_per_second": 43.045,
+    "eval_steps_per_second": 0.689
+}
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b5dc71fb6d77dc6f450d3966d23dd98cc8585e7354f2ca8a430b8c6cd7252d99
+oid sha256:8ef8ccbfce2ef221b959cf5c79acaabb6bef5abe9b730c3ba679e99f50ec686f
 size 2159808696
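
The model.safetensors diff above changes only the Git LFS pointer file, not a rendered weight diff: the repository stores a three-line pointer (spec version, sha256 object id, byte size), and here the oid changes while the size stays at 2,159,808,696 bytes. A small sketch of reading such a pointer (the helper name is ours, not part of any LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, oid = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],  # the LFS spec URL
        "algo": algo,                  # e.g. "sha256"
        "oid": oid,                    # content hash of the real file
        "size": int(fields["size"]),   # size of the real file in bytes
    }

# The new pointer from the diff above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:8ef8ccbfce2ef221b959cf5c79acaabb6bef5abe9b730c3ba679e99f50ec686f
size 2159808696
"""
```

Since the size is unchanged, the commit replaced the weights with a same-shape checkpoint rather than altering the model architecture.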
runs/Sep10_04-21-18_xe8545-a100-31/events.out.tfevents.1725948763.xe8545-a100-31.65465.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3947fac1ea7f7662ae5268bfc4893a7b9d5577e592613635186e8165dd2ef86
+size 828
runs/Sep22_15-09-56_xe8545-a100-05/events.out.tfevents.1727011921.xe8545-a100-05.375983.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eaf76d5eb3810892b7cf18fbff72532717221219ba08b18f0d44923c91479ba3
+size 225666
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 2.998430141287284,
     "total_flos": 0.0,
-    "train_loss": 0.10811326470304065,
-    "train_runtime": 12380.2108,
-    "train_samples": 61121,
-    "train_samples_per_second": 14.811,
-    "train_steps_per_second": 0.231
+    "train_loss": 0.2659621652300237,
+    "train_runtime": 12443.8108,
+    "train_samples": 61122,
+    "train_samples_per_second": 14.736,
+    "train_steps_per_second": 0.23
 }
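
As a sanity check, the throughput fields in the updated train_results.json are mutually consistent under the usual Trainer convention of dividing total samples seen by wall-clock time, assuming the run was configured for 3 epochs (the logged epoch, 2.9984, is the fractional position of the last optimizer step):

```python
# Figures from the updated train_results.json above.
train_samples = 61122
train_runtime = 12443.8108      # seconds
num_train_epochs = 3            # assumption: logged epoch 2.9984 ~= 3 full passes

# Trainer-style throughput: total samples processed / wall-clock seconds.
samples_per_second = train_samples * num_train_epochs / train_runtime
```

This reproduces the logged 14.736 samples/s to within rounding, which supports reading `train_samples` as the per-epoch dataset size rather than the total processed count.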
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b45a03b0c71f720ed2a1a77fbd507dc7043368b732227967efe9680cd6a34a62
-size 7544
+oid sha256:877530b6916f042fa46c0a12aaa9392958d02ec5697fa2dd66ca1192cefd9d88
+size 7608