satyanshu404 committed on
Commit 5da6528 (verified)
1 Parent(s): 2f187d4

End of training

Files changed (1)
  1. README.md +35 -7
README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 4.5302
+ - Loss: nan
 
  ## Model description
 
@@ -34,26 +34,54 @@
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 2e-05
+ - learning_rate: 2e-07
  - train_batch_size: 1
  - eval_batch_size: 1
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 3
+ - num_epochs: 30
+ - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | No log | 1.0 | 65 | 18.5939 |
- | No log | 2.0 | 130 | 10.9423 |
- | No log | 3.0 | 195 | 4.5302 |
+ | No log | 1.0 | 65 | nan |
+ | No log | 2.0 | 130 | nan |
+ | No log | 3.0 | 195 | nan |
+ | No log | 4.0 | 260 | nan |
+ | No log | 5.0 | 325 | nan |
+ | No log | 6.0 | 390 | nan |
+ | No log | 7.0 | 455 | nan |
+ | 22.0508 | 8.0 | 520 | nan |
+ | 22.0508 | 9.0 | 585 | nan |
+ | 22.0508 | 10.0 | 650 | nan |
+ | 22.0508 | 11.0 | 715 | nan |
+ | 22.0508 | 12.0 | 780 | nan |
+ | 22.0508 | 13.0 | 845 | nan |
+ | 22.0508 | 14.0 | 910 | nan |
+ | 22.0508 | 15.0 | 975 | nan |
+ | 0.0 | 16.0 | 1040 | nan |
+ | 0.0 | 17.0 | 1105 | nan |
+ | 0.0 | 18.0 | 1170 | nan |
+ | 0.0 | 19.0 | 1235 | nan |
+ | 0.0 | 20.0 | 1300 | nan |
+ | 0.0 | 21.0 | 1365 | nan |
+ | 0.0 | 22.0 | 1430 | nan |
+ | 0.0 | 23.0 | 1495 | nan |
+ | 0.0 | 24.0 | 1560 | nan |
+ | 0.0 | 25.0 | 1625 | nan |
+ | 0.0 | 26.0 | 1690 | nan |
+ | 0.0 | 27.0 | 1755 | nan |
+ | 0.0 | 28.0 | 1820 | nan |
+ | 0.0 | 29.0 | 1885 | nan |
+ | 0.0 | 30.0 | 1950 | nan |
 
 
  ### Framework versions
 
  - Transformers 4.38.2
- - Pytorch 2.2.1+cu121
+ - Pytorch 2.2.2+cu121
  - Datasets 2.18.0
  - Tokenizers 0.15.2
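
For orientation only, here is a minimal sketch of `Seq2SeqTrainingArguments` consistent with the hyperparameters listed in the updated card (learning rate 2e-07, per-device batch size 1, Adam with betas=(0.9, 0.999) and epsilon=1e-08, a linear scheduler, 30 epochs, native AMP). The author's actual training script is not part of this commit; the `output_dir` value and the per-epoch evaluation strategy are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters stated in the model card,
# not the author's actual training script (which is not in this commit).
training_args = Seq2SeqTrainingArguments(
    output_dir="long-t5-tglobal-base-finetuned",  # placeholder output path
    learning_rate=2e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                    # assumes "Native AMP" refers to fp16 autocast
    evaluation_strategy="epoch",  # assumption, inferred from the per-epoch rows above
)
```

Thirty epochs at 65 optimizer steps per epoch gives the 1950-step schedule shown in the results table.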
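The card only states that this checkpoint is a fine-tune of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base); a loading and generation sketch with the standard Transformers seq2seq API might look like the following. The repository id and the input text are placeholders, since neither the fine-tuned model's repo name nor its task appears in this commit.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id -- the fine-tuned model's actual repository name
# is not stated in this commit.
model_id = "satyanshu404/long-t5-tglobal-base-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; LongT5 is intended for long input sequences.
long_document = "Replace this with the long input text the model was fine-tuned on."
inputs = tokenizer(long_document, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```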