tvergho committed
Commit 243292a · 1 Parent(s): f59e37a

update model card README.md

Files changed (1)
  1. README.md +12 -10
README.md CHANGED
@@ -12,9 +12,9 @@ should probably proofread and complete it, then remove this comment. -->

  # highlight_model

- This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
+ This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 3.4777
+ - Loss: 2.8965

  ## Model description

@@ -37,6 +37,8 @@ The following hyperparameters were used during training:
  - train_batch_size: 2
  - eval_batch_size: 2
  - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 8
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 4
@@ -46,15 +48,15 @@ The following hyperparameters were used during training:

  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | No log        | 1.0   | 70   | 3.1346          |
- | No log        | 2.0   | 140  | 3.1047          |
- | No log        | 3.0   | 210  | 3.2720          |
- | No log        | 4.0   | 280  | 3.4777          |
+ | No log        | 0.98  | 22   | 3.0073          |
+ | No log        | 1.98  | 44   | 2.7684          |
+ | No log        | 2.98  | 66   | 2.8394          |
+ | No log        | 3.98  | 88   | 2.8965          |


  ### Framework versions

- - Transformers 4.26.0
- - Pytorch 1.13.1+cu116
- - Datasets 2.9.0
- - Tokenizers 0.13.2
+ - Transformers 4.20.1
+ - Pytorch 1.11.0
+ - Datasets 2.1.0
+ - Tokenizers 0.12.1
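
For reference, the updated hyperparameters map onto a Hugging Face `Trainer` configuration roughly as sketched below. This is an illustrative sketch only: the dataset, output directory, and evaluation strategy are not specified in the card and are assumptions; only the values visible in the diff (batch sizes, gradient accumulation, seed, Adam betas/epsilon, linear schedule, 4 epochs) are taken from it.

```python
# Minimal sketch, assuming the standard Trainer API from transformers 4.20.1.
# Values marked with the card's own names come from the diff above; everything
# else (output_dir, dataset, evaluation strategy) is a placeholder assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base_model = "gpt2-medium"  # base checkpoint named in the updated card
tokenizer = AutoTokenizer.from_pretrained(base_model)  # would tokenize the (unnamed) dataset
model = AutoModelForCausalLM.from_pretrained(base_model)

training_args = TrainingArguments(
    output_dir="highlight_model",     # assumed output directory
    per_device_train_batch_size=2,    # train_batch_size: 2
    per_device_eval_batch_size=2,     # eval_batch_size: 2
    gradient_accumulation_steps=4,    # gradient_accumulation_steps: 4
    num_train_epochs=4,               # num_epochs: 4
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon=1e-08
    seed=42,                          # seed: 42
    evaluation_strategy="epoch",      # assumption: the card reports one eval per epoch
    # learning_rate is not visible in this hunk, so the Trainer default is kept here.
)

# The training/eval datasets are not named in the card, so they are left as placeholders:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```

Note that the `total_train_batch_size` of 8 reported in the card is simply the per-device batch size (2) multiplied by the gradient-accumulation steps (4) on a single device.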