MasterAlex69 committed on
Commit 72ddb7a · verified · 1 Parent(s): 31d96ea

Model save

Files changed (1)
1. README.md +7 -7
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: mit
-base_model: gpt2
+base_model: openai-community/gpt2
 tags:
 - generated_from_trainer
 model-index:
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # gpt2_edline
 
-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
+This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
 
 ## Model description
 
@@ -33,13 +33,13 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
-- train_batch_size: 8
+- learning_rate: 1e-07
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 4
+- training_steps: 50000
 
 ### Training results
 
@@ -47,6 +47,6 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.46.2
+- Transformers 4.46.3
 - Pytorch 2.5.1+cu121
 - Tokenizers 0.20.3
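The updated hyperparameter list maps directly onto `transformers.TrainingArguments`. Below is a minimal sketch of a training setup consistent with the new values (learning_rate 1e-07, train_batch_size 4, adamw_hf, linear scheduler, 50000 steps); the card does not name the dataset or include the training script, so data loading is left as a placeholder and this is an illustration rather than the author's actual code.

```python
# Minimal sketch only: the card does not include the training script, and the
# dataset/tokenization are placeholders, not the author's actual setup.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "openai-community/gpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

args = TrainingArguments(
    output_dir="gpt2_edline",
    learning_rate=1e-7,             # updated in this commit (was 1e-05)
    per_device_train_batch_size=4,  # updated in this commit (was 8)
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_hf",               # optimizer named in the updated card
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=50_000,               # training_steps replaces num_epochs=4
)

# The train/eval datasets are unknown ("an unknown dataset" in the card);
# tokenized datasets would be passed here before calling train().
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```

The shift from `num_epochs: 4` to `training_steps: 50000`, together with the much lower learning rate, suggests a step-bounded rather than epoch-bounded run, though the card itself does not explain the change.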
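For completeness, a hypothetical inference snippet follows. The repo id `MasterAlex69/gpt2_edline` is inferred from the commit author and model name and is not confirmed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MasterAlex69/gpt2_edline"  # assumed repo id, not confirmed by the card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation from a prompt.
inputs = tokenizer("Example prompt:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```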