bilkultheek committed
Commit 690b7e0 · verified · 1 Parent(s): 55160b1

Model save

Files changed (1)
1. README.md +3 -2
README.md CHANGED
@@ -39,12 +39,13 @@ The following hyperparameters were used during training:
  - train_batch_size: 32
  - eval_batch_size: 32
  - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 128
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 256
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.03
  - num_epochs: 5
+ - mixed_precision_training: Native AMP

  ### Training results
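
For context, here is a minimal sketch of how the updated hyperparameters map onto `transformers.TrainingArguments`. This is not the repository's actual training script: `output_dir` is a hypothetical placeholder, the AdamW variant and `fp16` flag are assumptions (the diff only says "Adam" and "Native AMP"), and the learning rate is omitted because it is not part of this hunk. With the new settings, the effective batch size is per_device_train_batch_size × gradient_accumulation_steps = 32 × 8 = 256 on a single device, matching the new total_train_batch_size (the previous 32 × 4 = 128 matches the removed value).

```python
# Minimal sketch, assuming a standard transformers Trainer setup on one device.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",              # hypothetical path, not from the repo
    per_device_train_batch_size=32,    # train_batch_size: 32
    per_device_eval_batch_size=32,     # eval_batch_size: 32
    seed=42,                           # seed: 42
    gradient_accumulation_steps=8,     # updated from 4 to 8
    adam_beta1=0.9,                    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
    lr_scheduler_type="cosine",        # cosine schedule
    warmup_ratio=0.03,                 # lr_scheduler_warmup_ratio: 0.03
    num_train_epochs=5,                # num_epochs: 5
    fp16=True,                         # "Native AMP"; bf16 is the other common choice
)

# Effective train batch size = 32 * 8 = 256 (single device),
# i.e. the new total_train_batch_size reported above.
```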