Update README.md
README.md CHANGED
```diff
@@ -62,16 +62,16 @@ print(outputs[0]["generated_text"])
 ### Training hyper-parameters
 The following hyperparameters were used during training:
 
-learning_rate: 4.0e-5
-train_batch_size: 2
-seed: 42
-packing: false
-distributed_type: deepspeed-zero-3
-num_devices: 8
-gradient_accumulation_steps: 8
-total_train_batch_size: 16
-optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-lr_scheduler_type: cosine_with_min_lr
-min_lr_rate: 0.1
-lr_scheduler_warmup_ratio: 0.03
-num_epochs: 10.0
+- learning_rate: 4.0e-5
+- train_batch_size: 2
+- seed: 42
+- packing: false
+- distributed_type: deepspeed-zero-3
+- num_devices: 8
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 16
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: cosine_with_min_lr
+- min_lr_rate: 0.1
+- lr_scheduler_warmup_ratio: 0.03
+- num_epochs: 10.0
```
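The card lists `lr_scheduler_type: cosine_with_min_lr` with `lr_scheduler_warmup_ratio: 0.03` and `min_lr_rate: 0.1`, i.e. the learning rate warms up linearly, then decays along a cosine from 4.0e-5 down to a floor of 0.1 × 4.0e-5 = 4.0e-6 rather than to zero. A minimal sketch of that schedule shape (the function name `lr_at` and the `total_steps` argument are illustrative, not the trainer's actual API):

```python
import math

def lr_at(step, total_steps, base_lr=4.0e-5, warmup_ratio=0.03, min_lr_rate=0.1):
    """Cosine schedule with a minimum LR floor, as described by the card's
    hyper-parameters: linear warmup over the first warmup_ratio fraction of
    steps, then cosine decay from base_lr down to min_lr_rate * base_lr."""
    warmup_steps = int(total_steps * warmup_ratio)
    min_lr = base_lr * min_lr_rate
    if step < warmup_steps:
        # Linear warmup from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay: progress goes 0 -> 1 over the remaining steps,
    # so the LR goes base_lr -> min_lr (never below the floor).
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return min_lr + (base_lr - min_lr) * cosine
```

At the end of warmup the LR reaches the full 4.0e-5, and at the final step it settles at the 4.0e-6 floor instead of decaying to zero.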