The learning rate was not displayed as it should be.
#3 opened by kapllan

README.md CHANGED
@@ -109,7 +109,7 @@ For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?u
 - batche size: 512 samples
 - Number of steps: 1M/500K for the base/large model
 - Warm-up steps for the first 5\% of the total training steps
-- Learning rate: (linearly increasing up to)
+- Learning rate: (linearly increasing up to) 1e-4
 - Word masking: increased 20/30\% masking rate for base/large models respectively
 
 ## Evaluation
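For context on what the added value means in practice, below is a minimal sketch of the schedule the bullet list describes: a linear warm-up over the first 5% of training steps up to a peak learning rate of 1e-4. Only the warm-up fraction, the peak rate, and the 1M/500K step counts come from the README; the optimizer (AdamW), the helper `transformers.get_linear_schedule_with_warmup`, and the linear decay after warm-up are assumptions made here for illustration, not the authors' confirmed training code.

```python
# Illustrative sketch only: AdamW and the post-warm-up linear decay are
# assumptions; the README states only the 5% warm-up and the 1e-4 peak LR.
import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 1_000_000                  # 1M steps (base model); 500K for large
warmup_steps = int(0.05 * total_steps)   # warm-up over the first 5% of steps
peak_lr = 1e-4                           # LR linearly increased up to 1e-4

# A dummy parameter stands in for the model weights.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=peak_lr)

# Linearly increases the LR from 0 to peak_lr over warmup_steps, then
# (by this helper's definition) linearly decays it back to 0.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)

# Halfway through warm-up (step 25,000) the LR is about 5e-5.
for _ in range(25_000):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # ~[5e-05]
```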