Update README.md
README.md CHANGED
@@ -20,12 +20,12 @@ Language(s) (NLP): Egyptian Arabic
 Finetuned from model: Modern Bert Base

 It achieves the following results on the evaluation set:
-- eval_loss: 2.
+- eval_loss: 2.2641
 - eval_runtime: 134.9987
 - eval_samples_per_second: 68.267
 - eval_steps_per_second: 8.533
-- epoch:
-- step:
+- epoch: 4
+- step: 190000

 ## Training procedure

@@ -39,7 +39,7 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs:
+- num_epochs: 4

 ### Framework versions
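For readers reproducing the setup, the hyperparameters listed in the second hunk map naturally onto `transformers.TrainingArguments`. The sketch below is not the author's training script; only the values shown in the diff are filled in, and `output_dir` (plus anything not listed, such as learning rate or batch sizes) is a placeholder assumption:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters expressed as TrainingArguments.
# Values not stated in the card are placeholders or left at their defaults.
training_args = TrainingArguments(
    output_dir="modernbert-egyptian-arabic",  # placeholder, not the actual repo name
    optim="adamw_torch",        # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,             # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=500,           # lr_scheduler_warmup_steps
    num_train_epochs=4,         # num_epochs
)
```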
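The throughput figures added in the first hunk can also be sanity-checked. Assuming they come from a standard `Trainer.evaluate()` run, the derived eval set size and batch size below are inferences, not values stated on the card:

```python
# Back-of-the-envelope check of the reported evaluation numbers.
eval_runtime = 134.9987             # seconds
eval_samples_per_second = 68.267
eval_steps_per_second = 8.533

eval_samples = eval_runtime * eval_samples_per_second              # ~9216 samples
eval_batch_size = eval_samples_per_second / eval_steps_per_second  # ~8.0

print(round(eval_samples), round(eval_batch_size))  # 9216 8
```

In other words, the evaluation set appears to contain roughly 9,216 examples processed with an effective eval batch size of 8, though neither figure is stated explicitly in the card.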