GorkaUrbizu committed on
Commit
638ee12
1 Parent(s): 8fa886e

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ language:
 BERT medium (cased) model trained on a subset of 125M tokens of cc100-Swahili for our work [Scaling Laws for BERT in Low-Resource Settings]() at ACL2023 Findings.
 
 The model has 51M parameters (8L), and a vocab size of 50K.
-It was trained for 500K steps with a sequence length of 512 tokens.
+It was trained for 500K steps with a sequence length of 512 tokens and batch-size of 256.
 
 Results
 -----------
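
For reference, the hyperparameters stated in the card (8 layers, 50K vocab, 512-token sequences, ~51M parameters) are consistent with the standard BERT-medium shape. The sketch below builds a matching `transformers` `BertConfig` to sanity-check the parameter count; the hidden size, head count, and intermediate size (512 / 8 / 2048) are assumptions based on the usual BERT-medium recipe, not values taken from this commit.

```python
# Illustrative sketch only: a BertConfig matching the card's stated shape.
# Hidden size / heads / intermediate size are assumed (typical BERT-medium);
# vocab size, layer count, and sequence length come from the card.
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=50_000,            # vocab size of 50K (from the card)
    num_hidden_layers=8,          # 8L (from the card)
    hidden_size=512,              # assumed: standard BERT-medium width
    num_attention_heads=8,        # assumed: standard BERT-medium heads
    intermediate_size=2048,       # assumed: 4 * hidden_size
    max_position_embeddings=512,  # sequence length of 512 tokens (from the card)
)

model = BertForMaskedLM(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # should land near the 51M the card reports
```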