This is a **RoBERTa-base** model trained from scratch on Spanish text.

The training dataset is mC4 (1), randomly subsampled to a total of about 50 million documents.
This model starts from the checkpoint trained with sequence length 128 (2) and continues training for 25,000 steps with sequence length 512.

(1) https://huggingface.co/datasets/bertin-project/mc4-es-sampled  
(2) https://huggingface.co/bertin-project/bertin-base-random
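As a minimal usage sketch, the model can be queried for masked-token predictions with the `transformers` library. The repository ID below is an assumption based on the BERTIN project's naming; substitute this card's actual model ID if it differs.

```python
# Minimal fill-mask sketch with Hugging Face transformers.
# NOTE: the model ID is an assumption; replace it with this
# card's actual repository ID if it differs.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="bertin-project/bertin-base-random-exp-512seqlen",
)

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("La capital de España es <mask>."):
    print(prediction["token_str"], prediction["score"])
```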