Icelandic ConvBERT-Small

This model was pretrained with default settings on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens. The model has roughly 21.5M parameters and uses a Unigram tokenizer with a vocabulary size of 96,000.
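As a minimal usage sketch, the checkpoint can be loaded with the Hugging Face transformers fill-mask pipeline. The repository ID below is a placeholder (not confirmed by this card), and the example simply assumes the tokenizer defines a standard mask token:

```python
from transformers import pipeline

# Hypothetical repository ID -- replace with the actual Hub ID for this model.
model_id = "your-namespace/icelandic-convbert-small"

# The fill-mask pipeline loads the ConvBERT checkpoint together with its Unigram tokenizer.
fill_mask = pipeline("fill-mask", model=model_id)

# Use the tokenizer's own mask token so the example works regardless of its exact string.
mask = fill_mask.tokenizer.mask_token

# "Reykjavík er höfuðborg [MASK]." -- "Reykjavík is the capital of [MASK]."
for prediction in fill_mask(f"Reykjavík er höfuðborg {mask}."):
    print(prediction["token_str"], round(prediction["score"], 3))
```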

Acknowledgments

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).

This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
