---
license: apache-2.0
language:
- lv
---
Latvian BERT base model (cased)
A BERT model pretrained on Latvian text using the masked language modeling and next sentence prediction objectives. It was introduced in Znotins and Barzdins (2020), cited below, and first released in the accompanying repository.
This model is case-sensitive. It is primarily intended to be fine-tuned on downstream natural language understanding (NLU) tasks.
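A minimal sketch of loading LVBERT for fine-tuning with the Hugging Face Transformers library; the Hub model id used below is an assumption and should be replaced with the actual repository name if it differs:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "AiLab-IMCS-UL/lvbert"  # assumed Hub id; adjust if needed

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Attach a fresh classification head for a downstream NLU task,
# e.g. a two-class sentiment or acceptability task.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("Rīga ir Latvijas galvaspilsēta.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 2); the head is untrained until fine-tuning
```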
Developed at AiLab.lv
Training data
LVBERT was pretrained on texts from the Balanced Corpus of Modern Latvian, Latvian Wikipedia, the Corpus of News Portal Articles, and the Corpus of News Portal Comments, totalling about 500M tokens.
Tokenization
A SentencePiece model was trained on the pretraining corpus, producing a vocabulary of 32,000 tokens; it was then converted to the WordPiece format used by BERT.
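As an illustration, the converted tokenizer can be inspected directly; the Hub model id below is assumed, as above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AiLab-IMCS-UL/lvbert")  # assumed Hub id

# WordPiece subword segmentation of a Latvian phrase (case-sensitive).
print(tokenizer.tokenize("Latviešu valodas modelis"))
# The vocabulary size should be close to the reported 32,000 tokens.
print(tokenizer.vocab_size)
```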
Pretraining
We used the BERT-base configuration: 12 transformer layers, 768 hidden units, 12 attention heads, a maximum sequence length of 128 tokens, a mini-batch size of 128, and a 32,000-token vocabulary.
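For reference, a BertConfig mirroring the reported setup might look as follows; this is an illustrative sketch, not the exact configuration file shipped with the checkpoint:

```python
from transformers import BertConfig

config = BertConfig(
    vocab_size=32000,        # SentencePiece-derived WordPiece vocabulary
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,  # standard BERT-base feed-forward size
)
# Pretraining used sequences of 128 tokens and a mini-batch size of 128;
# these are training-time settings rather than part of the model config.
```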
Citation
Please cite this paper if you use LVBERT:
```bibtex
@inproceedings{Znotins-Barzdins:2020:BalticHLT,
  author    = {Arturs Znotins and Guntis Barzdins},
  title     = {{LVBERT: Transformer-Based Model for Latvian Language Understanding}},
  booktitle = {Human Language Technologies - The Baltic Perspective},
  series    = {Frontiers in Artificial Intelligence and Applications},
  volume    = {328},
  publisher = {IOS Press},
  year      = {2020},
  pages     = {111--115},
  doi       = {10.3233/FAIA200610},
  url       = {http://ebooks.iospress.nl/volumearticle/55531}
}
```