coppercitylabs committed · Commit 627d942 · Parent(s): 81c16d6

Add citation info

README.md CHANGED
@@ -15,7 +15,7 @@ datasets:
 Pretrained model on Uzbek language (Cyrillic script) using masked
 language modeling and next sentence prediction objectives.
 
-
+## How to use
 
 You can use this model directly with a pipeline for masked language modeling:
 
@@ -61,3 +61,15 @@ You can use this model directly with a pipeline for masked language modeling:
 ## Training data
 
 UzBERT model was pretrained on ~625K news articles (~142M words).
+
+## BibTeX entry and citation info
+```bibtex
+@misc{mansurov2021uzbert,
+      title={{UzBERT: pretraining a BERT model for Uzbek}},
+      author={B. Mansurov and A. Mansurov},
+      year={2021},
+      eprint={2108.09814},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```