Commit f970042 (parent: 87def6d) by Andrey Kutuzov: Formatting

README.md CHANGED
@@ -9,17 +9,20 @@ thumbnail: https://raw.githubusercontent.com/ltgoslo/NorBERT/main/Norbert.png
---
## Quickstart

**Release 2.0** (February 7, 2022)

Trained on a very large corpus of Norwegian (C4 + NCC, about 15 billion word tokens).
Features a vocabulary of 50 000 words and was trained using Whole Word Masking.

Download the model here:
* Cased Norwegian BERT Base 2.0 (NorBERT 2): [221.zip](http://vectors.nlpl.eu/repository/20/221.zip)

More about NorBERT training corpora, training procedure and evaluation benchmarks: http://norlm.nlpl.eu/

Associated code: https://github.com/ltgoslo/NorBERT

Check this paper for more details:

_Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, Stephan Oepen. [Large-Scale Contextualised Language Modelling for Norwegian](https://aclanthology.org/2021.nodalida-main.4/), NoDaLiDa'21 (2021)_

NorBERT was trained as part of NorLM, a joint initiative of the projects [EOSC-Nordic](https://www.eosc-nordic.eu/) (European Open Science Cloud),
coordinated by the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo.