Update README.md
README.md (changed):

@@ -7,6 +7,6 @@ We pretrained a RoBERTa-based Japanese masked language model on paper abstracts
 The vocabulary consists of 32000 tokens including subwords induced by the unigram language model of sentencepiece.
 
 ---
-license: apache-2.0
+license: apache-2.0 <br>
 language:ja
 ---