Update README.md
README.md CHANGED
@@ -23,7 +23,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 (Japanese caption : 日本語の固有表現抽出のモデル)
 
-This model is a fine-tuned
+This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) (a pre-trained cross-lingual ```RobertaModel```) trained for named entity recognition (NER) token classification.
+
+The model is fine-tuned on the NER dataset provided by Stockmark Inc., in which the data is collected from Japanese Wikipedia articles.<br>
 See [here](https://github.com/stockmarkteam/ner-wikipedia-dataset) for the license of this dataset.
 
 Each token is labeled by :
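For readers who want to try the resulting model, below is a minimal usage sketch showing how a fine-tuned XLM-RoBERTa token-classification checkpoint is typically loaded with the Hugging Face Transformers pipeline. The repository id `your-username/xlm-roberta-ner-ja` is a placeholder (the diff above does not name the published repo), and the example sentence is illustrative only.

```python
# Minimal sketch: load a Japanese NER model fine-tuned from xlm-roberta-base.
# NOTE: "your-username/xlm-roberta-ner-ja" is a hypothetical repo id, not the actual model name.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

repo_id = "your-username/xlm-roberta-ner-ja"  # replace with the published model id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

# aggregation_strategy="simple" merges subword pieces back into whole entity spans
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("株式会社ストックマークは東京に本社を置く企業です。"))
# Returns a list of dicts with keys such as "entity_group", "word", "score", "start", "end".
```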