---
language:
- ja
tags:
- japanese
- wikipedia
- cc100
- oscar
- pos
- dependency-parsing
base_model: ku-nlp/deberta-v2-large-japanese
datasets:
- universal_dependencies
license: cc-by-sa-4.0
pipeline_tag: token-classification
---
# deberta-large-japanese-juman-ud-goeswith

## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia, CC-100, and OSCAR texts, then fine-tuned for POS-tagging and dependency-parsing (using the `goeswith` relation for subwords). It is derived from [deberta-v2-large-japanese](https://huggingface.co/ku-nlp/deberta-v2-large-japanese).
## How to Use

```python
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-large-japanese-juman-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```

`fugashi` is required.
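The pipeline emits its analysis in CoNLL-U format (one token per line with tab-separated columns: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC). As a minimal sketch of consuming such output, the snippet below parses a CoNLL-U string into `(form, upos, head, deprel)` tuples; the sample sentence is hand-written for illustration, not actual output of this model.

```python
# Hand-written CoNLL-U sample for illustration; NOT actual model output.
rows_raw = [
    ["1", "猫", "猫", "NOUN", "_", "_", "3", "nsubj", "_", "_"],
    ["2", "が", "が", "ADP", "_", "_", "1", "case", "_", "_"],
    ["3", "寝る", "寝る", "VERB", "_", "_", "0", "root", "_", "_"],
]
sample = "# text = 猫が寝る\n" + "\n".join("\t".join(r) for r in rows_raw) + "\n"

def parse_conllu(text):
    """Extract (form, upos, head, deprel) from each CoNLL-U token line."""
    tokens = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip comment lines and blank sentence separators
        cols = line.split("\t")
        # A token line has 10 columns; skip multiword-token ranges like "1-2"
        if len(cols) == 10 and cols[0].isdigit():
            tokens.append((cols[1], cols[3], int(cols[6]), cols[7]))
    return tokens

print(parse_conllu(sample))
```

HEAD is a 1-based index into the sentence (0 marks the root), so the tuples can be turned directly into dependency edges.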