---
language:
- "ja"
tags:
- "japanese"
- "pos"
- "dependency-parsing"
base_model: goldfish-models/jpn_jpan_100mb
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
---
|
|
|
# goldfish-gpt2-japanese-100mb-ud-causal
|
|
|
## Model Description
|
|
|
This is a GPT-2 model for POS-tagging and dependency-parsing, derived from [jpn_jpan_100mb](https://huggingface.co/goldfish-models/jpn_jpan_100mb) and fine-tuned on [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW).
|
|
|
## How to Use
|
|
|
```py
from transformers import pipeline
nlp = pipeline("universal-dependencies", "KoichiYasuoka/goldfish-gpt2-japanese-100mb-ud-causal", trust_remote_code=True)
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
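
The pipeline above presumably returns its analysis as CoNLL-U text, as the author's other `*-ud-causal` models do. The sketch below shows how such output could be post-processed into plain tuples; the `sample` string is a hand-made illustration, not actual model output, and `parse_conllu` is a hypothetical helper, not part of the pipeline.

```python
def parse_conllu(text):
    """Parse CoNLL-U lines into (id, form, upos, head, deprel) tuples."""
    rows = []
    for line in text.splitlines():
        cols = line.split("\t")
        # Skip comments, multiword-token ranges, and empty nodes,
        # which do not have a plain integer ID in the first column.
        if len(cols) != 10 or not cols[0].isdigit():
            continue
        # CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
        rows.append((int(cols[0]), cols[1], cols[3], int(cols[6]), cols[7]))
    return rows

# Hand-made two-token sample in CoNLL-U format (for illustration only)
sample = "1\t全\t全\tNOUN\t_\t_\t2\tcompound\t_\t_\n2\t学年\t学年\tNOUN\t_\t_\t0\troot\t_\t_\n"
for tid, form, upos, head, deprel in parse_conllu(sample):
    print(tid, form, upos, head, deprel)
```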
|
|
|
## Reference
|
|
|
Koichi Yasuoka: [NINJAL Long-Unit-Word Dependency Parsing with GPT-type Language Models](http://id.nii.ac.jp/1001/00241391/) (in Japanese), Proceedings of the Computers and the Humanities Symposium "Jinmonkon 2024" (December 2024), pp.83-90.
|
|