pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0-18.3M) | metadata (stringlengths 2-1.07B) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25) |
---|---|---|---|---|---|---|---|---|
token-classification | transformers |
# bert-base-japanese-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/bert-base-japanese-luw-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# bert-base-japanese-unidic-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required.
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/bert-base-japanese-unidic-luw-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# bert-base-japanese-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/bert-base-japanese-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# bert-base-thai-upos
## Model Description
This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-thai-upos")
```
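Both snippets above stop at loading the weights. As a quick check of the tagger output, a minimal sketch with the generic `transformers` token-classification pipeline (the sentence is this card's widget example; `aggregation_strategy="simple"` merges subword pieces back into words) could look like:
```py
from transformers import pipeline

# tag the widget sentence and print (word, UPOS) pairs
tagger=pipeline("token-classification",model="KoichiYasuoka/bert-base-thai-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("หลายหัวดีกว่าหัวเดียว")])
```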
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]} | KoichiYasuoka/bert-base-thai-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"th",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# bert-large-japanese-char-extended
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-large-japanese-char](https://huggingface.co/cl-tohoku/bert-large-japanese-char). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 (Jōyō and Jinmeiyō kanji) characters, for use with BertTokenizerFast. You can fine-tune `bert-large-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-large-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-large-japanese-wikipedia-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
```
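The snippet above only loads the weights; a minimal fill-mask sketch (the sentence is this card's widget example, with `[MASK]` as the mask token) might look like:
```py
from transformers import pipeline

# predict the masked character and print the top candidates with their scores
fmp=pipeline("fill-mask",model="KoichiYasuoka/bert-large-japanese-char-extended")
print([(r["token_str"],r["score"]) for r in fmp("酸素ボンベを充[MASK]する。")])
```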
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u9178\u7d20\u30dc\u30f3\u30d9\u3092\u5145[MASK]\u3059\u308b\u3002"}]} | KoichiYasuoka/bert-large-japanese-char-extended | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"japanese",
"masked-lm",
"wikipedia",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# bert-large-japanese-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/bert-large-japanese-luw-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# bert-large-japanese-unidic-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese](https://huggingface.co/cl-tohoku/bert-large-japanese). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required.
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/bert-large-japanese-unidic-luw-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# bert-large-japanese-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/bert-large-japanese-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# chinese-bert-wwm-ext-upos
## Model Description
This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/chinese-bert-wwm-ext-upos")
```
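To see the tags themselves, a minimal sketch with the token-classification pipeline (the Chinese sentence below is only an illustration, not taken from this card) could be:
```py
from transformers import pipeline

# tag an example sentence and print (word, UPOS) pairs
tagger=pipeline("token-classification",model="KoichiYasuoka/chinese-bert-wwm-ext-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("我把这本书看完了。")])
```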
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"} | KoichiYasuoka/chinese-bert-wwm-ext-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"chinese",
"pos",
"wikipedia",
"dependency-parsing",
"zh",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# chinese-roberta-base-upos
## Model Description
This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/chinese-roberta-base-upos")
```
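Neither snippet shows the output; as an illustrative sketch (the traditional-Chinese sentence is an arbitrary example, not from the card), the token-classification pipeline can be used directly:
```py
from transformers import pipeline

# the model handles both simplified and traditional characters; print (word, UPOS) pairs
tagger=pipeline("token-classification",model="KoichiYasuoka/chinese-roberta-base-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("我把這本書看完了。")])
```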
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"} | KoichiYasuoka/chinese-roberta-base-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"chinese",
"pos",
"wikipedia",
"dependency-parsing",
"zh",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# chinese-roberta-large-upos
## Model Description
This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/chinese-roberta-large-upos")
```
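For a quick look at the tagger output, a minimal sketch (the sentence is an arbitrary simplified-Chinese example, not from the card) would be:
```py
from transformers import pipeline

# tag an example sentence and print (word, UPOS) pairs
tagger=pipeline("token-classification",model="KoichiYasuoka/chinese-roberta-large-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("今天天气很好。")])
```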
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"} | KoichiYasuoka/chinese-roberta-large-upos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"chinese",
"pos",
"wikipedia",
"dependency-parsing",
"zh",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-base-english-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-base](https://huggingface.co/roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-english-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-english-upos")
```
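For a quick look at the tagger output, a minimal sketch (the English sentence is an arbitrary example; `aggregation_strategy="simple"` merges word pieces back into words) would be:
```py
from transformers import pipeline

# print (word, UPOS) pairs for an example sentence
tagger=pipeline("token-classification",model="KoichiYasuoka/roberta-base-english-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("Time flies like an arrow.")])
```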
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["english", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"} | KoichiYasuoka/roberta-base-english-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"english",
"pos",
"dependency-parsing",
"en",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-base-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-base-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char")
```
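Beyond loading, a minimal fill-mask sketch (the sentence is this card's widget example; with a character tokenizer the `[MASK]` covers a single character) could be:
```py
from transformers import pipeline

# predict the masked character and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-base-japanese-aozora-char")
print([(r["token_str"],r["score"]) for r in fmp("日本に着いたら[MASK]を訪ねなさい。")])
```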
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-base-japanese-aozora-char | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-base-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-base-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora")
```
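A minimal fill-mask sketch (the sentence is this card's widget example; here `[MASK]` stands for one long-unit-word) might be:
```py
from transformers import pipeline

# predict the masked long-unit-word and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-base-japanese-aozora")
print([(r["token_str"],r["score"]) for r in fmp("日本に着いたら[MASK]を訪ねなさい。")])
```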
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-base-japanese-aozora | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-base-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-base-japanese-char-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-base-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-base-japanese-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-base-thai-char-upos
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-char](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-char-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]} | KoichiYasuoka/roberta-base-thai-char-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"th",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-base-thai-char
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts, with character-wise embeddings for use with BertTokenizerFast. You can fine-tune `roberta-base-thai-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-char")
```
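As an illustrative fill-mask sketch (the Thai proverb is borrowed from the roberta-base-thai-char-upos card, with one character replaced by `[MASK]`; treat the prediction as a demo only):
```py
from transformers import pipeline

# predict the masked character in the proverb and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-base-thai-char")
print([(r["token_str"],r["score"]) for r in fmp("หลายหัวดี[MASK]ว่าหัวเดียว")])
```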
| {"language": ["th"], "license": "apache-2.0", "tags": ["thai", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]"} | KoichiYasuoka/roberta-base-thai-char | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"thai",
"masked-lm",
"wikipedia",
"th",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-base-thai-spm-upos
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-spm-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]} | KoichiYasuoka/roberta-base-thai-spm-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"th",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-base-thai-spm
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune `roberta-base-thai-spm` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
```
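An illustrative fill-mask sketch (the Thai proverb is borrowed from the roberta-base-thai-spm-upos card, with one span replaced by `[MASK]`; the mask covers a single sentencepiece token, so the output is a demo only):
```py
from transformers import pipeline

# predict the masked sentencepiece token and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-base-thai-spm")
print([(r["token_str"],r["score"]) for r in fmp("หลายหัวดี[MASK]หัวเดียว")])
```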
| {"language": ["th"], "license": "apache-2.0", "tags": ["thai", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]"} | KoichiYasuoka/roberta-base-thai-spm | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"thai",
"masked-lm",
"wikipedia",
"th",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-base-thai-syllable-upos
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-syllable-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["th"], "license": "apache-2.0", "tags": ["thai", "token-classification", "pos", "wikipedia", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u0e2b\u0e25\u0e32\u0e22\u0e2b\u0e31\u0e27\u0e14\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2b\u0e31\u0e27\u0e40\u0e14\u0e35\u0e22\u0e27"}]} | KoichiYasuoka/roberta-base-thai-syllable-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"th",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-base-thai-syllable
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from [wangchanberta-base-wiki-syllable](https://huggingface.co/airesearch/wangchanberta-base-wiki-syllable). Character-embeddings are modified for use with BertTokenizerFast. You can fine-tune `roberta-base-thai-syllable` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable")
```
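A minimal fill-mask sketch (the sentence is this card's widget example; note that this model's mask token is `<mask>`, not `[MASK]`):
```py
from transformers import pipeline

# predict the masked syllable and print the top candidates; the mask token is <mask>
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-base-thai-syllable")
print([(r["token_str"],r["score"]) for r in fmp("แผนกนี้กำลัง<mask>กับความท้าทายใหม่")])
```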
| {"language": ["th"], "license": "apache-2.0", "tags": ["thai", "masked-lm", "wikipedia"], "pipeline_tag": "fill-mask", "mask_token": "<mask>", "widget": [{"text": "\u0e41\u0e1c\u0e19\u0e01\u0e19\u0e35\u0e49\u0e01\u0e33\u0e25\u0e31\u0e07<mask>\u0e01\u0e31\u0e1a\u0e04\u0e27\u0e32\u0e21\u0e17\u0e49\u0e32\u0e17\u0e32\u0e22\u0e43\u0e2b\u0e21\u0e48"}]} | KoichiYasuoka/roberta-base-thai-syllable | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"thai",
"masked-lm",
"wikipedia",
"th",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-classical-chinese-base-char
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base). Character-embeddings are enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-base-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char")
```
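A minimal fill-mask sketch (the sentence is this card's widget example, with `[MASK]` as the mask token) would be:
```py
from transformers import pipeline

# predict the masked character and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-classical-chinese-base-char")
print([(r["token_str"],r["score"]) for r in fmp("孟子[MASK]梁惠王")])
```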
## See Also
[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
| {"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u5b5f\u5b50[MASK]\u6881\u60e0\u738b"}]} | KoichiYasuoka/roberta-classical-chinese-base-char | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"classical chinese",
"literary chinese",
"ancient chinese",
"masked-lm",
"lzh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-classical-chinese-base-sentence-segmentation
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every segmented sentence begins with token-class "B" and ends with token-class "E"; a single-character sentence is tagged with token-class "S" instead.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation")
s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
```
## Reference
Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
| {"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "sentence segmentation", "token-classification"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]} | KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"classical chinese",
"literary chinese",
"ancient chinese",
"sentence segmentation",
"lzh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-classical-chinese-base-upos
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-base-upos")
```
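To inspect the tags without esupar, a minimal sketch with the token-classification pipeline (the text is the opening of this card's widget example) could be:
```py
from transformers import pipeline

# print (word, UPOS) pairs; aggregation merges multi-character words
tagger=pipeline("token-classification",model="KoichiYasuoka/roberta-classical-chinese-base-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("子曰學而時習之")])
```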
## Reference
Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]} | KoichiYasuoka/roberta-classical-chinese-base-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"classical chinese",
"literary chinese",
"ancient chinese",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-classical-chinese-large-char
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large). Character-embeddings are enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-large-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char")
```
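As with the base model, a minimal fill-mask sketch (widget sentence, `[MASK]` mask token):
```py
from transformers import pipeline

# predict the masked character and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-classical-chinese-large-char")
print([(r["token_str"],r["score"]) for r in fmp("孟子[MASK]梁惠王")])
```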
## See Also
[SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
| {"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u5b5f\u5b50[MASK]\u6881\u60e0\u738b"}]} | KoichiYasuoka/roberta-classical-chinese-large-char | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"classical chinese",
"literary chinese",
"ancient chinese",
"masked-lm",
"lzh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-classical-chinese-large-sentence-segmentation
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every segmented sentence begins with token-class "B" and ends with token-class "E"; a single-character sentence is tagged with token-class "S" instead.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
```
## Reference
Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
| {"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "sentence segmentation", "token-classification"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]} | KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"classical chinese",
"literary chinese",
"ancient chinese",
"sentence segmentation",
"lzh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-classical-chinese-large-upos
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-large-upos")
```
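For a quick look at the tags without esupar, a minimal token-classification sketch (the text is the opening of this card's widget example) would be:
```py
from transformers import pipeline

# print (word, UPOS) pairs; aggregation merges multi-character words
tagger=pipeline("token-classification",model="KoichiYasuoka/roberta-classical-chinese-large-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("子曰學而時習之")])
```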
## Reference
Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["lzh"], "license": "apache-2.0", "tags": ["classical chinese", "literary chinese", "ancient chinese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b78\u800c\u6642\u7fd2\u4e4b\u4e0d\u4ea6\u8aac\u4e4e\u6709\u670b\u81ea\u9060\u65b9\u4f86\u4e0d\u4ea6\u6a02\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u614d\u4e0d\u4ea6\u541b\u5b50\u4e4e"}]} | KoichiYasuoka/roberta-classical-chinese-large-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"classical chinese",
"literary chinese",
"ancient chinese",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-large-english-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-large](https://huggingface.co/roberta-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-english-upos")
```
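As with the base model, a minimal sketch of the tagger output (the English sentence is an arbitrary example) could be:
```py
from transformers import pipeline

# print (word, UPOS) pairs for an example sentence
tagger=pipeline("token-classification",model="KoichiYasuoka/roberta-large-english-upos",aggregation_strategy="simple")
print([(t["word"],t["entity_group"]) for t in tagger("It is raining cats and dogs.")])
```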
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["english", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"} | KoichiYasuoka/roberta-large-english-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"english",
"pos",
"dependency-parsing",
"en",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-large-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-large-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
```
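Beyond loading, a minimal fill-mask sketch (the sentence is this card's widget example; with a character tokenizer the `[MASK]` covers a single character) could be:
```py
from transformers import pipeline

# predict the masked character and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-large-japanese-aozora-char")
print([(r["token_str"],r["score"]) for r in fmp("日本に着いたら[MASK]を訪ねなさい。")])
```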
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-large-japanese-aozora-char | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-large-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
```
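A minimal fill-mask sketch (the sentence is this card's widget example; here `[MASK]` stands for one long-unit-word) might be:
```py
from transformers import pipeline

# predict the masked long-unit-word and print the top candidates
fmp=pipeline("fill-mask",model="KoichiYasuoka/roberta-large-japanese-aozora")
print([(r["token_str"],r["score"]) for r in fmp("日本に着いたら[MASK]を訪ねなさい。")])
```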
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-large-japanese-aozora | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-large-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-large-japanese-char-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-large-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
Koichi Yasuoka: [Construction of Japanese Dependency-Parsing Models Using Transformers and NINJAL Long-Unit-Words](http://id.nii.ac.jp/1001/00216223/) (in Japanese), IPSJ SIG Technical Report, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-large-japanese-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-small-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-small-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-char-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
```
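For a quick sanity check of the masked-LM head, the loaded model can be wrapped in a fill-mask pipeline. This is only a rough sketch: the example sentence is the widget text above and is purely illustrative.
```py
from transformers import FillMaskPipeline
fill=FillMaskPipeline(model=model,tokenizer=tokenizer)
print(fill("日本に着いたら[MASK]を訪ねなさい。"))
```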
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-small-japanese-aozora-char | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# roberta-small-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-small-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
```
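The top mask-fill candidates can also be read off the logits by hand. A minimal sketch, assuming the tokenizer maps `[MASK]` to its mask token id; the example sentence is the widget text and is only illustrative.
```py
import torch
s="日本に着いたら[MASK]を訪ねなさい。"
ids=tokenizer.encode(s,return_tensors="pt")
mask=(ids[0]==tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
logits=model(ids)["logits"]
print(tokenizer.convert_ids_to_tokens(torch.topk(logits[0,mask[0]],k=5).indices.tolist()))
```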
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-small-japanese-aozora | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-small-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-small-japanese-char-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# roberta-small-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-small-japanese-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# xlm-roberta-base-english-upos
## Model Description
This is an XLM-RoBERTa model pre-trained with [UD_English-EWT](https://github.com/UniversalDependencies/UD_English-EWT) for POS-tagging and dependency-parsing, derived from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/xlm-roberta-base-english-upos")
```
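For raw UPOS tags without esupar, the `TokenClassificationPipeline` pattern used by the other UPOS models above also works here. A minimal sketch; the example sentence is only illustrative.
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("It was a dark and stormy night."))
```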
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["english", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"} | KoichiYasuoka/xlm-roberta-base-english-upos | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"english",
"pos",
"dependency-parsing",
"en",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | null | #Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Konggate/DialoGPT-small-harrypotter | null | [
"conversational",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# A lite RoBERTa fill-mask model trained mostly on Greek tweets
The training dataset of this model consists of 23 million tweets in Greek from approximately 5,000 users in total, spanning from 2008 to 2018.
The model was trained to support the work in the paper [Multimodal Hate Speech Detection in Greek Social Media](https://www.mdpi.com/2414-4088/5/7/34).
## Load the pretrained model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Konstantinos/BERTaTweetGR")
model = AutoModel.from_pretrained("Konstantinos/BERTaTweetGR")
```
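A fill-mask pipeline can then be used to get predictions for the `<mask>` token. A minimal sketch; the example sentence is the widget text and is only illustrative.
```python
from transformers import pipeline
fill = pipeline("fill-mask", model="Konstantinos/BERTaTweetGR")
print(fill("μπαινω στο <mask> και τι να δω."))
```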
| {"language": "el", "widget": [{"text": "\u03bc\u03c0\u03b1\u03b9\u03bd\u03c9 \u03c3\u03c4\u03bf <mask> \u03ba\u03b1\u03b9 \u03c4\u03b9 \u03bd\u03b1 \u03b4\u03c9."}]} | Konstantinos/BERTaTweetGR | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"el",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelForCausalLM.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") | {} | Kookly/Kooklybots | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Koraiem/test_1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
I'm dumb | {"tags": ["conversational"]} | Koriyy/DialoGPT-medium-gf | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | Koro/DialoGPT-medium-rickandmorty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | null |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | Koro/DialoGPT-small-rickandmorty | null | [
"conversational",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Koshi-108/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kosmo/Kosmo | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kothi/model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kouki/wav2vec2-common-voice-ja | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | # Bangla BERT Base
Here we publish a pretrained Bangla BERT language model, **bangla-bert**, which is now available in the Hugging Face model hub.
We describe [bangla-bert](https://github.com/Kowsher/bert-base-bangla), a pretrained Bangla language model based on the masked language modeling objective described in [BERT](https://arxiv.org/abs/1810.04805) and the accompanying GitHub [repository](https://github.com/google-research/bert).
## Corpus Details
We trained the Bangla BERT language model using the BanglaLM dataset from Kaggle: [BanglaLM](https://www.kaggle.com/gakowsher/bangla-language-model-dataset). There are three versions of the dataset, amounting to almost 40GB.
After downloading the dataset, we proceeded with masked language model training.
**bangla-bert Tokenizer**
```py
from transformers import AutoTokenizer, AutoModel
bnbert_tokenizer = AutoTokenizer.from_pretrained("Kowsher/bangla-bert")
text = "খাঁটি সোনার চাইতে খাঁটি আমার দেশের মাটি"
bnbert_tokenizer.tokenize(text)
# output: ['খাটি', 'সে', '##ানার', 'চাইতে', 'খাটি', 'আমার', 'দেশের', 'মাটি']
```
**MASK Generation**
Here, we can use the Bangla BERT base model for masked language modeling:
```py
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained("Kowsher/bangla-bert")
tokenizer = BertTokenizer.from_pretrained("Kowsher/bangla-bert")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"আমি বাংলার গান {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'আমি বাংলার গান লিখি', 'score': 0.17955434322357178, 'token': 24749, 'token_str': 'লিখি'}
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"তুই রাজাকার তুই {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'তুই রাজাকার তুই রাজাকার', 'score': 0.9975168704986572, 'token': 13401, 'token_str': 'রাজাকার'}
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"বাংলা আমার {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'বাংলা আমার অহংকার', 'score': 0.5679506063461304, 'token': 19009, 'token_str': 'অহংকার'}
```
**Cite this work**
M. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar and T. Koshiba, "Bangla-BERT: Transformer-based Efficient Model for Transfer Learning and Language Understanding," in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3197662.
## Author
[Kowsher](http://kowsher.org/)
| {"language": "bn", "tags": ["Bert base Bangla", "Bengali Bert", "Bengali lm", "Bangla Base Bert", "Bangla Bert language model", "Bangla Bert"], "datasets": ["BanglaLM dataset"]} | Kowsher/bangla-bert | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Bert base Bangla",
"Bengali Bert",
"Bengali lm",
"Bangla Base Bert",
"Bangla Bert language model",
"Bangla Bert",
"bn",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | {} | Kowsher/bert-base-bangla-ner | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Kowsher/model-bangla-bert | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kr33p/DialoGPT-medium-Albedo | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | KranNaut/bert-tagalog-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | KranNaut/finetuned-bert-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9005
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
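As a rough usage sketch, the fine-tuned checkpoint can be loaded in a text-classification pipeline. The assumption (suggested by the amazon_reviews_multi dataset and the MAE metric) is that the head predicts the 1–5 star rating of a review; the label names may appear as generic `LABEL_*` ids.
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="Krassy/xlm-roberta-base-finetuned-marc-en")
print(classifier("This product stopped working after two days."))
```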
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.108 | 1.0 | 235 | 0.9801 | 0.5610 |
| 0.9592 | 2.0 | 470 | 0.9005 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]} | Krassy/xlm-roberta-base-finetuned-marc-en | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Santa Chatbot | {"tags": ["conversational"]} | KringleClaus/Dialog-santa | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-plot
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
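As a rough usage sketch, a plot continuation can be sampled from the fine-tuned checkpoint; the prompt and generation settings below are only illustrative.
```python
from transformers import pipeline
generator = pipeline("text-generation", model="KrishParikh/gpt2_imdb_movie_plots")
print(generator("A retired detective takes one last case.",
                max_length=100, do_sample=True, top_p=0.95)[0]["generated_text"])
```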
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-plot", "results": []}]} | KrishParikh/gpt2_imdb_movie_plots | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | KrishanuMishra/DialoGPT-medium-Rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | ---
tags:
- conversational
--- | {} | KrishnaChandra4/DialoGPT-small-Rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | KrishnaChandra4/DialoGPT-small-joshua | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPTModel | {"tags": ["conversational"]} | KrispyIChris/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | # Buro discord bot | {"tags": ["conversational"]} | Kryptone/Burobot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | # Rin chatbot | {"tags": ["conversational"]} | Kryptone/RinAI | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# MoniKA unstable | {"tags": ["conversational"]} | Kryptone/monikAI-Unstable | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | # Monika Discord Chatbot | {"tags": ["conversational"]} | Kryptone/monikAI | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | ## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by [*ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*](https://arxiv.org/abs/2202.05599). | {"license": "cc-by-nc-sa-4.0"} | Krystalan/mdialbart_de | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2202.05599",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers | ## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by [*ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*](https://arxiv.org/abs/2202.05599). | {"license": "cc-by-nc-sa-4.0"} | Krystalan/mdialbart_zh | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2202.05599",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | Kshaunish/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Kudoz/DialoGPT-medium-Morty | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kuge266/DialoGPT-medium-Rollercoaster | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kuge266/DialoGPT-small-Rollercoaster | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7758
- Matthews Correlation: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
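As a rough usage sketch, assuming the fine-tuned head predicts CoLA-style linguistic acceptability (as the dataset and the Matthews-correlation metric suggest); the labels may appear as the generic `LABEL_0`/`LABEL_1`.
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="Kumicho/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by John."))   # a grammatical sentence
print(classifier("Book the was John by written."))   # a scrambled, ungrammatical sentence
```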
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1926 | 1.0 | 535 | 0.7758 | 0.5259 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5258663312307151, "name": "Matthews Correlation"}]}]}]} | Kumicho/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Kup/gpt2-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# librispeech-100h-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0345
## Model description
More information needed
## Intended uses & limitations
More information needed
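As a rough inference sketch, assuming the repository ships the usual Wav2Vec2 processor/tokenizer files and that the input is 16 kHz mono audio (the file name is hypothetical):
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
processor = Wav2Vec2Processor.from_pretrained("Kuray107/librispeech-100h-supervised")
model = Wav2Vec2ForCTC.from_pretrained("Kuray107/librispeech-100h-supervised")
speech, sample_rate = sf.read("sample.wav")  # hypothetical 16 kHz mono file
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```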
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.8277 | 0.42 | 500 | 2.9071 | 1.0 |
| 2.0261 | 0.84 | 1000 | 0.3060 | 0.2496 |
| 0.2181 | 1.26 | 1500 | 0.1172 | 0.0873 |
| 0.1255 | 1.68 | 2000 | 0.0894 | 0.0637 |
| 0.0971 | 2.1 | 2500 | 0.0821 | 0.0560 |
| 0.078 | 2.52 | 3000 | 0.0751 | 0.0500 |
| 0.0706 | 2.94 | 3500 | 0.0721 | 0.0456 |
| 0.0609 | 3.36 | 4000 | 0.0755 | 0.0464 |
| 0.0572 | 3.78 | 4500 | 0.0705 | 0.0431 |
| 0.0528 | 4.2 | 5000 | 0.0715 | 0.0423 |
| 0.0481 | 4.62 | 5500 | 0.0691 | 0.0403 |
| 0.0471 | 5.04 | 6000 | 0.0743 | 0.0401 |
| 0.0412 | 5.46 | 6500 | 0.0757 | 0.0399 |
| 0.0416 | 5.88 | 7000 | 0.0688 | 0.0378 |
| 0.0391 | 6.3 | 7500 | 0.0704 | 0.0383 |
| 0.0367 | 6.72 | 8000 | 0.0742 | 0.0387 |
| 0.0349 | 7.14 | 8500 | 0.0732 | 0.0388 |
| 0.033 | 7.56 | 9000 | 0.0719 | 0.0374 |
| 0.0327 | 7.98 | 9500 | 0.0750 | 0.0369 |
| 0.0292 | 8.4 | 10000 | 0.0734 | 0.0368 |
| 0.0303 | 8.82 | 10500 | 0.0733 | 0.0365 |
| 0.0283 | 9.24 | 11000 | 0.0766 | 0.0357 |
| 0.0269 | 9.66 | 11500 | 0.0761 | 0.0350 |
| 0.0268 | 10.08 | 12000 | 0.0802 | 0.0359 |
| 0.0245 | 10.42 | 12500 | 0.0758 | 0.0354 |
| 0.023 | 10.84 | 13000 | 0.0775 | 0.0349 |
| 0.0186 | 11.26 | 13500 | 0.0817 | 0.0355 |
| 0.0176 | 11.68 | 14000 | 0.0853 | 0.0354 |
| 0.0163 | 12.1 | 14500 | 0.0880 | 0.0347 |
| 0.0156 | 12.52 | 15000 | 0.0864 | 0.0357 |
| 0.0141 | 12.94 | 15500 | 0.0897 | 0.0355 |
| 0.0134 | 13.36 | 16000 | 0.0915 | 0.0349 |
| 0.013 | 13.78 | 16500 | 0.0928 | 0.0350 |
| 0.0097 | 13.42 | 17000 | 0.0955 | 0.0345 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "librispeech-100h-supervised", "results": []}]} | Kuray107/librispeech-100h-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# timit-5percent-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6615
- Wer: 0.2788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.3773 | 33.33 | 500 | 2.9693 | 1.0 |
| 1.4746 | 66.67 | 1000 | 0.5050 | 0.3359 |
| 0.1067 | 100.0 | 1500 | 0.5981 | 0.3054 |
| 0.0388 | 133.33 | 2000 | 0.6192 | 0.2712 |
| 0.0244 | 166.67 | 2500 | 0.6392 | 0.2776 |
| 0.018 | 200.0 | 3000 | 0.6615 | 0.2788 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "timit-5percent-supervised", "results": []}]} | Kuray107/timit-5percent-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# timit-supervised
This model is a fine-tuned version of [Experiments/single_dataset/timit-supervised/checkpoint-3500](https://huggingface.co/Experiments/single_dataset/timit-supervised/checkpoint-3500) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1272
- Wer: 0.0532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0554 | 1.77 | 500 | 0.1310 | 0.0697 |
| 0.0509 | 3.53 | 1000 | 0.1497 | 0.0710 |
| 0.038 | 5.3 | 1500 | 0.1190 | 0.0659 |
| 0.0328 | 7.07 | 2000 | 0.0926 | 0.0596 |
| 0.0247 | 8.83 | 2500 | 0.0873 | 0.0570 |
| 0.0229 | 10.6 | 3000 | 0.0890 | 0.0532 |
| 0.0183 | 12.37 | 3500 | 0.0969 | 0.0532 |
| 0.0326 | 14.13 | 4000 | 0.0809 | 0.0469 |
| 0.03 | 15.9 | 4500 | 0.0758 | 0.0444 |
| 0.0264 | 17.67 | 5000 | 0.0973 | 0.0520 |
| 0.0244 | 19.43 | 5500 | 0.1272 | 0.0532 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "timit-supervised", "results": []}]} | Kuray107/timit-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsj0-full-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Wer: 0.0343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.517 | 0.86 | 500 | 2.9475 | 1.0 |
| 2.2387 | 1.72 | 1000 | 0.4004 | 0.3498 |
| 0.3081 | 2.57 | 1500 | 0.1362 | 0.1159 |
| 0.1744 | 3.43 | 2000 | 0.1125 | 0.0929 |
| 0.1285 | 4.29 | 2500 | 0.0894 | 0.0727 |
| 0.1015 | 5.15 | 3000 | 0.0852 | 0.0642 |
| 0.0811 | 6.0 | 3500 | 0.0789 | 0.0614 |
| 0.0748 | 6.86 | 4000 | 0.0746 | 0.0529 |
| 0.0639 | 7.72 | 4500 | 0.0714 | 0.0481 |
| 0.0606 | 8.58 | 5000 | 0.0698 | 0.0489 |
| 0.0525 | 9.43 | 5500 | 0.0747 | 0.0464 |
| 0.0489 | 10.29 | 6000 | 0.0594 | 0.0396 |
| 0.0419 | 11.15 | 6500 | 0.0600 | 0.0359 |
| 0.0414 | 12.01 | 7000 | 0.0612 | 0.0412 |
| 0.0383 | 12.86 | 7500 | 0.0676 | 0.0392 |
| 0.0352 | 13.72 | 8000 | 0.0626 | 0.0388 |
| 0.034 | 14.58 | 8500 | 0.0699 | 0.0372 |
| 0.0309 | 15.44 | 9000 | 0.0807 | 0.0420 |
| 0.0295 | 16.3 | 9500 | 0.0796 | 0.0396 |
| 0.0273 | 17.15 | 10000 | 0.0716 | 0.0376 |
| 0.0271 | 18.01 | 10500 | 0.0657 | 0.0384 |
| 0.0251 | 18.87 | 11000 | 0.0585 | 0.0351 |
| 0.024 | 19.73 | 11500 | 0.0557 | 0.0347 |
| 0.0252 | 20.58 | 12000 | 0.0609 | 0.0327 |
| 0.0231 | 21.44 | 12500 | 0.0720 | 0.0368 |
| 0.0202 | 22.3 | 13000 | 0.0625 | 0.0343 |
| 0.0195 | 23.16 | 13500 | 0.0635 | 0.0372 |
| 0.0201 | 24.01 | 14000 | 0.0582 | 0.0335 |
| 0.0183 | 24.87 | 14500 | 0.0562 | 0.0343 |
| 0.0183 | 25.73 | 15000 | 0.0629 | 0.0335 |
| 0.0175 | 26.59 | 15500 | 0.0593 | 0.0323 |
| 0.017 | 27.44 | 16000 | 0.0631 | 0.0339 |
| 0.0162 | 28.3 | 16500 | 0.0597 | 0.0335 |
| 0.0169 | 29.16 | 17000 | 0.0623 | 0.0343 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wsj0-full-supervised", "results": []}]} | Kuray107/wsj0-full-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Kush/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Kutlwano/AutoLyrist | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kyaw/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kyaw/t5-small-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kyobkiq/opus-mt-finetuned-en-to-de | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kyon/K | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kyon/Kyon | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | This is a **KOREAN** BERT masked LM pretrained model adapted to the **BEAUTY** domain (BertForMaskedLM).
About 60,000 reviews were used.
It was fine-tuned based on _beomi/kcbert-base_ model weights.
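Since the checkpoint is a BertForMaskedLM, it can be queried with a fill-mask pipeline. A minimal sketch, assuming the kcbert-style `[MASK]` token; the Korean example sentence is only illustrative.
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("Kyoungmin/beauty-base-KLCP")
model = AutoModelForMaskedLM.from_pretrained("Kyoungmin/beauty-base-KLCP")
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("이 제품은 [MASK]에 정말 좋아요."))
```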
Enjoy! | {} | Kyoungmin/beauty-base-KLCP | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | The **second** BertForMaskedLM pretrained model in the **KOREAN Beauty** domain.
About 120,000 reviews were used.
It was trained based on _beomi/kcbert-base_.
Check out _Kyoungmin/beauty-base-KLCP_ for smaller model !! | {} | Kyoungmin/beauty-base-KLCP2 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | No use | {} | Kyoungmin/beauty-word2vec | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | This is a practice model for kcbert-base with Korean petition data! | {} | Kyoungmin/kcbert-base-petition | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | transformers | {} | Kyuyoung11/haremotions-v1 | null | [
"transformers",
"electra",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Kyuyoung11/haremotions-v2 | null | [
"transformers",
"pytorch",
"electra",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Kyuyoung11/haremotions-v3 | null | [
"transformers",
"pytorch",
"electra",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Kyuyoung11/haremotions-v4 | null | [
"transformers",
"pytorch",
"electra",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Kyuyoung11/haremotions-v5 | null | [
"transformers",
"pytorch",
"electra",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Kyuyoung11/haremotions_audio_v1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
#VADER DialogGPT Model | {"tags": ["conversational"]} | LARACHNIDE/DialogGPT-small-sw | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | LARACHNIDE/DialogGPT-small-sw2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
multiple-choice | transformers |
# Roberta Large Fine Tuned on RACE
## Model description
This model follows the implementation by the Allen AI team of the [Aristo Roberta V7 Model](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0) submitted to the [ARC Challenge](https://leaderboard.allenai.org/arc/submissions/public).
#### How to use
```python
import logging
import datasets
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice
MAX_SEQ_LENGTH = 256  # max_length used during fine-tuning
tokenizer = RobertaTokenizer.from_pretrained(
    "LIAMF-USP/aristo-roberta")
model = RobertaForMultipleChoice.from_pretrained(
    "LIAMF-USP/aristo-roberta")
dataset = datasets.load_dataset(
    "arc",  # ARC data preprocessed with per-option contexts (see Training data below)
    split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]
example = training_examples[0]
example_id = example["example_id"]
question = example["question"]
label_example = example["answer"]
options = example["options"]
if label_example in ["A", "B", "C", "D", "E"]:
    label_map = {label: i for i, label in enumerate(
        ["A", "B", "C", "D", "E"])}
elif label_example in ["1", "2", "3", "4", "5"]:
    label_map = {label: i for i, label in enumerate(
        ["1", "2", "3", "4", "5"])}
else:
    print(f"{label_example} not found")
# pad the options to 5 choices with empty entries
while len(options) < 5:
    empty_option = {}
    empty_option['option_context'] = ''
    empty_option['option_text'] = ''
    options.append(empty_option)
choices_inputs = []
for ending_idx, option in enumerate(options):
    ending = option["option_text"]
    context = option["option_context"]
    if question.find("_") != -1:
        # fill-in-the-blank questions
        question_option = question.replace("_", ending)
    else:
        question_option = question + " " + ending
    inputs = tokenizer(
        context,
        question_option,
        add_special_tokens=True,
        max_length=MAX_SEQ_LENGTH,
        padding="max_length",
        truncation=True,
        return_overflowing_tokens=False,
    )
    if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
        logging.warning(f"Question: {example_id} with option {ending_idx} was truncated")
    choices_inputs.append(inputs)
label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
    [x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure, just one of them is
    # necessary to check
    if "attention_mask" in choices_inputs[0]
    else None
)
# the model expects tensors of shape (batch_size, num_choices, seq_len);
# example_id is not a model input, so it is not passed to the forward call
example_encoded = {
    "input_ids": torch.tensor([input_ids]),
    "attention_mask": None if attention_mask is None else torch.tensor([attention_mask]),
    "labels": torch.tensor([label]),
}
output = model(**example_encoded)
```
## Training data
The training data was the same as proposed [here](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0).
The only difference was the hyperparameters of the RACE fine-tuned model, which were reported [here](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race#eval-results).
## Training procedure
It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The used hyperparameters were the following:
| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 16 |
| train_batch_size | 4 |
| fp16 | True |
| gradient_accumulation_steps | 4 |
| learning_rate | 0.00001 |
| warmup_steps | 0.06 |
| max_length | 256 |
| epochs | 4 |
The other parameters were the default ones from [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) and [Trainer Arguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)
## Eval results:
| Dataset | Accuracy |
|:----:|:----:|
| ARC Challenge Test | 65.358 |
**The model was trained with a TITAN RTX**
| {"language": "english", "license": "mit", "datasets": ["race", "ai2_arc", "openbookqa"], "metrics": ["accuracy"]} | LIAMF-USP/aristo-roberta | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"multiple-choice",
"dataset:race",
"dataset:ai2_arc",
"dataset:openbookqa",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
multiple-choice | transformers |
# Roberta Large Fine Tuned on RACE
## Model description
This model is a fine-tuned model of Roberta-large applied on RACE
#### How to use
```python
import datasets
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice
MAX_SEQ_LENGTH = 512  # max_length used during fine-tuning
tokenizer = RobertaTokenizer.from_pretrained(
    "LIAMF-USP/roberta-large-finetuned-race")
model = RobertaForMultipleChoice.from_pretrained(
    "LIAMF-USP/roberta-large-finetuned-race")
dataset = datasets.load_dataset(
    "race",
    "all",
    split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]
example = training_examples[0]
example_id = example["example_id"]
question = example["question"]
context = example["article"]
options = example["options"]
label_example = example["answer"]
label_map = {label: i
             for i, label in enumerate(["A", "B", "C", "D"])}
choices_inputs = []
for ending_idx, ending in enumerate(options):
    if question.find("_") != -1:
        # fill-in-the-blank questions
        question_option = question.replace("_", ending)
    else:
        question_option = question + " " + ending
    inputs = tokenizer(
        context,
        question_option,
        add_special_tokens=True,
        max_length=MAX_SEQ_LENGTH,
        padding="max_length",
        truncation=True,
        return_overflowing_tokens=False,
    )
    choices_inputs.append(inputs)
label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
    [x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure,
    # just one of them is necessary to check
    if "attention_mask" in choices_inputs[0]
    else None
)
# the model expects tensors of shape (batch_size, num_choices, seq_len);
# example_id is not a model input, so it is not passed to the forward call
example_encoded = {
    "input_ids": torch.tensor([input_ids]),
    "attention_mask": None if attention_mask is None else torch.tensor([attention_mask]),
    "labels": torch.tensor([label]),
}
output = model(**example_encoded)
```
## Training data
The initial model was [roberta large model](https://huggingface.co/roberta-large) which was then fine-tuned on [RACE dataset](https://www.cs.cmu.edu/~glai1/data/race/)
## Training procedure
It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The used hyperparameters were the following:
| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 32 |
| train_batch_size | 1 |
| fp16 | True |
| gradient_accumulation_steps | 16 |
| learning_rate | 0.00001 |
| warmup_steps | 1000 |
| max_length | 512 |
| epochs | 4 |
## Eval results:
| Dataset Acc | Eval | All Test | High School Test | Middle School Test |
|:----:|:----:|:----:|:----:|:----:|
| RACE | 85.2 | 84.9 | 83.5 | 88.0 |
**The model was trained with a Tesla V100-PCIE-16GB** | {"language": "english", "license": "mit", "datasets": ["race"], "metrics": ["accuracy"]} | LIAMF-USP/roberta-large-finetuned-race | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"multiple-choice",
"dataset:race",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | LJ/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |