| Column | Dtype | Range / classes |
|---|---|---|
| repo_id | string | length 4–122 |
| author | string | length 2–38 |
| model_type | string | length 2–33 |
| files_per_repo | int64 | 2–39k |
| downloads_30d | int64 | 0–33.7M |
| library | string | length 2–37 |
| likes | int64 | 0–4.87k |
| pipeline | string | length 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | length 2–33 |
| languages | string | length 2–1.63k |
| datasets | string | length 2–2.58k |
| co2 | string | length 6–258 |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–46 |
| prs_closed | int64 | 0–34 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | length 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201–598k |
| readme | string | length 0–598k |
KETI-AIR/ke-t5-small-newslike
KETI-AIR
t5
9
4
transformers
0
text2text-generation
true
true
true
apache-2.0
['ko', 'en']
null
null
0
0
0
0
0
0
0
['t5']
false
true
true
2,391
# ke-t5 base Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details. ## How to use ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("KETI-AIR/ke-t5-small-newslike") tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small-newslike") ``` ## BibTeX entry and citation info ```bibtex @inproceedings{kim-etal-2021-model-cross, title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems", author = "Kim, San and Jang, Jin Yea and Jung, Minyoung and Shin, Saim", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.33", doi = "10.18653/v1/2021.findings-emnlp.33", pages = "352--365", abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.", } ```
KETI-AIR/ke-t5-small
KETI-AIR
t5
9
939
transformers
1
text2text-generation
true
true
true
apache-2.0
['en', 'ko']
null
null
0
0
0
0
0
0
0
['t5']
false
true
true
2,373
# ke-t5 base Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details. ## How to use ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("KETI-AIR/ke-t5-small") tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small") ``` ## BibTeX entry and citation info ```bibtex @inproceedings{kim-etal-2021-model-cross, title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems", author = "Kim, San and Jang, Jin Yea and Jung, Minyoung and Shin, Saim", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.33", doi = "10.18653/v1/2021.findings-emnlp.33", pages = "352--365", abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.", } ```
Kalindu/SinBerto
Kalindu
roberta
7
7
transformers
0
fill-mask
true
false
false
null
['si']
null
null
0
0
0
0
0
0
0
['SinBERTo', 'Sinhala', 'roberta']
false
true
true
713
### Overview SinBerto is a small language model trained on a small news corpus. SinBerto is trained for Sinhala, a low-resource language compared to many other languages. ### Model Specifications model: [RoBERTa](https://arxiv.org/abs/1907.11692), vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1 ### How to use from the Transformers library from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto") model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto") ### Or clone the model repo git lfs install git clone https://huggingface.co/Kalindu/SinBerto
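The specifications listed in the card above map directly onto a `RobertaConfig`; the sketch below is not part of the original card and builds a randomly initialized model rather than the released weights, purely to show how the described architecture would be instantiated:

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Architecture described in the SinBerto card; the weights here are random,
# the pretrained checkpoint itself lives at Kalindu/SinBerto.
config = RobertaConfig(
    vocab_size=52_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config)
print(f"{model.num_parameters():,} parameters")
```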
KamSut/distilbert-base-uncased-finetuned-ner
KamSut
distilbert
13
11
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,554
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9271 - Recall: 0.9381 - F1: 0.9326 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2324 | 1.0 | 878 | 0.0688 | 0.9146 | 0.9264 | 0.9205 | 0.9816 | | 0.0517 | 2.0 | 1756 | 0.0620 | 0.9207 | 0.9329 | 0.9268 | 0.9829 | | 0.0301 | 3.0 | 2634 | 0.0604 | 0.9271 | 0.9381 | 0.9326 | 0.9836 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
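The usage sections in the card above are placeholders; as a hedged inference sketch (the sentence is an invented example, and `aggregation_strategy` assumes a reasonably recent transformers release):

```python
from transformers import pipeline

# Token-classification (NER) inference with the fine-tuned checkpoint from the card above.
ner = pipeline(
    "token-classification",
    model="KamSut/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```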
SI2M-Lab/DarijaBERT-arabizi
SI2M-Lab
bert
8
74
transformers
1
fill-mask
true
false
false
null
['ar']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,381
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists and the NLP (Natural Language Processing) community the first intelligent Open Source system that understands the Moroccan dialectal language "Darija". **DarijaBERT** is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model is the Arabizi-specific version of DarijaBERT: it was trained on a total of ~4.6 million sequences of the Darija dialect written in Latin letters. The training dataset was built from YouTube comments. More details about DarijaBERT are available in the dedicated GitHub [repository](https://github.com/AIOXLABS/DBert) **Loading the model** The model can be loaded directly using the Huggingface library: ```python from transformers import AutoTokenizer, AutoModel DarijaBERT_tokenizer = AutoTokenizer.from_pretrained("Kamel/DarijaBERT-arabizi") DarijaBert_model = AutoModel.from_pretrained("Kamel/DarijaBERT-arabizi") ``` **Acknowledgments** We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs. <font size=2>**Warning**: because this model was trained on texts from social networks, it can unfortunately generate toxic outputs that reflect part of the training data.</font>
SI2M-Lab/DarijaBERT
SI2M-Lab
bert
8
283
transformers
6
fill-mask
true
false
false
null
['ar']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,366
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists and the NLP (Natural Language Processing) community the first intelligent Open Source system that understands the Moroccan dialectal language "Darija". **DarijaBERT** is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model was trained on a total of ~3 million sequences of the Darija dialect, representing 691MB of text or a total of ~100M tokens. The training dataset was drawn from three different sources: * Stories written in Darija scraped from a dedicated website * YouTube comments from 40 different Moroccan channels * Tweets crawled based on a list of Darija keywords. More details about DarijaBERT are available in the dedicated GitHub [repository](https://github.com/AIOXLABS/DBert) **Loading the model** The model can be loaded directly using the Huggingface library: ```python from transformers import AutoTokenizer, AutoModel DarijaBERT_tokenizer = AutoTokenizer.from_pretrained("SI2M-Lab/DarijaBERT") DarijaBert_model = AutoModel.from_pretrained("SI2M-Lab/DarijaBERT") ``` **Acknowledgments** We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs.
Kamel/t5-darija-summarization
Kamel
t5
8
5
transformers
1
text2text-generation
true
false
false
null
['ar']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,124
# MArSum: Moroccan Articles Summarization dataset - [Description](#description) - [Dataset](#dataset) - [Citation](#citation) - [License](#license) ## Description This dataset contains **19,806** news articles written in the Moroccan Arabic dialect along with their titles. The articles were crawled from the [Goud.ma](http://www.goud.ma) website between 01/01/2018 and 12/31/2020. The articles are written mainly in the Moroccan Arabic dialect (Darija) but some of them contain Modern Standard Arabic (MSA) passages. All the titles are written in Darija. The following table summarizes some statistics on the MArSum Dataset. <table class="tg"> <thead> <tr> <th class="tg-0pky" rowspan="2">Size</th> <th class="tg-0pky" colspan="3">Titles length</th> <th class="tg-0pky" colspan="3">Articles length</th> </tr> <tr> <th class="tg-lqy6">Min.</th> <th class="tg-lqy6">Max.</th> <th class="tg-lqy6">Avg.</th> <th class="tg-lqy6">Min.</th> <th class="tg-lqy6">Max.</th> <th class="tg-0lax">Avg.</th> </tr> </thead> <tbody> <tr> <td class="tg-dvpl">19,806</td> <td class="tg-dvpl">2</td> <td class="tg-dvpl">74</td> <td class="tg-dvpl">14.6</td> <td class="tg-dvpl">30</td> <td class="tg-dvpl">2964</td> <td class="tg-0pky">140.7</td> </tr> </tbody> </table> The following figure describes the creation process of MArSum: ![alt text](MArSum_schema_Color1.png) You may refer to our paper, cited below, for more details on this process. ## Dataset The dataset is split into Train/Test subsets using a 90/10 split strategy. Both subsets are available for direct [download](https://github.com/KamelGaanoun/MoroccanSummarization). ## Citation Please cite the following paper if you decide to use the dataset: Gaanoun, K., Naira, A. M., Allak, A., & Benelallam, I. (2022). Automatic Text Summarization for Moroccan Arabic Dialect Using an Artificial Intelligence Approach. In International Conference on Business Intelligence (pp. 158-177). Springer, Cham. ## License The dataset is distributed under the CC BY 4.0 license.
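The card above documents the MArSum dataset rather than the checkpoint itself; as a hedged usage sketch (not part of the original card), the T5 summarizer hosted in this repository can presumably be driven through the standard summarization pipeline:

```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint in this repo is a seq2seq summarizer trained on MArSum.
summarizer = pipeline("summarization", model="Kamel/t5-darija-summarization")
article = "..."  # placeholder for a Moroccan Darija news article, e.g. one crawled from Goud.ma
# 74 is the longest title length reported in the card's statistics table (an assumption as a cap).
print(summarizer(article, max_length=74)[0]["summary_text"])
```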
Katsiaryna/distilbert-base-uncased-finetuned
Katsiaryna
distilbert
20
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,469
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8229 - Accuracy: 0.54 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 7 | 0.7709 | 0.74 | | No log | 2.0 | 14 | 0.7048 | 0.72 | | No log | 3.0 | 21 | 0.8728 | 0.46 | | No log | 4.0 | 28 | 0.7849 | 0.64 | | No log | 5.0 | 35 | 0.8229 | 0.54 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
Katsiaryna/distilbert-base-uncased-finetuned_9th
Katsiaryna
distilbert
12
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,475
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned_9th This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2826 - Accuracy: 0.4462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2357 | 1.0 | 569 | 0.2277 | 0.3474 | | 0.2237 | 2.0 | 1138 | 0.2316 | 0.3474 | | 0.1847 | 3.0 | 1707 | 0.2456 | 0.3712 | | 0.1302 | 4.0 | 2276 | 0.2763 | 0.4602 | | 0.0863 | 5.0 | 2845 | 0.2826 | 0.4462 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
Kayvane/distilbert-complaints-product
Kayvane
distilbert
8
22
transformers
0
text-classification
true
false
false
null
null
['consumer_complaints']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,575
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-complaints-product This model was trained on the [CFPB](https://www.consumerfinance.gov/data-research/consumer-complaints/) dataset, also made available on the HuggingFace Datasets library. This model predicts the type of financial complaint based on the text provided. ## Model description A DistilBert Text Classification Model, with 18 possible classes to determine the nature of a financial customer complaint. ## Intended uses & limitations This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation: - **Infrastructure:** Terraform - **ML Ops:** HuggingFace (Datasets, Hub, Transformers) - **ML Explainability:** SHAP - **Cloud:** AWS - Model Hosting: Lambda - DB Backend: DynamoDB - Orchestration: Step-Functions - UI Hosting: EC2 - Routing: API Gateway - **UI:** Budibase ## Training and evaluation data consumer_complaints dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
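As an illustrative inference sketch (not part of the original card; the complaint text is invented, and the 18 label names are read from the model's own config rather than listed here):

```python
from transformers import pipeline

# Hedged sketch for the complaint-product classifier described in the card above.
classifier = pipeline("text-classification", model="Kayvane/distilbert-complaints-product")
print(classifier("I was charged twice for my mortgage payment and nobody will refund the duplicate charge."))
```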
Kayvane/distilbert-undersampled-noweights
Kayvane
distilbert
10
3
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
918
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-undersampled-noweights This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
Kayvane/distilbert-undersampled
Kayvane
distilbert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,544
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-undersampled This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0826 - Accuracy: 0.9811 - F1: 0.9810 - Recall: 0.9811 - Precision: 0.9812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.0959 | 0.2 | 2000 | 0.0999 | 0.9651 | 0.9628 | 0.9651 | 0.9655 | | 0.0618 | 0.41 | 4000 | 0.0886 | 0.9717 | 0.9717 | 0.9717 | 0.9731 | | 0.159 | 0.61 | 6000 | 0.0884 | 0.9719 | 0.9720 | 0.9719 | 0.9728 | | 0.0513 | 0.81 | 8000 | 0.0785 | 0.9782 | 0.9782 | 0.9782 | 0.9788 | | 0.0219 | 1.01 | 10000 | 0.0680 | 0.9779 | 0.9779 | 0.9779 | 0.9783 | | 0.036 | 1.22 | 12000 | 0.0745 | 0.9787 | 0.9787 | 0.9787 | 0.9792 | | 0.0892 | 1.42 | 14000 | 0.0675 | 0.9786 | 0.9786 | 0.9786 | 0.9789 | | 0.0214 | 1.62 | 16000 | 0.0760 | 0.9799 | 0.9798 | 0.9799 | 0.9801 | | 0.0882 | 1.83 | 18000 | 0.0800 | 0.9800 | 0.9800 | 0.9800 | 0.9802 | | 0.0234 | 2.03 | 20000 | 0.0720 | 0.9813 | 0.9813 | 0.9813 | 0.9815 | | 0.0132 | 2.23 | 22000 | 0.0738 | 0.9803 | 0.9803 | 0.9803 | 0.9805 | | 0.0136 | 2.43 | 24000 | 0.0847 | 0.9804 | 0.9804 | 0.9804 | 0.9806 | | 0.0119 | 2.64 | 26000 | 0.0826 | 0.9811 | 0.9810 | 0.9811 | 0.9812 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
Kceilord/autonlp-tc-13522454
Kceilord
distilbert
9
3
transformers
0
text-classification
true
false
false
null
['en']
['Kceilord/autonlp-data-tc']
null
0
0
0
0
0
0
0
['autonlp']
false
true
true
921
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 13522454 ## Validation Metrics - Loss: 0.31450966000556946 - Accuracy: 0.8461538461538461 - Precision: 0.8181818181818182 - Recall: 0.782608695652174 - AUC: 0.9369259032455604 - F1: 0.8 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kceilord/autonlp-tc-13522454 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Ketzu/koelectra-sts-v0.4
Ketzu
electra
16
3
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,533
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-sts-v0.4 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3368 - Pearson: 0.9303 - Spearmanr: 0.9287 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 0.0345 | 1.0 | 730 | 0.3368 | 0.9303 | 0.9287 | | 0.0343 | 2.0 | 1460 | 0.3368 | 0.9303 | 0.9287 | | 0.0337 | 3.0 | 2190 | 0.3368 | 0.9303 | 0.9287 | | 0.0345 | 4.0 | 2920 | 0.3368 | 0.9303 | 0.9287 | | 0.0347 | 5.0 | 3650 | 0.3368 | 0.9303 | 0.9287 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.10.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
Kevincp560/bart-base-finetuned-pubmed
Kevincp560
bart
13
5
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['pub_med_summarization_dataset']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,857
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 2.0277 - Rouge1: 9.3963 - Rouge2: 4.0473 - Rougel: 8.4526 - Rougelsum: 8.9659 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.3706 | 1.0 | 4000 | 2.1245 | 9.1644 | 3.8264 | 8.2223 | 8.718 | 20.0 | | 2.2246 | 2.0 | 8000 | 2.0811 | 9.023 | 3.7716 | 8.1453 | 8.5998 | 20.0 | | 2.1034 | 3.0 | 12000 | 2.0469 | 9.4412 | 4.0783 | 8.4949 | 8.9977 | 20.0 | | 2.0137 | 4.0 | 16000 | 2.0390 | 9.2261 | 3.9307 | 8.3154 | 8.7937 | 20.0 | | 1.9288 | 5.0 | 20000 | 2.0277 | 9.3963 | 4.0473 | 8.4526 | 8.9659 | 20.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
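Since the usage sections of this card are placeholders, here is a hedged inference sketch; the abstract text is invented for illustration, and the short `max_length` mirrors the Gen Len of 20 reported above:

```python
from transformers import pipeline

# Hedged usage sketch for the fine-tuned BART summarizer described in the card above.
summarizer = pipeline("summarization", model="Kevincp560/bart-base-finetuned-pubmed")
article = (
    "Recent studies have examined the role of the gut microbiota in metabolic disease. "
    "We review mechanisms linking microbial metabolites to insulin resistance and discuss "
    "their therapeutic implications."
)
print(summarizer(article, max_length=20)[0]["summary_text"])  # Gen Len reported in the card is 20
```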
Kevincp560/bart-large-cnn-finetuned-pubmed
Kevincp560
bart
13
3
transformers
0
text2text-generation
true
false
false
mit
null
['pub_med_summarization_dataset']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,905
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.8416 - Rouge1: 40.4866 - Rouge2: 16.7472 - Rougel: 24.9831 - Rougelsum: 36.4002 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.932 | 1.0 | 4000 | 1.8110 | 38.1151 | 15.2255 | 23.4286 | 34.2521 | 141.8905 | | 1.7001 | 2.0 | 8000 | 1.7790 | 39.8217 | 16.3042 | 24.649 | 35.831 | 142.0 | | 1.5 | 3.0 | 12000 | 1.7971 | 40.6108 | 17.0446 | 25.1977 | 36.5556 | 141.9865 | | 1.3316 | 4.0 | 16000 | 1.8106 | 40.0466 | 16.4851 | 24.7094 | 36.0998 | 141.9335 | | 1.1996 | 5.0 | 20000 | 1.8416 | 40.4866 | 16.7472 | 24.9831 | 36.4002 | 142.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
Kevincp560/bart-large-finetuned-pubmed
Kevincp560
bart
13
5
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['pub_med_summarization_dataset']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,871
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.8135 - Rouge1: 10.946 - Rouge2: 5.0933 - Rougel: 9.5608 - Rougelsum: 10.4259 - Gen Len: 19.0495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | 2.0861 | 1.0 | 4000 | 1.8909 | 8.7344 | 3.6919 | 7.8804 | 8.3305 | 20.0 | | 1.8996 | 2.0 | 8000 | 1.8261 | 10.2124 | 4.6212 | 8.9842 | 9.7417 | 17.632 | | 1.7459 | 3.0 | 12000 | 1.8160 | 9.4933 | 4.4117 | 8.3977 | 9.0758 | 16.4775 | | 1.6258 | 4.0 | 16000 | 1.8136 | 10.8248 | 5.0335 | 9.4286 | 10.3123 | 18.724 | | 1.5214 | 5.0 | 20000 | 1.8135 | 10.946 | 5.0933 | 9.5608 | 10.4259 | 19.0495 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
Khanh/bert-base-multilingual-cased-finetuned-squad
Khanh
bert
12
5
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,294
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1782 | 1.0 | 579 | 0.5258 | | 0.4938 | 2.0 | 1158 | 0.4639 | | 0.32 | 3.0 | 1737 | 0.4919 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
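The card above gives no usage example; a minimal question-answering sketch (question and context are invented) might look like:

```python
from transformers import pipeline

# Hedged inference sketch for the multilingual SQuAD fine-tune described in the card above.
qa = pipeline("question-answering", model="Khanh/bert-base-multilingual-cased-finetuned-squad")
result = qa(
    question="Who wrote the report?",
    context="The report was written by the research team in 2021 and published the following year.",
)
print(result["answer"], result["score"])
```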
Khanh/bert-base-multilingual-cased-finetuned-viquad
Khanh
bert
12
7
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,295
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-viquad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 65 | 2.5534 | | No log | 2.0 | 130 | 2.1165 | | No log | 3.0 | 195 | 1.9815 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Khanh/distilbert-base-multilingual-cased-finetuned-squad
Khanh
distilbert
12
5
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,312
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.923 | 1.0 | 579 | 0.8439 | | 0.8479 | 2.0 | 1158 | 0.6784 | | 0.6148 | 3.0 | 1737 | 0.6587 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Khanh/distilbert-base-multilingual-cased-finetuned-viquad
Khanh
distilbert
12
7
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,415
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-viquad This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 65 | 4.0975 | | No log | 2.0 | 130 | 3.9315 | | No log | 3.0 | 195 | 3.6742 | | No log | 4.0 | 260 | 3.4878 | | No log | 5.0 | 325 | 3.4241 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Khanh/xlm-roberta-base-finetuned-squad
Khanh
xlm-roberta
13
5
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,205
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-squad This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5539 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7665 | 1.0 | 2295 | 0.5231 | | 0.5236 | 2.0 | 4590 | 0.5539 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Khanh/xlm-roberta-base-finetuned-viquad
Khanh
xlm-roberta
11
7
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,206
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-viquad This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 259 | 2.9945 | | 3.3665 | 2.0 | 518 | 2.3761 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Kien/distilbert-base-uncased-finetuned-cola
Kien
distilbert
13
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,572
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5327 - Matthews Correlation: 0.5233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5314 | 1.0 | 535 | 0.4955 | 0.4270 | | 0.3545 | 2.0 | 1070 | 0.5327 | 0.5233 | | 0.2418 | 3.0 | 1605 | 0.6180 | 0.5132 | | 0.1722 | 4.0 | 2140 | 0.7344 | 0.5158 | | 0.1243 | 5.0 | 2675 | 0.8581 | 0.5196 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Kieran/distilbert-base-uncased-finetuned-cola
Kieran
distilbert
53
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,571
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1037 - Matthews Correlation: 0.9719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.2094 | 1.0 | 525 | 0.1069 | 0.9607 | | 0.0483 | 2.0 | 1050 | 0.0878 | 0.9719 | | 0.0296 | 3.0 | 1575 | 0.1263 | 0.9664 | | 0.0108 | 4.0 | 2100 | 0.1037 | 0.9719 | | 0.0096 | 5.0 | 2625 | 0.1065 | 0.9719 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
Kiran146/distilbert-base-uncased-finetuned-emotion
Kiran146
distilbert
12
8
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,338
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2224 - Accuracy: 0.9225 - F1: 0.9228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.84 | 1.0 | 250 | 0.3133 | 0.909 | 0.9070 | | 0.2459 | 2.0 | 500 | 0.2224 | 0.9225 | 0.9228 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
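As with the other Trainer-generated cards, usage is left as "More information needed"; a hedged inference sketch follows (the sentence is invented, and the label names come from the emotion dataset the card references):

```python
from transformers import pipeline

# Hedged sketch for the emotion classifier described in the card above.
classifier = pipeline("text-classification", model="Kiran146/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how happy this makes me!"))
```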
Kirili4ik/mbart_ruDialogSum
Kirili4ik
mbart
7
479
transformers
16
text2text-generation
true
false
false
null
['rus']
['IlyaGusev/gazeta', 'samsum', 'samsum_(translated_into_Russian)']
null
0
0
0
0
0
0
0
['mbart']
true
true
true
1,264
### 📝 Description MBart for Russian summarization, fine-tuned for **dialogue** summarization. This model was first fine-tuned by [Ilya Gusev](https://hf.co/IlyaGusev) on the [Gazeta dataset](https://huggingface.co/datasets/IlyaGusev/gazeta). We have **fine-tuned** that model on the [SamSum dataset]() **translated to Russian** using GoogleTranslateAPI 🤗 Moreover! We have implemented a **! telegram bot [@summarization_bot](https://t.me/summarization_bot) !** with the inference of this model. Add it to the chat and get summaries instead of dozens of spam messages!  🤗 ### ❓ How to use with code ```python from transformers import AutoTokenizer, MBartForConditionalGeneration # Download model and tokenizer model_name = "Kirili4ik/mbart_ruDialogSum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = MBartForConditionalGeneration.from_pretrained(model_name) model.eval() article_text = "..." input_ids = tokenizer( [article_text], max_length=600, padding="max_length", truncation=True, return_tensors="pt", )["input_ids"] output_ids = model.generate( input_ids=input_ids, top_k=0, num_beams=3, no_repeat_ngram_size=3 )[0] summary = tokenizer.decode(output_ids, skip_special_tokens=True) print(summary) ```
Kirili4ik/ruDialoGpt3-medium-finetuned-telegram
Kirili4ik
gpt2
10
397
transformers
10
conversational
true
false
false
null
['ru', 'ru-RU']
null
null
0
0
0
0
0
0
0
['conversational']
false
true
true
4,029
### 📝 Description DialoGPT trained on Russian language and fine tuned on my telegram chat. This model was created by [sberbank-ai](https://hf.co/sberbank-ai) and trained on Russian forums (see [Grossmend's model](https://hf.co/Grossmend/rudialogpt3_medium_based_on_gpt2)). You can find info about how it has been trained on [habr](https://habr.com/ru/company/icl_services/blog/548244/) (in Russian). I have created a **simple pipeline** and **fine tuned** that model on my own **exported telegram chat** (~30mb json). It is in fact very easy to get the data from telegram and fine tune a model. Therefore, I made a **colab tutorial** for it: https://colab.research.google.com/drive/1fnAVURjyZRK9VQg1Co_-SKUQnRES8l9R?usp=sharing ⚠️ Due to specifics of the data Hosted inference API may not work properly ⚠️ 🤗To try it use my [Spaces demo](https://huggingface.co/spaces/Kirili4ik/chat-with-Kirill)🤗 ### ❓ How to use with code ```python # Download model and tokenizer checkpoint = "Kirili4ik/ruDialoGpt3-medium-finetuned-telegram" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) model.eval() # util function to get expected len after tokenizing def get_length_param(text: str, tokenizer) -> str: tokens_count = len(tokenizer.encode(text)) if tokens_count <= 15: len_param = '1' elif tokens_count <= 50: len_param = '2' elif tokens_count <= 256: len_param = '3' else: len_param = '-' return len_param # util function to get next person number (1/0) for Machine or Human in the dialogue def get_user_param(text: dict, machine_name_in_chat: str) -> str: if text['from'] == machine_name_in_chat: return '1' # machine else: return '0' # human chat_history_ids = torch.zeros((1, 0), dtype=torch.int) while True: next_who = input("Who's phrase?\t") #input("H / G?") # Human or GPT # In case Human if next_who == "H" or next_who == "Human": input_user = input("===> Human: ") # encode the new user input, add parameters and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(f"|0|{get_length_param(input_user, tokenizer)}|" \ + input_user + tokenizer.eos_token, return_tensors="pt") # append the new user input tokens to the chat history chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if next_who == "G" or next_who == "GPT": next_len = input("Phrase len? 1/2/3/-\t") #input("Exp. len?(-/1/2/3): ") # encode the new user input, add parameters and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(f"|1|{next_len}|", return_tensors="pt") # append the new user input tokens to the chat history chat_history_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) # print(tokenizer.decode(chat_history_ids[-1])) # uncomment to see full gpt input # save previous len input_len = chat_history_ids.shape[-1] # generated a response; PS you can read about the parameters at hf.co/blog/how-to-generate chat_history_ids = model.generate( chat_history_ids, num_return_sequences=1, # use for more variants, but have to print [i] max_length=512, no_repeat_ngram_size=3, do_sample=True, top_k=50, top_p=0.9, temperature = 0.6, # 0 for greedy mask_token_id=tokenizer.mask_token_id, eos_token_id=tokenizer.eos_token_id, unk_token_id=tokenizer.unk_token_id, pad_token_id=tokenizer.pad_token_id, device='cpu' ) # pretty print last ouput tokens from bot print(f"===> GPT-3: {tokenizer.decode(chat_history_ids[:, input_len:][0], skip_special_tokens=True)}") ```
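The chat-loop snippet in the card above calls `torch`, `AutoTokenizer` and `AutoModelForCausalLM` without showing the corresponding imports; a minimal preamble, not shown in the original card, would be:

```python
# Imports assumed by the chat-loop snippet above (not part of the original card).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
```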
Kittipot/Wangchanberta-Depress-Finetuned
Kittipot
camembert
20
8
transformers
0
text-classification
true
false
false
null
null
['wisesight_sentiment']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,913
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wangchanberta-Depress-Finetuned This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the wisesight_sentiment dataset. It achieves the following results on the evaluation set: - Loss: 0.5910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0114 | 0.08 | 200 | 0.9538 | | 0.8617 | 0.15 | 400 | 0.8280 | | 0.7882 | 0.23 | 600 | 0.7472 | | 0.7132 | 0.3 | 800 | 0.7264 | | 0.7226 | 0.38 | 1000 | 0.7265 | | 0.6854 | 0.45 | 1200 | 0.6792 | | 0.621 | 0.53 | 1400 | 0.6451 | | 0.6093 | 0.61 | 1600 | 0.6364 | | 0.6099 | 0.68 | 1800 | 0.6128 | | 0.5766 | 0.76 | 2000 | 0.6388 | | 0.6033 | 0.83 | 2200 | 0.6148 | | 0.5966 | 0.91 | 2400 | 0.6440 | | 0.6208 | 0.98 | 2600 | 0.5910 | | 0.5178 | 1.06 | 2800 | 0.6340 | | 0.4863 | 1.13 | 3000 | 0.7177 | | 0.4852 | 1.21 | 3200 | 0.6766 | | 0.4711 | 1.29 | 3400 | 0.6739 | | 0.5203 | 1.36 | 3600 | 0.6429 | | 0.5167 | 1.44 | 3800 | 0.6539 | | 0.5053 | 1.51 | 4000 | 0.6172 | | 0.5076 | 1.59 | 4200 | 0.6053 | | 0.4704 | 1.66 | 4400 | 0.6474 | | 0.4807 | 1.74 | 4600 | 0.6225 | | 0.4792 | 1.82 | 4800 | 0.6282 | | 0.5177 | 1.89 | 5000 | 0.6011 | | 0.4839 | 1.97 | 5200 | 0.6231 | | 0.4155 | 2.04 | 5400 | 0.6668 | | 0.3923 | 2.12 | 5600 | 0.6886 | | 0.3713 | 2.19 | 5800 | 0.6895 | | 0.364 | 2.27 | 6000 | 0.6886 | | 0.3774 | 2.34 | 6200 | 0.7117 | | 0.4001 | 2.42 | 6400 | 0.7081 | | 0.3531 | 2.5 | 6600 | 0.7465 | | 0.3768 | 2.57 | 6800 | 0.7706 | | 0.3324 | 2.65 | 7000 | 0.7456 | | 0.3597 | 2.72 | 7200 | 0.7507 | | 0.3868 | 2.8 | 7400 | 0.7542 | | 0.4141 | 2.87 | 7600 | 0.7223 | | 0.3701 | 2.95 | 7800 | 0.7374 | | 0.3175 | 3.03 | 8000 | 0.7615 | | 0.2951 | 3.1 | 8200 | 0.7880 | | 0.2885 | 3.18 | 8400 | 0.8158 | | 0.2913 | 3.25 | 8600 | 0.8565 | | 0.2815 | 3.33 | 8800 | 0.8649 | | 0.2748 | 3.4 | 9000 | 0.8783 | | 0.2776 | 3.48 | 9200 | 0.8851 | | 0.2982 | 3.56 | 9400 | 0.8922 | | 0.2939 | 3.63 | 9600 | 0.8796 | | 0.2712 | 3.71 | 9800 | 0.8873 | | 0.2918 | 3.78 | 10000 | 0.8973 | | 0.3144 | 3.86 | 10200 | 0.8978 | | 0.2988 | 3.93 | 10400 | 0.8951 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.10.3
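The card above only documents training; a hedged inference sketch follows (the Thai sentence is a rough placeholder meaning roughly "today I feel very tired and discouraged", and the label set comes from the model's own config rather than this card):

```python
from transformers import pipeline

# Hedged sketch for the WangchanBERTa-based classifier described in the card above.
classifier = pipeline("text-classification", model="Kittipot/Wangchanberta-Depress-Finetuned")
print(classifier("วันนี้รู้สึกเหนื่อยและท้อแท้มาก"))
```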
KoboldAI/GPT-J-6B-Janeway
KoboldAI
gptj
10
1,700
transformers
1
text-generation
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,786
# GPT-J 6B - Janeway ## Model Description GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres. Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model uses the following model as base: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
KoboldAI/GPT-J-6B-Shinen
KoboldAI
gptj
10
4,929
transformers
2
text-generation
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,751
# GPT-J 6B - Shinen ## Model Description GPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.** ## Training data The training data contains user-generated stories from sexstories.com. All stories are tagged using the following way: ``` [Theme: <theme1>, <theme2> ,<theme3>] <Story goes here> ``` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen') >>> generator("She was staring at me", do_sample=True, min_length=50) [{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}] ``` ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model uses the following model as base: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
KoboldAI/GPT-J-6B-Skein
KoboldAI
gptj
8
2,627
transformers
8
text-generation
true
false
false
null
null
null
null
2
0
1
1
1
1
0
['text-generation']
false
true
true
6,021
# Model Card for GPT-J-6B-Skein # Model Details ## Model Description - **Developed by:** KoboldAI - **Shared by [Optional]:** KoboldAI - **Model type:** Text Generation - **Language(s) (NLP):** English - **License:** Apache License 2.0 - **Related Models:** [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite) - **Parent Model:** GPT-J - **Resources for more information:** - [GitHub Repo](https://github.com/kingoflolz/mesh-transformer-jax) - [Associated Model Doc](https://huggingface.co/docs/transformers/main/en/model_doc/gptj#transformers.GPTJForCausalLM) # Uses ## Direct Use This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You", such as: ``` You become aware of her breathing -- the slight expansion of her ribs, the soft exhalation -- natural, and yet somehow studied. "Ah -- by the way," she says, in a way that utterly fails to be casual, "have you seen the artist out there? -- My artist, that is." "No," you respond, uneasy. You open your mouth and close it again. > You ask about the experience of waking up ``` ## Downstream Use [Optional] More information needed ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. See the [GPT-J 6B model card](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite) for more information. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The data are mostly comprised of light novels from the dataset of the [KoboldAI/GPT-Neo-2.7B-Horni-LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) model and assorted interactive fiction. The dataset uses `[Themes: <comma-separated list of genres>]` for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult [this document](https://wandb.ai/ve-forbryderne/skein/runs/files/files/datasets/README.txt). ## Training Procedure ### Preprocessing The data were preprocessed using the Python package ftfy to eliminate as much as possible non-ASCII punctuation characters and possible encoding errors. 
The interactive fiction in the dataset also underwent deduplication since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis with the purpose of reformatting the actions commonly found in old text adventure games into more complete sentences. There was also some manual elimination of things such as "thank you for playing" messages and title messages. ### Speeds, Sizes, Times Training took approximately 14 hours in total, with the average speed being 5265 tokens per second. # Evaluation ## Testing Data, Factors & Metrics ### Testing Data More information needed ### Factors ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software https://github.com/kingoflolz/mesh-transformer-jax # Citation **BibTeX:** ``` @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] KoboldAI in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein") model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein") ``` </details>
KoboldAI/GPT-Neo-125M-AID
KoboldAI
gpt_neo
14
71
transformers
1
text-generation
true
false
false
null
null
null
null
0
0
0
0
1
1
0
[]
false
false
true
344
# GPT-Neo-125M-AID This model was finetuned by Henk717 on Google Colab, it contains text adventure tuning and its the smallest 'Adventure' model of its size. Because of its limited size the behavior is mostly suitable for testing text adventure gamemodes at fast speeds, for a coherent adventure you are better off using one of the 2.7B models.
KoboldAI/GPT-Neo-2.7B-Janeway
KoboldAI
gpt_neo
8
931
transformers
2
text-generation
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,644
# GPT-Neo 2.7B - Janeway ## Model Description GPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres. Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software: ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } ```
KoboldAI/GPT-Neo-2.7B-Picard
KoboldAI
gpt_neo
12
237
transformers
4
text-generation
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,303
# GPT-Neo 2.7B - Picard ## Model Description GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='mrseeker87/GPT-Neo-2.7B-Picard') >>> generator("Jean-Luc Picard", do_sample=True, min_length=50) [{'generated_text': 'Jean-Luc Picard, the captain of a Federation starship in command of one of Starfleet's few fulltime scientists.'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software: ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } ```
KoboldAI/GPT-Neo-2.7B-Shinen
KoboldAI
gpt_neo
12
4,511
transformers
6
text-generation
true
false
false
mit
['en']
null
null
1
0
0
1
0
0
0
[]
false
true
true
2,409
# GPT-Neo 2.7B - Shinen ## Model Description GPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.** ## Training data The training data contains user-generated stories from sexstories.com. All stories are tagged using the following way: ``` [Theme: <theme1>, <theme2> ,<theme3>] <Story goes here> ``` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Shinen') >>> generator("She was staring at me", do_sample=True, min_length=50) [{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning. GPT-Neo-Shinen will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software: ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } ```
KoboldAI/fairseq-dense-1.3B
KoboldAI
xglm
7
125
transformers
1
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
387
This is a Hugging Face transformers-compatible conversion of the original dense 1.3B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
KoboldAI/fairseq-dense-125M
KoboldAI
xglm
7
1,811
transformers
0
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
1
1
0
[]
false
true
true
387
This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
KoboldAI/fairseq-dense-13B
KoboldAI
xglm
7
1,027
transformers
8
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
386
This is a Hugging Face transformers-compatible conversion of the original dense 13B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
KoboldAI/fairseq-dense-2.7B-Janeway
KoboldAI
xglm
8
295
transformers
2
text-generation
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,288
# Fairseq-dense 2.7B - Janeway ## Model Description Fairseq-dense 2.7B-Janeway is a finetune created using Fairseq's MoE dense model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is identical as dataset used by GPT-Neo-2.7B-Janeway. Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-2.7B-Janeway') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). ### BibTeX entry and citation info ``` Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts ```
KoboldAI/fairseq-dense-2.7B
KoboldAI
xglm
7
260
transformers
1
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
387
This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
KoboldAI/fairseq-dense-355M
KoboldAI
xglm
7
122
transformers
2
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
1
1
0
[]
false
true
true
387
This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
KoboldAI/fairseq-dense-6.7B
KoboldAI
xglm
7
119
transformers
2
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
387
This is a Hugging Face transformers-compatible conversion of the original dense 6.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
KoichiYasuoka/SuPar-Kanbun
KoichiYasuoka
roberta
97
14
transformers
0
token-classification
true
false
false
mit
['lzh']
['universal_dependencies']
null
0
0
0
0
0
0
0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos']
false
true
true
2,931
[![Current PyPI packages](https://badge.fury.io/py/suparkanbun.svg)](https://pypi.org/project/suparkanbun/) # SuPar-Kanbun Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with [spaCy](https://spacy.io), [Transformers](https://huggingface.co/transformers/) and [SuPar](https://github.com/yzhangcs/parser). ## Basic usage ```py >>> import suparkanbun >>> nlp=suparkanbun.load() >>> doc=nlp("不入虎穴不得虎子") >>> print(type(doc)) <class 'spacy.tokens.doc.Doc'> >>> print(suparkanbun.to_conllu(doc)) # text = 不入虎穴不得虎子 1 不 不 ADV v,副詞,否定,無界 Polarity=Neg 2 advmod _ Gloss=not|SpaceAfter=No 2 入 入 VERB v,動詞,行為,移動 _ 0 root _ Gloss=enter|SpaceAfter=No 3 虎 虎 NOUN n,名詞,主体,動物 _ 4 nmod _ Gloss=tiger|SpaceAfter=No 4 穴 穴 NOUN n,名詞,固定物,地形 Case=Loc 2 obj _ Gloss=cave|SpaceAfter=No 5 不 不 ADV v,副詞,否定,無界 Polarity=Neg 6 advmod _ Gloss=not|SpaceAfter=No 6 得 得 VERB v,動詞,行為,得失 _ 2 parataxis _ Gloss=get|SpaceAfter=No 7 虎 虎 NOUN n,名詞,主体,動物 _ 8 nmod _ Gloss=tiger|SpaceAfter=No 8 子 子 NOUN n,名詞,人,関係 _ 6 obj _ Gloss=child|SpaceAfter=No >>> import deplacy >>> deplacy.render(doc) 不 ADV <════╗ advmod 入 VERB ═══╗═╝═╗ ROOT 虎 NOUN <╗ ║ ║ nmod 穴 NOUN ═╝<╝ ║ obj 不 ADV <════╗ ║ advmod 得 VERB ═══╗═╝<╝ parataxis 虎 NOUN <╗ ║ nmod 子 NOUN ═╝<╝ obj ``` `suparkanbun.load()` has two options `suparkanbun.load(BERT="roberta-classical-chinese-base-char",Danku=False)`. With the option `Danku=True` the pipeline tries to segment sentences automatically. Available `BERT` options are: * `BERT="roberta-classical-chinese-base-char"` utilizes [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char) (default) * `BERT="roberta-classical-chinese-large-char"` utilizes [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char) * `BERT="guwenbert-base"` utilizes [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base) * `BERT="guwenbert-large"` utilizes [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large) * `BERT="sikubert"` utilizes [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert) * `BERT="sikuroberta"` utilizes [SikuRoBERTa](https://huggingface.co/SIKU-BERT/sikuroberta) ## Installation for Linux ```sh pip3 install suparkanbun --user ``` ## Installation for Cygwin64 Make sure to get `python37-devel` `python37-pip` `python37-cython` `python37-numpy` `python37-wheel` `gcc-g++` `mingw64-x86_64-gcc-g++` `git` `curl` `make` `cmake` packages, and then: ```sh curl -L https://raw.githubusercontent.com/KoichiYasuoka/CygTorch/master/installer/supar.sh | sh pip3.7 install suparkanbun --no-build-isolation ``` ## Installation for Jupyter Notebook (Google Colaboratory) ```py !pip install suparkanbun ``` Try [notebook](https://colab.research.google.com/github/KoichiYasuoka/SuPar-Kanbun/blob/main/suparkanbun.ipynb) for Google Colaboratory. ## Author Koichi Yasuoka (安岡孝一)
KoichiYasuoka/bert-base-japanese-char-extended
KoichiYasuoka
bert
8
8
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm', 'wikipedia']
false
true
true
859
# bert-base-japanese-char-extended ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune `bert-base-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-base-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-base-japanese-wikipedia-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended") ```
KoichiYasuoka/bert-base-japanese-luw-upos
KoichiYasuoka
bert
9
55
transformers
1
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,335
# bert-base-japanese-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/bert-base-japanese-unidic-luw-upos
KoichiYasuoka
bert
8
17
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,482
# bert-base-japanese-unidic-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-japanese-unidic-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` [fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required. ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/bert-base-japanese-upos
KoichiYasuoka
bert
9
119
transformers
1
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,114
# bert-base-japanese-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-japanese-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/bert-base-thai-upos
KoichiYasuoka
bert
8
11
transformers
0
token-classification
true
false
false
apache-2.0
['th']
['universal_dependencies']
null
0
0
0
0
0
0
0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
820
# bert-base-thai-upos ## Model Description This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-base-thai-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/bert-large-japanese-char-extended
KoichiYasuoka
bert
8
16
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm', 'wikipedia']
false
true
true
861
# bert-large-japanese-char-extended ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-large-japanese-char](https://huggingface.co/cl-tohoku/bert-large-japanese-char). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune `bert-large-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-large-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-large-japanese-wikipedia-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended") ```
KoichiYasuoka/bert-large-japanese-luw-upos
KoichiYasuoka
bert
9
72
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,341
# bert-large-japanese-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-large-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/bert-large-japanese-unidic-luw-upos
KoichiYasuoka
bert
8
13
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,482
# bert-large-japanese-unidic-luw-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese](https://huggingface.co/cl-tohoku/bert-large-japanese). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-large-japanese-unidic-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` [fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required. ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/bert-large-japanese-upos
KoichiYasuoka
bert
9
11
transformers
2
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,120
# bert-large-japanese-upos ## Model Description This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-upos") s="国境の長いトンネルを抜けると雪国であった。" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(s,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/bert-large-japanese-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/chinese-bert-wwm-ext-upos
KoichiYasuoka
bert
8
19
transformers
4
token-classification
true
false
false
apache-2.0
['zh']
['universal_dependencies']
null
0
0
0
0
0
0
0
['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
880
# chinese-bert-wwm-ext-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-bert-wwm-ext-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-bert-wwm-ext-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/chinese-roberta-base-upos
KoichiYasuoka
bert
8
10
transformers
3
token-classification
true
false
false
apache-2.0
['zh']
['universal_dependencies']
null
0
0
0
0
0
0
0
['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
886
# chinese-roberta-base-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-base-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-roberta-base-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/chinese-roberta-large-upos
KoichiYasuoka
bert
8
8
transformers
0
token-classification
true
false
false
apache-2.0
['zh']
['universal_dependencies']
null
0
0
0
0
0
0
0
['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
902
# chinese-roberta-large-upos ## Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-roberta-large-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-base-english-upos
KoichiYasuoka
roberta
10
2,169
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['en']
['universal_dependencies']
null
0
0
0
0
0
0
0
['english', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
859
# roberta-base-english-upos ## Model Description This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-base](https://huggingface.co/roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-english-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-english-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-english-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-base-japanese-aozora-char
KoichiYasuoka
roberta
8
5
transformers
1
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm']
false
true
true
840
# roberta-base-japanese-aozora-char ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-base-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char") ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
KoichiYasuoka/roberta-base-japanese-aozora
KoichiYasuoka
roberta
8
23
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm']
false
true
true
881
# roberta-base-japanese-aozora ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-base-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-ud-goeswith), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora") ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
KoichiYasuoka/roberta-base-japanese-char-luw-upos
KoichiYasuoka
roberta
9
9
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,407
# roberta-base-japanese-char-luw-upos ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-char-luw-upos") pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple") nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)] print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-char-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-base-japanese-luw-upos
KoichiYasuoka
roberta
9
38
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,323
# roberta-base-japanese-luw-upos ## Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-japanese-luw-upos") pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple") nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)] print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-japanese-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` ## Reference 安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-base-thai-char-upos
KoichiYasuoka
roberta
9
7
transformers
0
token-classification
true
false
false
apache-2.0
['th']
['universal_dependencies']
null
0
0
0
0
0
0
0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,120
# roberta-base-thai-char-upos ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-char](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos") s="หลายหัวดีกว่าหัวเดียว" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ``` import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-thai-char-upos") print(nlp("หลายหัวดีกว่าหัวเดียว")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-base-thai-char
KoichiYasuoka
roberta
8
604
transformers
0
fill-mask
true
false
false
apache-2.0
['th']
null
null
0
0
0
0
0
0
0
['thai', 'masked-lm', 'wikipedia']
false
true
true
676
# roberta-base-thai-char ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts with character-wise embeddings to use BertTokenizerFast. You can fine-tune `roberta-base-thai-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char-ud-goeswith), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-char") ```
KoichiYasuoka/roberta-base-thai-spm-upos
KoichiYasuoka
roberta
9
1,021
transformers
1
token-classification
true
false
false
apache-2.0
['th']
['universal_dependencies']
null
0
0
0
0
0
0
0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,114
# roberta-base-thai-spm-upos ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-upos") s="หลายหัวดีกว่าหัวเดียว" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ``` import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-thai-spm-upos") print(nlp("หลายหัวดีกว่าหัวเดียว")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-base-thai-spm
KoichiYasuoka
roberta
8
32
transformers
0
fill-mask
true
false
false
apache-2.0
['th']
null
null
0
0
0
0
0
0
0
['thai', 'masked-lm', 'wikipedia']
false
true
true
610
# roberta-base-thai-spm ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune `roberta-base-thai-spm` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-spm") ```
KoichiYasuoka/roberta-base-thai-syllable-upos
KoichiYasuoka
roberta
9
7
transformers
0
token-classification
true
false
false
apache-2.0
['th']
['universal_dependencies']
null
0
0
0
0
0
0
0
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
false
true
true
1,144
# roberta-base-thai-syllable-upos ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-syllable](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable-upos") s="หลายหัวดีกว่าหัวเดียว" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ``` import esupar nlp=esupar.load("KoichiYasuoka/roberta-base-thai-syllable-upos") print(nlp("หลายหัวดีกว่าหัวเดียว")) ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-base-thai-syllable
KoichiYasuoka
roberta
8
5
transformers
0
fill-mask
true
false
false
apache-2.0
['th']
null
null
0
0
0
0
0
0
0
['thai', 'masked-lm', 'wikipedia']
false
true
true
821
# roberta-base-thai-syllable ## Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from [wangchanberta-base-wiki-syllable](https://huggingface.co/airesearch/wangchanberta-base-wiki-syllable). Character-embeddings are modified to use BertTokenizerFast. You can fine-tune `roberta-base-thai-syllable` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-ud-goeswith), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable") ```
KoichiYasuoka/roberta-classical-chinese-base-char
KoichiYasuoka
roberta
7
44
transformers
3
fill-mask
true
false
false
apache-2.0
['lzh']
null
null
0
0
0
0
0
0
0
['classical chinese', 'literary chinese', 'ancient chinese', 'masked-lm']
false
true
true
1,101
# roberta-classical-chinese-base-char ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base). Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune `roberta-classical-chinese-base-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char") ``` ## See Also [SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation
KoichiYasuoka
roberta
7
11
transformers
1
token-classification
true
false
false
apache-2.0
['lzh']
null
null
0
0
0
0
0
0
0
['classical chinese', 'literary chinese', 'ancient chinese', 'sentence segmentation', 'token-classification']
false
true
true
1,251
# roberta-classical-chinese-base-sentence-segmentation ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for single-character sentence with token-class "S"). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation") s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎" p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p))) ``` ## Reference Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
KoichiYasuoka/roberta-classical-chinese-base-upos
KoichiYasuoka
roberta
8
14
transformers
0
token-classification
true
false
false
apache-2.0
['lzh']
['universal_dependencies']
null
0
0
0
0
0
0
0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,234
# roberta-classical-chinese-base-upos ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-base-upos") ``` ## Reference Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-classical-chinese-large-char
KoichiYasuoka
roberta
7
27
transformers
0
fill-mask
true
false
false
apache-2.0
['lzh']
null
null
0
0
0
0
0
0
0
['classical chinese', 'literary chinese', 'ancient chinese', 'masked-lm']
false
true
true
1,110
# roberta-classical-chinese-large-char ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-large](https://huggingface.co/ethanyt/guwenbert-large). Character-embeddings are enhanced into traditional/simplified characters. You can fine-tune `roberta-classical-chinese-large-char` for downstream tasks, such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-char") ``` ## See Also [SuPar-Kanbun](https://github.com/KoichiYasuoka/SuPar-Kanbun): Tokenizer POS-tagger and Dependency-parser for Classical Chinese
KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation
KoichiYasuoka
roberta
7
43
transformers
1
token-classification
true
false
false
apache-2.0
['lzh']
null
null
0
0
0
0
0
0
0
['classical chinese', 'literary chinese', 'ancient chinese', 'sentence segmentation', 'token-classification']
false
true
true
1,256
# roberta-classical-chinese-large-sentence-segmentation

## Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts for sentence segmentation, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every segmented sentence begins with token-class "B" and ends with token-class "E" (except for a single-character sentence, which is tagged with token-class "S").

## How to Use

```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-sentence-segmentation")
s="子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print("".join(c+"。" if q=="E" or q=="S" else c for c,q in zip(s,p)))
```

## Reference

Koichi Yasuoka: [Sentence Segmentation of Classical Chinese Texts Using Transformers and BERT/RoBERTa Models](http://hdl.handle.net/2433/266539), IPSJ Symposium Series, Vol.2021, No.1 (December 2021), pp.104-109.
KoichiYasuoka/roberta-classical-chinese-large-upos
KoichiYasuoka
roberta
8
9
transformers
0
token-classification
true
false
false
apache-2.0
['lzh']
['universal_dependencies']
null
0
0
0
0
0
0
0
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,239
# roberta-classical-chinese-large-upos

## Model Description

This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-large-upos")
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-large-upos")
```

## Reference

Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28.

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-large-english-upos
KoichiYasuoka
roberta
10
1,915
transformers
1
token-classification
true
false
false
cc-by-sa-4.0
['en']
['universal_dependencies']
null
0
0
0
0
0
0
0
['english', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
865
# roberta-large-english-upos

## Model Description

This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-large](https://huggingface.co/roberta-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-english-upos")
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
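As a quick sanity check of the tagger, the esupar loader can be called on a sentence directly, following the pattern of the author's Japanese UPOS cards. This sketch is not from the original card and the example sentence is an illustrative assumption.

```python
import esupar

# load the tagger and print the parsed result for a sample English sentence
nlp = esupar.load("KoichiYasuoka/roberta-large-english-upos")
print(nlp("It was a bright cold day in April."))
```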
KoichiYasuoka/roberta-large-japanese-aozora-char
KoichiYasuoka
roberta
8
3
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm']
false
true
true
846
# roberta-large-japanese-aozora-char

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts with a character tokenizer. You can fine-tune `roberta-large-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-head), and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
```

## Reference

安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
KoichiYasuoka/roberta-large-japanese-aozora
KoichiYasuoka
roberta
8
3
transformers
2
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm']
false
true
true
887
# roberta-large-japanese-aozora

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-goeswith), and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
```

## Reference

安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
KoichiYasuoka/roberta-large-japanese-char-luw-upos
KoichiYasuoka
roberta
9
12
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,413
# roberta-large-japanese-char-luw-upos

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

## Reference

安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-large-japanese-luw-upos
KoichiYasuoka
roberta
9
17
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,328
# roberta-large-japanese-luw-upos

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

## Reference

安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-small-japanese-aozora-char
KoichiYasuoka
roberta
8
4
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm']
false
true
true
617
# roberta-small-japanese-aozora-char

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts with a character tokenizer. You can fine-tune `roberta-small-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-char-luw-upos), dependency-parsing, and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
```
KoichiYasuoka/roberta-small-japanese-aozora
KoichiYasuoka
roberta
8
5
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm']
false
true
true
654
# roberta-small-japanese-aozora

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-small-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-luw-upos), dependency-parsing, and so on.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
```
KoichiYasuoka/roberta-small-japanese-char-luw-upos
KoichiYasuoka
roberta
9
13
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,207
# roberta-small-japanese-char-luw-upos

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/roberta-small-japanese-luw-upos
KoichiYasuoka
roberta
9
32
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,177
# roberta-small-japanese-luw-upos

## Model Description

This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
KoichiYasuoka/xlm-roberta-base-english-upos
KoichiYasuoka
xlm-roberta
9
1,791
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['en']
['universal_dependencies']
null
0
0
0
0
0
0
0
['english', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
910
# xlm-roberta-base-english-upos

## Model Description

This is an XLM-RoBERTa model pre-trained with [UD_English-EWT](https://github.com/UniversalDependencies/UD_English-EWT) for POS-tagging and dependency-parsing, derived from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
```

or

```py
import esupar
nlp=esupar.load("KoichiYasuoka/xlm-roberta-base-english-upos")
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
Konstantinos/BERTaTweetGR
Konstantinos
roberta
8
9
transformers
0
fill-mask
true
false
true
null
['el']
null
null
0
0
0
0
0
0
0
[]
false
true
true
599
# A lite RoBERTa fill-mask model trained mostly on Greek tweets

The training dataset of this model consists of 23 million Greek tweets from approximately 5,000 users in total, spanning 2008 to 2018. The model was trained to support the work in the paper [Multimodal Hate Speech Detection in Greek Social Media](https://www.mdpi.com/2414-4088/5/7/34).

## Load the pretrained model

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Konstantinos/BERTaTweetGR")
model = AutoModel.from_pretrained("Konstantinos/BERTaTweetGR")
```
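Since the checkpoint is tagged as a fill-mask model, it can also be queried through the fill-mask pipeline. The sketch below is not from the original card; the Greek example sentence is an illustrative assumption, and the mask token is taken from the loaded tokenizer to stay format-agnostic.

```python
from transformers import pipeline

# query the masked-LM head for the most likely completions of a short Greek sentence
fill_mask = pipeline("fill-mask", model="Konstantinos/BERTaTweetGR")
for pred in fill_mask(f"καλημέρα σε όλους, τι {fill_mask.tokenizer.mask_token} σήμερα"):
    print(pred["sequence"], pred["score"])
```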
Kookly/Kooklybots
Kookly
null
2
0
null
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
231
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelForCausalLM.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
```
Kowsher/bangla-bert
Kowsher
bert
5
49
transformers
1
fill-mask
true
false
false
null
['bn']
['BanglaLM dataset']
null
0
0
0
0
0
0
0
['Bert base Bangla', 'Bengali Bert', 'Bengali lm', 'Bangla Base Bert', 'Bangla Bert language model', 'Bangla Bert']
false
true
true
2,365
# Bangla BERT Base

Here we publish a pretrained Bangla BERT language model, **bangla-bert**, which is now available in the Hugging Face model hub. [bangla-bert](https://github.com/Kowsher/bert-base-bangla) is a pretrained Bangla language model based on the masked language modeling described in [BERT](https://arxiv.org/abs/1810.04805) and the GitHub [repository](https://github.com/google-research/bert).

## Corpus Details

We trained the Bangla BERT language model using the BanglaLM dataset from Kaggle: [BanglaLM](https://www.kaggle.com/gakowsher/bangla-language-model-dataset). There are 3 versions of the dataset, totalling almost 40 GB. After downloading the dataset, we proceeded with masked-LM pretraining.

**bangla-bert Tokenizer**

```py
from transformers import AutoTokenizer, AutoModel

bnbert_tokenizer = AutoTokenizer.from_pretrained("Kowsher/bangla-bert")
text = "খাঁটি সোনার চাইতে খাঁটি আমার দেশের মাটি"
bnbert_tokenizer.tokenize(text)
# output: ['খাটি', 'সে', '##ানার', 'চাইতে', 'খাটি', 'আমার', 'দেশের', 'মাটি']
```

**MASK Generation**

Here, we can use the Bangla BERT base model for masked language modeling:

```py
from transformers import BertForMaskedLM, BertTokenizer, pipeline

model = BertForMaskedLM.from_pretrained("Kowsher/bangla-bert")
tokenizer = BertTokenizer.from_pretrained("Kowsher/bangla-bert")

nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"আমি বাংলার গান {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'আমি বাংলার গান লিখি', 'score': 0.17955434322357178, 'token': 24749, 'token_str': 'লিখি'}

nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"তুই রাজাকার তুই {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'তুই রাজাকার তুই রাজাকার', 'score': 0.9975168704986572, 'token': 13401, 'token_str': 'রাজাকার'}

nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"বাংলা আমার {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'বাংলা আমার অহংকার', 'score': 0.5679506063461304, 'token': 19009, 'token_str': 'অহংকার'}
```

**Cite this work**

M. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar and T. Koshiba, "Bangla-BERT: Transformer-based Efficient Model for Transfer Learning and Language Understanding," in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3197662.

## Author

[Kowsher](http://kowsher.org/)
Krassy/xlm-roberta-base-finetuned-marc-en
Krassy
xlm-roberta
12
4
transformers
1
text-classification
true
false
false
mit
null
['amazon_reviews_multi']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,271
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:
- Loss: 0.9005
- Mae: 0.5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.108         | 1.0   | 235  | 0.9801          | 0.5610 |
| 0.9592        | 2.0   | 470  | 0.9005          | 0.5    |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
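The card itself has no usage section, so here is a minimal inference sketch. It assumes the fine-tuned classification head was exported with the checkpoint; the review text is an illustrative assumption, and the returned label names depend on how the config maps star ratings.

```python
from transformers import pipeline

# score an English product review with the fine-tuned multilingual checkpoint
classifier = pipeline("text-classification", model="Krassy/xlm-roberta-base-finetuned-marc-en")
print(classifier("This camera stopped working after two weeks."))
```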
Kriemhild/imdb
Kriemhild
null
2
0
null
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
744
```python
from sagemaker.huggingface import HuggingFaceModel
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'bigscience/T0pp',
    'HF_TASK': 'text2text-generation'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.6.1',
    pytorch_version='1.7.1',
    py_version='py36',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,  # number of instances
    instance_type='ml.m5.xlarge'  # ec2 instance type
)

predictor.predict({
    'inputs': "The answer to the universe is"
})
```
KrishParikh/gpt2_imdb_movie_plots
KrishParikh
gpt2
14
367
transformers
0
text-generation
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,009
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-plot

This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.8856

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.13.0.dev0
- Pytorch 1.9.0
- Datasets 1.15.1
- Tokenizers 0.10.3
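The card lists no usage example, so here is a minimal generation sketch under the assumption that the checkpoint loads as a standard GPT-2 causal LM; the prompt text is an illustrative assumption.

```python
from transformers import pipeline

# sample a movie-plot-style continuation from the fine-tuned checkpoint
generator = pipeline("text-generation", model="KrishParikh/gpt2_imdb_movie_plots")
print(generator("A retired detective returns to the city when",
                max_length=60, do_sample=True)[0]["generated_text"])
```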
Kumicho/distilbert-base-uncased-finetuned-cola
Kumicho
distilbert
15
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,276
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.7758
- Matthews Correlation: 0.5259

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1926        | 1.0   | 535  | 0.7758          | 0.5259               |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
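Since the card gives no usage section, here is a minimal sketch of CoLA-style acceptability scoring with this checkpoint. The example sentences are illustrative assumptions, and the returned label names (for instance LABEL_0/LABEL_1) depend on the exported config.

```python
from transformers import pipeline

# score two sentences for grammatical acceptability (the GLUE CoLA task)
classifier = pipeline("text-classification", model="Kumicho/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by the author."))
print(classifier("The book the author by written was."))
```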
Kuray107/librispeech-100h-supervised
Kuray107
wav2vec2
7
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,300
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# librispeech-100h-supervised

This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0345

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.8277        | 0.42  | 500   | 2.9071          | 1.0    |
| 2.0261        | 0.84  | 1000  | 0.3060          | 0.2496 |
| 0.2181        | 1.26  | 1500  | 0.1172          | 0.0873 |
| 0.1255        | 1.68  | 2000  | 0.0894          | 0.0637 |
| 0.0971        | 2.1   | 2500  | 0.0821          | 0.0560 |
| 0.078         | 2.52  | 3000  | 0.0751          | 0.0500 |
| 0.0706        | 2.94  | 3500  | 0.0721          | 0.0456 |
| 0.0609        | 3.36  | 4000  | 0.0755          | 0.0464 |
| 0.0572        | 3.78  | 4500  | 0.0705          | 0.0431 |
| 0.0528        | 4.2   | 5000  | 0.0715          | 0.0423 |
| 0.0481        | 4.62  | 5500  | 0.0691          | 0.0403 |
| 0.0471        | 5.04  | 6000  | 0.0743          | 0.0401 |
| 0.0412        | 5.46  | 6500  | 0.0757          | 0.0399 |
| 0.0416        | 5.88  | 7000  | 0.0688          | 0.0378 |
| 0.0391        | 6.3   | 7500  | 0.0704          | 0.0383 |
| 0.0367        | 6.72  | 8000  | 0.0742          | 0.0387 |
| 0.0349        | 7.14  | 8500  | 0.0732          | 0.0388 |
| 0.033         | 7.56  | 9000  | 0.0719          | 0.0374 |
| 0.0327        | 7.98  | 9500  | 0.0750          | 0.0369 |
| 0.0292        | 8.4   | 10000 | 0.0734          | 0.0368 |
| 0.0303        | 8.82  | 10500 | 0.0733          | 0.0365 |
| 0.0283        | 9.24  | 11000 | 0.0766          | 0.0357 |
| 0.0269        | 9.66  | 11500 | 0.0761          | 0.0350 |
| 0.0268        | 10.08 | 12000 | 0.0802          | 0.0359 |
| 0.0245        | 10.42 | 12500 | 0.0758          | 0.0354 |
| 0.023         | 10.84 | 13000 | 0.0775          | 0.0349 |
| 0.0186        | 11.26 | 13500 | 0.0817          | 0.0355 |
| 0.0176        | 11.68 | 14000 | 0.0853          | 0.0354 |
| 0.0163        | 12.1  | 14500 | 0.0880          | 0.0347 |
| 0.0156        | 12.52 | 15000 | 0.0864          | 0.0357 |
| 0.0141        | 12.94 | 15500 | 0.0897          | 0.0355 |
| 0.0134        | 13.36 | 16000 | 0.0915          | 0.0349 |
| 0.013         | 13.78 | 16500 | 0.0928          | 0.0350 |
| 0.0097        | 14.2  | 17000 | 0.0955          | 0.0345 |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
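The card has no inference example, so here is a minimal transcription sketch. It assumes the repository ships processor/tokenizer files alongside the CTC model weights; `sample.wav` is a placeholder path for a 16 kHz mono recording.

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Kuray107/librispeech-100h-supervised")
model = Wav2Vec2ForCTC.from_pretrained("Kuray107/librispeech-100h-supervised")

# read a 16 kHz mono waveform and run greedy CTC decoding
speech, sample_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```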
Kuray107/timit-5percent-supervised
Kuray107
wav2vec2
7
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,591
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# timit-5percent-supervised

This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6615
- Wer: 0.2788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.3773        | 33.33  | 500  | 2.9693          | 1.0    |
| 1.4746        | 66.67  | 1000 | 0.5050          | 0.3359 |
| 0.1067        | 100.0  | 1500 | 0.5981          | 0.3054 |
| 0.0388        | 133.33 | 2000 | 0.6192          | 0.2712 |
| 0.0244        | 166.67 | 2500 | 0.6392          | 0.2776 |
| 0.018         | 200.0  | 3000 | 0.6615          | 0.2788 |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
Kuray107/timit-supervised
Kuray107
wav2vec2
7
7
transformers
0
automatic-speech-recognition
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,935
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# timit-supervised

This model is a fine-tuned version of [Experiments/single_dataset/timit-supervised/checkpoint-3500](https://huggingface.co/Experiments/single_dataset/timit-supervised/checkpoint-3500) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1272
- Wer: 0.0532

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0554        | 1.77  | 500  | 0.1310          | 0.0697 |
| 0.0509        | 3.53  | 1000 | 0.1497          | 0.0710 |
| 0.038         | 5.3   | 1500 | 0.1190          | 0.0659 |
| 0.0328        | 7.07  | 2000 | 0.0926          | 0.0596 |
| 0.0247        | 8.83  | 2500 | 0.0873          | 0.0570 |
| 0.0229        | 10.6  | 3000 | 0.0890          | 0.0532 |
| 0.0183        | 12.37 | 3500 | 0.0969          | 0.0532 |
| 0.0326        | 14.13 | 4000 | 0.0809          | 0.0469 |
| 0.03          | 15.9  | 4500 | 0.0758          | 0.0444 |
| 0.0264        | 17.67 | 5000 | 0.0973          | 0.0520 |
| 0.0244        | 19.43 | 5500 | 0.1272          | 0.0532 |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
Kuray107/wsj0-full-supervised
Kuray107
wav2vec2
7
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,293
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wsj0-full-supervised

This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0623
- Wer: 0.0343

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.517         | 0.86  | 500   | 2.9475          | 1.0    |
| 2.2387        | 1.72  | 1000  | 0.4004          | 0.3498 |
| 0.3081        | 2.57  | 1500  | 0.1362          | 0.1159 |
| 0.1744        | 3.43  | 2000  | 0.1125          | 0.0929 |
| 0.1285        | 4.29  | 2500  | 0.0894          | 0.0727 |
| 0.1015        | 5.15  | 3000  | 0.0852          | 0.0642 |
| 0.0811        | 6.0   | 3500  | 0.0789          | 0.0614 |
| 0.0748        | 6.86  | 4000  | 0.0746          | 0.0529 |
| 0.0639        | 7.72  | 4500  | 0.0714          | 0.0481 |
| 0.0606        | 8.58  | 5000  | 0.0698          | 0.0489 |
| 0.0525        | 9.43  | 5500  | 0.0747          | 0.0464 |
| 0.0489        | 10.29 | 6000  | 0.0594          | 0.0396 |
| 0.0419        | 11.15 | 6500  | 0.0600          | 0.0359 |
| 0.0414        | 12.01 | 7000  | 0.0612          | 0.0412 |
| 0.0383        | 12.86 | 7500  | 0.0676          | 0.0392 |
| 0.0352        | 13.72 | 8000  | 0.0626          | 0.0388 |
| 0.034         | 14.58 | 8500  | 0.0699          | 0.0372 |
| 0.0309        | 15.44 | 9000  | 0.0807          | 0.0420 |
| 0.0295        | 16.3  | 9500  | 0.0796          | 0.0396 |
| 0.0273        | 17.15 | 10000 | 0.0716          | 0.0376 |
| 0.0271        | 18.01 | 10500 | 0.0657          | 0.0384 |
| 0.0251        | 18.87 | 11000 | 0.0585          | 0.0351 |
| 0.024         | 19.73 | 11500 | 0.0557          | 0.0347 |
| 0.0252        | 20.58 | 12000 | 0.0609          | 0.0327 |
| 0.0231        | 21.44 | 12500 | 0.0720          | 0.0368 |
| 0.0202        | 22.3  | 13000 | 0.0625          | 0.0343 |
| 0.0195        | 23.16 | 13500 | 0.0635          | 0.0372 |
| 0.0201        | 24.01 | 14000 | 0.0582          | 0.0335 |
| 0.0183        | 24.87 | 14500 | 0.0562          | 0.0343 |
| 0.0183        | 25.73 | 15000 | 0.0629          | 0.0335 |
| 0.0175        | 26.59 | 15500 | 0.0593          | 0.0323 |
| 0.017         | 27.44 | 16000 | 0.0631          | 0.0339 |
| 0.0162        | 28.3  | 16500 | 0.0597          | 0.0335 |
| 0.0169        | 29.16 | 17000 | 0.0623          | 0.0343 |

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
Kyoungmin/beauty-base-KLCP
Kyoungmin
bert
8
2
transformers
0
feature-extraction
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
205
This is a **Korean** BERT masked-LM model pretrained and adapted to the **beauty** domain (BertForMaskedLM). About 60,000 reviews were used. It was fine-tuned from the _beomi/kcbert-base_ model weights. Enjoy!
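The card has no usage snippet, so here is a minimal fill-mask sketch. It assumes the uploaded weights include the masked-LM head described above and that the kcbert-style BertTokenizer ships with the repo; the Korean example sentence is an illustrative assumption.

```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline

model = BertForMaskedLM.from_pretrained("Kyoungmin/beauty-base-KLCP")
tokenizer = BertTokenizer.from_pretrained("Kyoungmin/beauty-base-KLCP")

# ask the model to fill in a masked word in a beauty-review-style sentence
nlp = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for pred in nlp(f"이 제품은 향이 정말 {nlp.tokenizer.mask_token}"):
    print(pred["sequence"], pred["score"])
```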
Kyoungmin/beauty-base-KLCP2
Kyoungmin
bert
8
4
transformers
0
fill-mask
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
217
This is the **second** BertForMaskedLM model pretrained on the **Korean beauty** domain. About 120,000 reviews were used. It was trained from _beomi/kcbert-base_. Check out _Kyoungmin/beauty-base-KLCP_ for a smaller model!
LIAMF-USP/aristo-roberta
LIAMF-USP
roberta
11
51
transformers
0
multiple-choice
true
true
true
mit
['english']
['race', 'ai2_arc', 'openbookqa']
null
1
1
0
0
0
0
0
[]
false
true
true
4,077
# Roberta Large Fine Tuned on RACE

## Model description

This model follows the implementation by the Allen AI team of the [Aristo Roberta V7 Model](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0) submitted to the [ARC Challenge](https://leaderboard.allenai.org/arc/submissions/public).

#### How to use

```python
import logging

import datasets
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice

MAX_SEQ_LENGTH = 256  # max_length from the hyperparameter table below

tokenizer = RobertaTokenizer.from_pretrained(
    "LIAMF-USP/aristo-roberta")
model = RobertaForMultipleChoice.from_pretrained(
    "LIAMF-USP/aristo-roberta")
dataset = datasets.load_dataset(
    "arc",
    split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]

example = training_examples[0]
example_id = example["example_id"]
question = example["question"]
label_example = example["answer"]
options = example["options"]
if label_example in ["A", "B", "C", "D", "E"]:
    label_map = {label: i for i, label in enumerate(
        ["A", "B", "C", "D", "E"])}
elif label_example in ["1", "2", "3", "4", "5"]:
    label_map = {label: i for i, label in enumerate(
        ["1", "2", "3", "4", "5"])}
else:
    print(f"{label_example} not found")

# pad the option list to five entries with empty options
while len(options) < 5:
    empty_option = {}
    empty_option['option_context'] = ''
    empty_option['option_text'] = ''
    options.append(empty_option)

choices_inputs = []
for ending_idx, option in enumerate(options):
    ending = option["option_text"]
    context = option["option_context"]
    if question.find("_") != -1:
        # fill-in-the-blank questions
        question_option = question.replace("_", ending)
    else:
        question_option = question + " " + ending

    inputs = tokenizer(
        context,
        question_option,
        add_special_tokens=True,
        max_length=MAX_SEQ_LENGTH,
        padding="max_length",
        truncation=True,
        return_overflowing_tokens=False,
    )

    if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
        logging.warning(f"Question: {example_id} with option {ending_idx} was truncated")
    choices_inputs.append(inputs)

label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
    [x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure, just one of them is
    # necessary to check
    if "attention_mask" in choices_inputs[0]
    else None
)
token_type_ids = (
    [x["token_type_ids"] for x in choices_inputs]
    if "token_type_ids" in choices_inputs[0]
    else None
)
example_encoded = {
    "example_id": example_id,
    "input_ids": input_ids,
    "attention_mask": attention_mask,
    "token_type_ids": token_type_ids,
    "label": label
}

output = model(**example_encoded)
```

## Training data

The training data was the same as proposed [here](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0). The only difference was the hyperparameters of the RACE fine-tuned model, which were reported [here](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race#eval-results).

## Training procedure

It was necessary to preprocess the data with the method exemplified for a single instance in the _How to use_ section. The used hyperparameters were the following:

| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 16 |
| train_batch_size | 4 |
| fp16 | True |
| gradient_accumulation_steps | 4 |
| learning_rate | 0.00001 |
| warmup_steps | 0.06 |
| max_length | 256 |
| epochs | 4 |

The other parameters were the default ones from [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) and [Trainer Arguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments).

## Eval results:

| Dataset Acc | Challenge Test |
|:----:|:----:|
| | 65.358 |

**The model was trained with a TITAN RTX**
LIAMF-USP/roberta-large-finetuned-race
LIAMF-USP
roberta
11
222
transformers
4
multiple-choice
true
true
true
mit
['english']
['race']
null
1
1
0
0
0
0
0
[]
false
true
true
3,046
# Roberta Large Fine Tuned on RACE

## Model description

This model is a fine-tuned version of Roberta-large applied to RACE.

#### How to use

```python
import datasets
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice

MAX_SEQ_LENGTH = 512  # max_length from the hyperparameter table below

tokenizer = RobertaTokenizer.from_pretrained(
    "LIAMF-USP/roberta-large-finetuned-race")
model = RobertaForMultipleChoice.from_pretrained(
    "LIAMF-USP/roberta-large-finetuned-race")
dataset = datasets.load_dataset(
    "race",
    "all",
    split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]

example = training_examples[0]
example_id = example["example_id"]
question = example["question"]
context = example["article"]
options = example["options"]
label_example = example["answer"]
label_map = {label: i for i, label in enumerate(["A", "B", "C", "D"])}

choices_inputs = []
for ending_idx, (_, ending) in enumerate(
        zip(context, options)):
    if question.find("_") != -1:
        # fill-in-the-blank questions
        question_option = question.replace("_", ending)
    else:
        question_option = question + " " + ending

    inputs = tokenizer(
        context,
        question_option,
        add_special_tokens=True,
        max_length=MAX_SEQ_LENGTH,
        padding="max_length",
        truncation=True,
        return_overflowing_tokens=False,
    )
    choices_inputs.append(inputs)

label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
    [x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure,
    # just one of them is necessary to check
    if "attention_mask" in choices_inputs[0]
    else None
)
example_encoded = {
    "example_id": example_id,
    "input_ids": input_ids,
    "attention_mask": attention_mask,
    "label": label,
}

output = model(**example_encoded)
```

## Training data

The initial model was [roberta large model](https://huggingface.co/roberta-large) which was then fine-tuned on the [RACE dataset](https://www.cs.cmu.edu/~glai1/data/race/).

## Training procedure

It was necessary to preprocess the data with the method exemplified for a single instance in the _How to use_ section. The used hyperparameters were the following:

| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 32 |
| train_batch_size | 1 |
| fp16 | True |
| gradient_accumulation_steps | 16 |
| learning_rate | 0.00001 |
| warmup_steps | 1000 |
| max_length | 512 |
| epochs | 4 |

## Eval results:

| Dataset Acc | Eval | All Test | High School Test | Middle School Test |
|:----:|:----:|:----:|:----:|:----:|
| | 85.2 | 84.9 | 83.5 | 88.0 |

**The model was trained with a Tesla V100-PCIE-16GB**
Laeyoung/BTS-comments-generator
Laeyoung
gpt2
8
5
transformers
0
text-generation
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
862
### Model information

* Fine-tuning dataset: https://www.kaggle.com/seungguini/bts-youtube-comments
* Base model: GPT2 Small
* Epochs: 5
* API page: [Ainize](https://ainize.ai/teachable-ainize/gpt2-train?branch=train/cv695m9g40av0cdabuqp)
* Demo page: [End-point](https://kubecon-tabtab-ainize-team.endpoint.ainize.ai/?modelUrl=https://train-cv695m9g40av0cdabuqp-gpt2-train-teachable-ainize.endpoint.ainize.ai/predictions/gpt-2-en-small-finetune)

### ===Teachable NLP=== ###

Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.

* Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
* Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
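The card links a hosted demo but gives no local usage code, so here is a minimal sketch under the assumption that the checkpoint loads as a standard GPT-2 small causal LM; the prompt text is an illustrative assumption.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Laeyoung/BTS-comments-generator")
model = AutoModelForCausalLM.from_pretrained("Laeyoung/BTS-comments-generator")

# sample a short comment-style continuation from the fine-tuned model
inputs = tokenizer("This live performance was", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```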
Lalita/marianmt-th-zh_cn
Lalita
marian
9
1,085
transformers
0
translation
true
false
false
null
null
null
null
1
1
0
0
0
0
0
['translation', 'torch==1.8.0']
false
true
true
1,182
### marianmt-th-zh_cn

* source languages: th
* target languages: zh_cn
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: 15.53

## Training

Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-th-zh_cn](https://wandb.ai/cstorm125/marianmt-th-zh_cn).

```
export WANDB_PROJECT=marianmt-th-zh_cn
python train_model.py --input_fname ../data/v1/Train.csv \
	--output_dir ../models/marianmt-th-zh_cn \
	--source_lang th --target_lang zh \
	--metric_tokenize zh --fp16
```

## Usage

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Lalita/marianmt-th-zh_cn")
model = AutoModelForSeq2SeqLM.from_pretrained("Lalita/marianmt-th-zh_cn").cpu()

src_text = [
    'ฉันรักคุณ',
    'ฉันอยากกินข้าว',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])

> ['我爱你', '我想吃饭。']
```

## Requirements

```
transformers==4.6.0
torch==1.8.0
```