pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0 to 18.3M) | metadata (stringlengths 2 to 1.07B) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25)
---|---|---|---|---|---|---|---|---|
text-classification | transformers | {} | airKlizz/xlm-roberta-base-germeval21-toxic | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | airchimedes/gpt2xl_alanwatts | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
# bert-base-multilingual-cased
Finetuning `bert-base-multilingual-cased` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
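For quick inference (in addition to the benchmark setup above), a minimal sketch using the standard `transformers` question-answering pipeline; the question and context below are taken from the widget example on this card:
```
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="airesearch/bert-base-multilingual-cased-finetune-qa",
)

# question/context from the widget example on this model card
result = qa(
    question="สวนกุหลาบเป็นโรงเรียนอะไร",
    context="โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ",
)
print(result["answer"], result["score"])
```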
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=bert-base-multilingual-cased
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--pad_on_right \
--fp16
``` | {"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]} | airesearch/bert-base-multilingual-cased-finetune-qa | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | # Finetuned `bert-base-multilingual-cased` model on Thai sequence and token classification datasets
<br>
Finetuned multilingual BERT (mBERT) BASE model on Thai sequence and token classification datasets
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
We use the pretrained cross-lingual BERT model (mBERT) as proposed by [[Devlin et al., 2018]](https://arxiv.org/abs/1810.04805). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/bert-base-multilingual-cased)
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The details are described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The example notebook demonstrating how to use the finetuned models for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
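As a minimal sketch (the notebook above covers the full workflow), a finetuned sequence-classification checkpoint can be loaded with the standard `transformers` auto classes; the model name below is only a placeholder, not an actual checkpoint id:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# placeholder id: substitute the finetuned checkpoint you want to evaluate
model_name = "<finetuned-checkpoint-id>"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("อาหารอร่อยมาก"))  # e.g. a review sentence for wisesight_sentiment or wongnai_reviews
```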
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | airesearch/bert-base-multilingual-cased-finetuned | null | [
"transformers",
"bert",
"fill-mask",
"arxiv:1810.04805",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# WangchanBERTa base model: `wangchanberta-base-att-spm-uncased`
<br>
Pretrained RoBERTa BASE model on assorted Thai texts (78.5 GB).
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The details are described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
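In addition to the notebook, a minimal fill-mask sketch with the `transformers` pipeline API (the input text is the widget example from this card):
```
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="airesearch/wangchanberta-base-att-spm-uncased",
)

# widget example from this model card
for prediction in fill_mask("ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน"):
    print(prediction["token_str"], prediction["score"])
```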
<br>
## Training data
`wangchanberta-base-att-spm-uncased` model was pretrained on an assorted Thai text dataset. The total size of the uncompressed text is 78.5 GB.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace HTML escape forms of characters with the actual characters, such as `&nbsp;` with a space and `<br />` with a line break [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Remove empty brackets ((), {}, and []) that sometimes come up as a result of text extraction, such as from Wikipedia.
- Replace line breaks with spaces.
- Replace more than one consecutive space with a single space.
- Reduce characters repeated more than 3 times, such as ดีมากกก to ดีมาก [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU)'s `newmm` dictionary-based maximal matching tokenizer.
- Replace repetitive words; this is done post-tokenization, unlike [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146), since there is no delimitation by space in Thai as in English.
- Replace spaces with <_>. The SentencePiece tokenizer would otherwise combine spaces with other tokens. Since spaces serve as punctuation in Thai, for instance marking sentence boundaries similar to periods in English, combining them with other tokens would omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with <_> (a few of these rules are sketched below).
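A minimal, illustrative sketch of a few of the rules above (bracket/space cleanup and space marking); the full preprocessing lives in the thai2transformers code base and may differ in detail:
```
import re

def preprocess(text: str) -> str:
    # remove empty brackets left over from text extraction
    text = re.sub(r"\(\)|\{\}|\[\]", "", text)
    # replace line breaks with spaces, then collapse repeated spaces
    text = text.replace("\n", " ")
    text = re.sub(r" {2,}", " ", text)
    # explicitly mark spaces so the SentencePiece tokenizer does not merge them into other tokens
    return text.replace(" ", "<_>")

print(preprocess("ตัวอย่าง ()  ข้อความ\nภาษาไทย"))  # -> ตัวอย่าง<_>ข้อความ<_>ภาษาไทย
```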
<br>
Regarding the vocabulary, we use SentencePiece [[Kudo, 2018]](https://arxiv.org/abs/1808.06226) to train a SentencePiece unigram model.
The tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set.
The length of each sequence is limited to 416 subword tokens.
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with a `<mask>` token. Out of the 15%, 80% is replaced with a `<mask>` token, 10% is left unchanged and 10% is replaced with a random token.
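A minimal sketch of this 15% / 80-10-10 masking rule (illustrative only; in practice this is the standard masked-LM data collator behaviour):
```
import numpy as np

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15, seed=0):
    """Illustrative 15% / 80-10-10 masking over a list of token ids."""
    rng = np.random.default_rng(seed)
    token_ids = np.array(token_ids)
    labels = np.full_like(token_ids, -100)            # -100 = position ignored by the MLM loss
    selected = rng.random(len(token_ids)) < mlm_prob  # sample 15% of the tokens
    labels[selected] = token_ids[selected]

    roll = rng.random(len(token_ids))
    to_mask = selected & (roll < 0.8)                    # 80% -> <mask>
    to_random = selected & (roll >= 0.8) & (roll < 0.9)  # 10% -> random token; remaining 10% unchanged
    token_ids[to_mask] = mask_id
    token_ids[to_random] = rng.integers(0, vocab_size, int(to_random.sum()))
    return token_ids, labels
```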
<br>
**Train/Val/Test splits**
After preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence length of 5 to 300 words (78.5GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU), 8,680,485,067 subwords as tokenized by SentencePiece tokenizer, and 53,035,823,287 characters.
<br>
**Pretraining**
The model was trained on 8 V100 GPUs for 500,000 steps with the batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with the learning rate of $3e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
As of Sun 24 Jan 2021, we release the model from the checkpoint at 360,000 steps, since the model pretraining has not yet been completed.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "th", "widget": [{"text": "\u0e1c\u0e39\u0e49\u0e43\u0e0a\u0e49\u0e07\u0e32\u0e19\u0e17\u0e48\u0e32\u0e2d\u0e32\u0e01\u0e32\u0e28\u0e22\u0e32\u0e19\u0e19\u0e32\u0e19\u0e32\u0e0a\u0e32\u0e15\u0e34<mask>\u0e21\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2a\u0e32\u0e21\u0e25\u0e49\u0e32\u0e19\u0e04\u0e19<pad>"}]} | airesearch/wangchanberta-base-att-spm-uncased | null | [
"transformers",
"pytorch",
"safetensors",
"camembert",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:1801.06146",
"arxiv:1808.06226",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers | # wangchanberta-base-wiki-20210520-spm-finetune-qa
Finetuning `airesearchth/wangchanberta-base-wiki-20210520-spmd` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--model_max_length 400 \
--pad_on_right \
--fp16
``` | {"language": "th", "widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]} | airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa | null | [
"transformers",
"pytorch",
"camembert",
"question-answering",
"th",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# WangchanBERTa base model: `wangchanberta-base-wiki-newmm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The details are described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
`wangchanberta-base-wiki-newmm` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We exclude lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove empty parentheses that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use word-level tokens from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer, namely `newmm`. The total number of word-level tokens in the vocabulary is 97,982.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
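A simplified sketch of this packing (it greedily fills sequences with contiguous sentences up to 512 tokens; the actual procedure also splits sentences that cross the boundary):
```
def pack_sequences(tokenized_sentences, sep_id, max_len=512):
    """Pack contiguous tokenized sentences into sequences of at most max_len tokens."""
    sequences, current = [], []
    for sent in tokenized_sentences:
        if current and len(current) + len(sent) + 1 > max_len:
            sequences.append(current)
            current = []
        current.extend(sent)
        current.append(sep_id)  # separator token between sentences/documents
    if current:
        sequences.append(current)
    return sequences
```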
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with a `<mask>` token. Out of the 15%, 80% is replaced with a `<mask>` token, 10% is left unchanged and 10% is replaced with a random token.
<br>
**Train/Val/Test splits**
We sequentially split the data into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "th"} | airesearch/wangchanberta-base-wiki-newmm | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# WangchanBERTa base model: `wangchanberta-base-wiki-sefr`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The details are described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
`wangchanberta-base-wiki-sefr` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We exclude lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove empty parentheses that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use the Stacked Ensemble Filter and Refine (SEFR) tokenizer (`engine="best"`) [[Limkonchotiwat et al., 2020]](https://www.aclweb.org/anthology/2020.emnlp-main.315/), which is based on probabilities from the CNN-based `deepcut` tokenizer [[Kittinaradorn et al., 2019]](http://doi.org/10.5281/zenodo.3457707). The total number of word-level tokens in the vocabulary is 92,177.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with a `<mask>` token. Out of the 15%, 80% is replaced with a `<mask>` token, 10% is left unchanged and 10% is replaced with a random token.
<br>
**Train/Val/Test splits**
We sequentially split the data into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "th"} | airesearch/wangchanberta-base-wiki-sefr | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# WangchanBERTa base model: `wangchanberta-base-wiki-spm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The details are described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
`wangchanberta-base-wiki-spm` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We exclude lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove empty parentheses that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use subword tokens trained with the [SentencePiece](https://github.com/google/sentencepiece) library on the training set of the Thai Wikipedia corpus. The total number of subword tokens is 24,000.
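A minimal sketch of training such a subword vocabulary with the `sentencepiece` library (the input path is an assumed placeholder, and the exact training options used for this model may differ):
```
import sentencepiece as spm

# assumed placeholder path to the plain-text Thai Wikipedia training set
spm.SentencePieceTrainer.train(
    input="thwiki_train.txt",
    model_prefix="thwiki_spm",
    vocab_size=24000,
)

sp = spm.SentencePieceProcessor(model_file="thwiki_spm.model")
print(sp.encode("โรงเรียนสวนกุหลาบวิทยาลัย", out_type=str))
```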
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with a `<mask>` token. Out of the 15%, 80% is replaced with a `<mask>` token, 10% is left unchanged and 10% is replaced with a random token.
<br>
**Train/Val/Test splits**
We sequentially split the data into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "th"} | airesearch/wangchanberta-base-wiki-spm | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# WangchanBERTa base model: `wangchanberta-base-wiki-syllable`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The details are described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
`wangchanberta-base-wiki-syllable` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We exclude lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove empty parentheses that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use a Thai syllable-level dictionary-based tokenizer denoted as `syllable` from PyThaiNLP [Phatthiyaphaibun et al., 2016]. The total number of tokens in the vocabulary is 59,235.
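A minimal example of syllable-level tokenization with PyThaiNLP (assuming a PyThaiNLP version that exposes `syllable_tokenize`; the exact output depends on the version):
```
from pythainlp.tokenize import syllable_tokenize

print(syllable_tokenize("โรงเรียนสวนกุหลาบวิทยาลัย"))
```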
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with a `<mask>` token. Out of the 15%, 80% is replaced with a `<mask>` token, 10% is left unchanged and 10% is replaced with a random token.
<br>
**Train/Val/Test splits**
We sequentially split the data into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "th"} | airesearch/wangchanberta-base-wiki-syllable | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# `wav2vec2-large-xlsr-53-th`
Finetuning `wav2vec2-large-xlsr-53` on Thai [Common Voice 7.0](https://commonvoice.mozilla.org/en/datasets)
[Read more on our blog](https://medium.com/airesearch-in-th/airesearch-in-th-3c1019a99cd)
We finetune [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) based on [Fine-tuning Wav2Vec2 for English ASR](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) using Thai examples of [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets). The notebooks and scripts can be found in [vistec-ai/wav2vec2-large-xlsr-53-th](https://github.com/vistec-ai/wav2vec2-large-xlsr-53-th). The pretrained model and processor can be found at [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th).
## `robust-speech-event`
Add `syllable_tokenize`, `word_tokenize` ([PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)) and [deepcut](https://github.com/rkcosmos/deepcut) tokenizers to `eval.py` from [robust-speech-event](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#evaluation)
```
> python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config th --split test --log_outputs --thai_tokenizer newmm/syllable/deepcut/cer
```
### Eval results on Common Voice 7 "test":
| | WER PyThaiNLP 2.3.1 | WER deepcut | SER | CER |
|---------------------------------|---------------------|-------------|---------|---------|
| Only Tokenization | 0.9524% | 2.5316% | 1.2346% | 0.1623% |
| Cleaning rules and Tokenization | TBD | TBD | TBD | TBD |
## Usage
```
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

#load pretrained processor and model
processor = Wav2Vec2Processor.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")
model = Wav2Vec2ForCTC.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")
#function to resample to 16_000
def speech_file_to_array_fn(batch,
text_col="sentence",
fname_col="path",
resampling_to=16000):
speech_array, sampling_rate = torchaudio.load(batch[fname_col])
resampler=torchaudio.transforms.Resample(sampling_rate, resampling_to)
batch["speech"] = resampler(speech_array)[0].numpy()
batch["sampling_rate"] = resampling_to
batch["target_text"] = batch[text_col]
return batch
#get 2 examples as sample input
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
#infer
with torch.no_grad():
logits = model(inputs.input_values,).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
>> Prediction: ['และ เขา ก็ สัมผัส ดีบุก', 'คุณ สามารถ รับทราบ เมื่อ ข้อความ นี้ ถูก อ่าน แล้ว']
>> Reference: ['และเขาก็สัมผัสดีบุก', 'คุณสามารถรับทราบเมื่อข้อความนี้ถูกอ่านแล้ว']
```
## Datasets
[Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) contains 133 validated hours of Thai (255 total hours) at 5GB. We pre-tokenize with `pythainlp.tokenize.word_tokenize`. We preprocess the dataset using cleaning rules described in `notebooks/cv-preprocess.ipynb` by [@tann9949](https://github.com/tann9949). We then deduplicate and split as described in [ekapolc/Thai_commonvoice_split](https://github.com/ekapolc/Thai_commonvoice_split) in order to 1) avoid data leakage due to random splits after cleaning in [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) and 2) preserve the majority of the data for the training set. The dataset loading script is `scripts/th_common_voice_70.py`. You can use this script together with `train_cleand.tsv`, `validation_cleaned.tsv` and `test_cleaned.tsv` to have the same splits as we do. The resulting dataset is as follows:
```
DatasetDict({
train: Dataset({
features: ['path', 'sentence'],
num_rows: 86586
})
test: Dataset({
features: ['path', 'sentence'],
num_rows: 2502
})
validation: Dataset({
features: ['path', 'sentence'],
num_rows: 3027
})
})
```
## Training
We finetuned using the following configuration on a single V100 GPU and chose the checkpoint with the lowest validation loss. The finetuning script is `scripts/wav2vec2_finetune.py`
```
from transformers import Wav2Vec2ForCTC, TrainingArguments

# create model
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-large-xlsr-53",
attention_dropout=0.1,
hidden_dropout=0.1,
feat_proj_dropout=0.0,
mask_time_prob=0.05,
layerdrop=0.1,
gradient_checkpointing=True,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer)
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir="../data/wav2vec2-large-xlsr-53-thai",
group_by_length=True,
per_device_train_batch_size=32,
gradient_accumulation_steps=1,
per_device_eval_batch_size=16,
metric_for_best_model='wer',
evaluation_strategy="steps",
eval_steps=1000,
logging_strategy="steps",
logging_steps=1000,
save_strategy="steps",
save_steps=1000,
num_train_epochs=100,
fp16=True,
learning_rate=1e-4,
warmup_steps=1000,
save_total_limit=3,
report_to="tensorboard"
)
```
## Evaluation
We benchmark on the test set using WER with words tokenized by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) 2.3.1 and [deepcut](https://github.com/rkcosmos/deepcut), and CER. We also measure performance when spell correction using [TNC](http://www.arts.chula.ac.th/ling/tnc/) ngrams is applied. Evaluation codes can be found in `notebooks/wav2vec2_finetuning_tutorial.ipynb`. Benchmark is performed on `test-unique` split.
| | WER PyThaiNLP 2.3.1 | WER deepcut | CER |
|--------------------------------|---------------------|----------------|----------------|
| [Kaldi from scratch](https://github.com/vistec-AI/commonvoice-th) | 23.04 | | 7.57 |
| Ours without spell correction | 13.634024 | **8.152052** | **2.813019** |
| Ours with spell correction | 17.996397 | 14.167975 | 5.225761 |
| Google Web Speech API※ | 13.711234 | 10.860058 | 7.357340 |
| Microsoft Bing Speech API※ | **12.578819** | 9.620991 | 5.016620 |
| Amazon Transcribe※ | 21.86334 | 14.487553 | 7.077562 |
| NECTEC AI for Thai Partii API※ | 20.105887 | 15.515631 | 9.551027 |
※ APIs are not finetuned with Common Voice 7.0 data
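A minimal sketch of the tokenize-then-score evaluation described above (assuming the `pythainlp` and `jiwer` packages; the actual benchmark code is in the linked notebook):
```
from pythainlp.tokenize import word_tokenize
import jiwer

def thai_wer(references, predictions, engine="newmm"):
    # tokenize into Thai words and join with spaces so WER is computed over words
    refs = [" ".join(word_tokenize(r, engine=engine)) for r in references]
    hyps = [" ".join(word_tokenize(p, engine=engine)) for p in predictions]
    return jiwer.wer(refs, hyps)

print(thai_wer(["และเขาก็สัมผัสดีบุก"], ["และเขาก็สัมผัสดีบุกคุณ"]))
```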
## LICENSE
[cc-by-sa 4.0](https://github.com/vistec-AI/wav2vec2-large-xlsr-53-th/blob/main/LICENSE)
## Acknowledgements
* model training and validation notebooks/scripts [@cstorm125](https://github.com/cstorm125/)
* dataset cleaning scripts [@tann9949](https://github.com/tann9949)
* dataset splits [@ekapolc](https://github.com/ekapolc/) and [@14mss](https://github.com/14mss)
* running the training [@mrpeerat](https://github.com/mrpeerat)
* spell correction [@wannaphong](https://github.com/wannaphong)
| {"language": "th", "license": "cc-by-sa-4.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event", "speech", "xlsr-fine-tuning"], "datasets": ["common_voice"], "model-index": [{"name": "XLS-R-53 - Thai", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "th"}, "metrics": [{"type": "wer", "value": 0.9524, "name": "Test WER"}, {"type": "ser", "value": 1.2346, "name": "Test SER"}, {"type": "cer", "value": 0.1623, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sv"}, "metrics": [{"type": "wer", "name": "Test WER"}, {"type": "ser", "name": "Test SER"}, {"type": "cer", "name": "Test CER"}]}]}]} | airesearch/wav2vec2-large-xlsr-53-th | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"robust-speech-event",
"speech",
"xlsr-fine-tuning",
"th",
"dataset:common_voice",
"doi:10.57967/hf/0404",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers | # xlm-roberta-base-finetune-qa
Finetuning `xlm-roberta-base` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Train with:
```
export WANDB_PROJECT=wangchanberta-qa
export MODEL_NAME=xlm-roberta-base
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--pad_on_right \
--fp16
```
| {"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]} | airesearch/xlm-roberta-base-finetune-qa | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | # Finetuned `xlm-roberta-base` model on Thai sequence and token classification datasets
<br>
Finetuned XLM Roberta BASE model on Thai sequence and token classification datasets
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
We use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](https://arxiv.org/abs/1911.02116). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/xlm-roberta-base)
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The details are described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The example notebook demonstrating how to use the finetuned models for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | airesearch/xlm-roberta-base-finetuned | null | [
"transformers",
"xlm-roberta",
"fill-mask",
"arxiv:1911.02116",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | airesearchth/wangchanberta-base-wiki-20210520-news-spm | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | airesearchth/wangchanberta-base-wiki-20210520-spm | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | airshin11/KMULegalLab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Michael Scott DialoGPT Model | {"tags": ["conversational"]} | aishanisingh/DiagloGPT-small-michaelscott | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | aishanisingh/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()
analyser.polarity_scores("I hate watching movies")
import nltk
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
nltk.download('all')
import numpy as np
sentence = """I love dancing & painting"""
tokenized_sentence = nltk.word_tokenize(sentence)
from nltk import word_tokenize
from typing import List
Analyzer = SentimentIntensityAnalyzer()
pos_word_list=[]
neu_word_list=[]
neg_word_list=[]
pos_score_list=[]
neg_score_list=[]
score_list=[]
for word in tokenized_sentence:
if (Analyzer.polarity_scores(word)['compound']) >= 0.1:
pos_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
elif (Analyzer.polarity_scores(word)['compound']) <= -0.1:
neg_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
else:
neu_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
print('Positive:',pos_word_list)
print('Neutral:',neu_word_list)
print('Negative:',neg_word_list)
print('Score:', score_list)
score = Analyzer.polarity_scores(sentence)
print('\nScores:', score)
predict_log=score.values()
value_iterator=iter(predict_log)
neg_prediction=next(value_iterator)
neu_prediction=next(value_iterator)
pos_prediction=next(value_iterator)
prediction_list=[neg_prediction, pos_prediction]
prediction_list_array=np.array(prediction_list)
import scipy.stats

def predict(texts, score_fn, classes):
    # simulate per-class probabilities from a scalar sentiment score in [-1, 1]
    probs = []
    for text in texts:
        offset = (score_fn(text) + 1) / 2.
        binned = np.digitize(5 * offset, classes) + 1
        simulated_probs = scipy.stats.norm.pdf(classes, binned, scale=0.5)
        probs.append(simulated_probs)
    return np.array(probs)
latex_special_token = ["!@#$%^&*()"]
import operator
def generate(text_list, attention_list, latex_file, color_neg='red', color_pos='green', rescale_value = False):
print("hello")
attention_list = rescale(attention_list)
word_num = len(text_list)
print(len(attention_list))
print(len(text_list))
text_list = clean_word(text_list)
with open(latex_file,'w') as f:
f.write(r'''\documentclass[varwidth]{standalone}
\special{papersize=210mm,297mm}
\usepackage{color}
\usepackage{tcolorbox}
\usepackage{CJK}
\usepackage{adjustbox}
\tcbset{width=0.9\textwidth,boxrule=0pt,colback=red,arc=0pt,auto outer arc,left=0pt,right=0pt,boxsep=5pt}
\begin{document}
\begin{CJK*}{UTF8}{gbsn}'''+'\n')
string = r'''{\setlength{\fboxsep}{0pt}\colorbox{white!0}{\parbox{0.9\textwidth}{'''+"\n"
for idx in range(len(attention_list)):
if attention_list[idx] > 0:
string += "\\colorbox{%s!%s}{"%(color_pos, attention_list[idx])+"\\strut " + text_list[idx]+"} "
else:
string += "\\colorbox{%s!%s}{"%(color_neg, -attention_list[idx])+"\\strut " + text_list[idx]+"} "
string += "\n}}}"
f.write(string+'\n')
f.write(r'''\end{CJK*}
\end{document}''')
def rescale(input_list):
the_array = np.asarray(input_list)
the_max = np.max(abs(the_array))
rescale = the_array/the_max
rescale = rescale*100
rescale = np.round(rescale, 3)
'''
the_array = np.asarray(input_list)
the_max = np.max(the_array)
the_min = np.min(the_array)
rescale = ((the_array - the_min)/(the_max-the_min))*100
for i in rescale:
print(rescale)
'''
return rescale.tolist()
def clean_word(word_list):
new_word_list = []
for word in word_list:
for latex_sensitive in ["\\", "%", "&", "^", "#", "_", "{", "}"]:
if latex_sensitive in word:
word = word.replace(latex_sensitive, '\\'+latex_sensitive)
new_word_list.append(word)
return new_word_list
if __name__ == '__main__':
color_1 = 'red'
color_2 = 'green'
words = word_tokenize(sentence)
word_num = len(words)
generate(words, score_list, "sple.tex", color_1, color_2)
| {} | aishoo1612/VADER-With-heatmaps | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | aiswaryasankar/apiDemo | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | aiswaryasankar/mds_multi_news | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ajaiswal1008/wav2vec2-large-xls-r-2B-hi-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-colab_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-colab_new", "results": []}]} | ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | ajaiswal1008/wav2vec2-large-xls-r-300m-hi-dubdub | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ajaiswal1008/wav2vec2-large-xls-r-300m-tr-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ajaiswal1008/wav2vec2-large-xls-r-300m-turkish-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
image-classification | transformers |
# greens
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cucumber

#### green beans

#### okra

#### pickle

#### zucinni
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | ajanco/greens | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | ajanco/sr_roberta_oscar | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | ajanco/yi_roberta_oscar | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ajdude/DialoGPT-small-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ajdude/DialoGPT-small-tonystark | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
This **cased model** was pretrained from scratch using a custom vocabulary on the following corpora
- Pubmed
- Clinical trials corpus
- and a small subset of Bookcorpus
The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
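A minimal sketch of pulling the raw masked-position predictions that this approach harvests, assuming the standard `transformers` fill-mask pipeline (the example sentence is taken from the widget examples; descriptor filtering and the NER ensemble happen downstream):

```python
from transformers import pipeline

# Sketch: raw fill-mask predictions only; the ensemble/descriptor filtering for NER is done downstream.
fill_mask = pipeline("fill-mask", model="ajitrajasekharan/biomedical")
preds = fill_mask("Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]")
for p in preds:
    print(p["token_str"], round(p["score"], 4))
```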
[App in Spaces](https://huggingface.co/spaces/ajitrajasekharan/self-supervised-ner-biomedical) demonstrates this approach.
[Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased.
The ensemble detects 69 entity subtypes (17 broad entity groups)
<img src="https://ajitrajasekharan.github.io/images/1.png" width="600">
### Ensemble model performance
<img src="https://ajitrajasekharan.github.io/images/6.png" width="600">
### Additional notes
- The example predictions shown in the hosted widget do not include [CLS] predictions; the hosted inference API returns only the masked-position predictions. In practice, the [CLS] predictions are just as useful as the masked-position predictions _(if the next-sentence-prediction loss was low during pretraining)_ and are used for NER.
- Some of the top predictions, such as "a", "the", and punctuation, are valid fill-ins but carry no entity information; these are filtered out when harvesting descriptors for NER. The widget examples are unfiltered results.
- [Use this link](https://huggingface.co/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation) to examine both fill-mask prediction and [CLS] predictions
### License
MIT license
<a href="https://huggingface.co/exbert/?model=ajitrajasekharan/biomedical&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": [{}], "license": "mit", "tags": [{}, "exbert"], "widget": [{"text": "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]", "example_title": "Test for entity type: Disease"}, {"text": "Overexpression of [MASK] occurs across a wide range of cancers", "example_title": "Test for entity type: Gene"}, {"text": "Patients treated with [MASK] are vulnerable to infectious diseases", "example_title": "Test for entity type: Drug"}, {"text": "A eGFR level below [MASK] indicates chronic kidney disease", "example_title": "Test for entity type: Measure "}, {"text": "In the [MASK], increased daily imatinib dose induced MMR", "example_title": "Test for entity type: STUDY/TRIAL"}, {"text": "Paul Erdos died at [MASK]", "example_title": "Test for entity type: TIME"}], "inference": {"parameters": {"top_k": 10}}} | ajitrajasekharan/biomedical | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8385
- Matthews Correlation: 0.5865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4887 | 1.0 | 535 | 0.5016 | 0.5107 |
| 0.286 | 2.0 | 1070 | 0.5473 | 0.5399 |
| 0.1864 | 3.0 | 1605 | 0.7114 | 0.5706 |
| 0.1163 | 4.0 | 2140 | 0.8385 | 0.5865 |
| 0.0834 | 5.0 | 2675 | 0.9610 | 0.5786 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "bert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5864941797290588, "name": "Matthews Correlation"}]}]}]} | ajrae/bert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | ajrae/bert-base-uncased-finetuned-mnli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4520
- Accuracy: 0.8578
- F1: 0.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.4169 | 0.8039 | 0.8639 |
| No log | 2.0 | 460 | 0.4299 | 0.8137 | 0.875 |
| 0.4242 | 3.0 | 690 | 0.4520 | 0.8578 | 0.9003 |
| 0.4242 | 4.0 | 920 | 0.6323 | 0.8431 | 0.8926 |
| 0.1103 | 5.0 | 1150 | 0.6163 | 0.8578 | 0.8997 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8578431372549019, "name": "Accuracy"}, {"type": "f1", "value": 0.9003436426116839, "name": "F1"}]}]}]} | ajrae/bert-base-uncased-finetuned-mrpc | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | ajrae/bert-base-uncased-finetuned-qnli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | ak104/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2814
- Wer: 0.2260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9157 | 0.2 | 400 | 2.8204 | 0.9707 |
| 0.9554 | 0.4 | 800 | 0.5295 | 0.5046 |
| 0.7585 | 0.6 | 1200 | 0.4007 | 0.3850 |
| 0.7288 | 0.8 | 1600 | 0.3632 | 0.3447 |
| 0.6792 | 1.0 | 2000 | 0.3433 | 0.3216 |
| 0.6085 | 1.2 | 2400 | 0.3254 | 0.2928 |
| 0.6225 | 1.4 | 2800 | 0.3161 | 0.2832 |
| 0.6183 | 1.6 | 3200 | 0.3111 | 0.2721 |
| 0.5947 | 1.8 | 3600 | 0.2969 | 0.2615 |
| 0.5953 | 2.0 | 4000 | 0.2912 | 0.2515 |
| 0.5358 | 2.2 | 4400 | 0.2920 | 0.2501 |
| 0.5535 | 2.4 | 4800 | 0.2939 | 0.2538 |
| 0.5408 | 2.6 | 5200 | 0.2854 | 0.2452 |
| 0.5272 | 2.8 | 5600 | 0.2816 | 0.2434 |
| 0.5248 | 3.0 | 6000 | 0.2755 | 0.2354 |
| 0.4923 | 3.2 | 6400 | 0.2795 | 0.2353 |
| 0.489 | 3.4 | 6800 | 0.2767 | 0.2330 |
| 0.4932 | 3.6 | 7200 | 0.2821 | 0.2335 |
| 0.4841 | 3.8 | 7600 | 0.2756 | 0.2349 |
| 0.4794 | 4.0 | 8000 | 0.2751 | 0.2265 |
| 0.444 | 4.2 | 8400 | 0.2809 | 0.2283 |
| 0.4533 | 4.4 | 8800 | 0.2804 | 0.2312 |
| 0.4563 | 4.6 | 9200 | 0.2830 | 0.2256 |
| 0.4498 | 4.8 | 9600 | 0.2819 | 0.2251 |
| 0.4532 | 5.0 | 10000 | 0.2814 | 0.2260 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-large-xlsr-53-Total", "results": []}]} | akadriu/wav2vec2-large-xlsr-53-Total | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
image-classification | transformers | {} | akahana/asl-vit | null | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/gpt2-indonesia"
generator = pipeline('text-generation',
model=path)
set_seed(42)
kalimat = "dahulu kala ada sebuah"
preds = generator(kalimat,
max_length=64,
num_return_sequences=3)
for data in preds:
print(data)
{'generated_text': 'dahulu kala ada sebuah perkampungan yang bernama pomere. namun kini kawasan ini sudah tidak dikembangkan lagi sebagai kawasan industri seperti perusahaan pupuk. sumber-sumber lain sudah sulit ditemukan karena belum adanya kilang pupuk milik indonesia yang sering di kembangkan sehingga belum ada satupun yang masih tersisa yang tersisa. kawasan ini juga memproduksi gula aren milik pt graha bina sarana'}
{'generated_text': 'dahulu kala ada sebuah desa kecil bernama desa. desa yang terkenal seperti halnya kota terdekat lainnya adalah desa tetangga yang bernama sama."\n"sebuah masjid merupakan suatu tempat suci yang digunakan umat islam untuk beribadah. beberapa masjid yang didaftarkan berikut memiliki suatu kehormatan tersendiri bagi masing-masing denominasi islam di dunia. sebuah masjid selain memiliki fungsi sebagai tempat'}
{'generated_text': 'dahulu kala ada sebuah peradaban yang dibangun di sebelah barat sungai mississippi di sekitar desa kecil desa yang bernama sama. penduduk asli di desa ini berasal dari etnis teweh yang berpindah agama menjadi kristen, namun kemudian pindah agama menjadi kristen. desa arawak mempunyai beberapa desa lain seperti adibei, deti, riuhut dan sa'}
``` | {"language": "id", "widget": [{"text": "dahulu kala ada sebuah"}]} | akahana/gpt2-indonesia | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"gpt2",
"text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | akahana/indonesia-distilbert | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | akahana/indonesia-emotion-distilbert | null | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | akahana/indonesia-emotion-roberta-small | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/indonesia-emotion-roberta"
emotion = pipeline('text-classification',
model=path,device=0)
set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
preds
[{'label': 'BAHAGIA', 'score': 0.8790940046310425}]
``` | {"language": "id", "widget": [{"text": "dia orang yang baik ya bunds."}]} | akahana/indonesia-emotion-roberta | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | akahana/indonesia-roberta-small | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/indonesia-sentiment-roberta"
emotion = pipeline('text-classification',
model=path,device=0)
set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
preds
``` | {"language": "id", "widget": [{"text": "dia orang yang baik ya bunds."}]} | akahana/indonesia-sentiment-roberta | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers |
# Indonesian RoBERTa Base
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "akahana/roberta-base-indonesia"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Gajah <mask> sedang makan di kebun binatang.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "akahana/roberta-base-indonesia"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Gajah <mask> sedang makan di kebun binatang."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
``` | {"language": "id", "license": "mit", "tags": ["roberta-base-indonesia"], "datasets": ["wikipedia"], "widget": [{"text": "Gajah <mask> sedang makan di kebun binatang."}]} | akahana/roberta-base-indonesia | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"feature-extraction",
"roberta-base-indonesia",
"id",
"dataset:wikipedia",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers |
# Indonesian tiny-RoBERTa
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "akahana/tiny-roberta-indonesia"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("ikiryo adalah <mask> hantu dalam mitologi jepang.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "akahana/tiny-roberta-indonesia"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "ikiryo adalah <mask> hantu dalam mitologi jepang."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
``` | {"language": "id", "license": "mit", "tags": ["tiny-roberta-indonesia"], "datasets": ["wikipedia"], "widget": [{"text": "ikiryo adalah <mask> hantu dalam mitologi jepang."}]} | akahana/tiny-roberta-indonesia | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"feature-extraction",
"tiny-roberta-indonesia",
"id",
"dataset:wikipedia",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cats-vs-dogs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0369
- Accuracy: 0.9883
## how to use
```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTModel.from_pretrained('akahana/vit-base-cats-vs-dogs')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
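The snippet above returns hidden states only; to obtain the actual cat/dog prediction you would presumably load the classification head instead. A sketch (not from the original card):

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTForImageClassification.from_pretrained('akahana/vit-base-cats-vs-dogs')

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
# Label names come from the checkpoint's id2label mapping and may be generic (LABEL_0/LABEL_1).
print(model.config.id2label[logits.argmax(-1).item()])
```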
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0949 | 1.0 | 2488 | 0.0369 | 0.9883 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "datasets": ["cats_vs_dogs"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-base-cats-vs-dogs", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "cats_vs_dogs", "type": "cats_vs_dogs", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9883257403189066, "name": "Accuracy"}]}]}]} | akahana/vit-base-cats-vs-dogs | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | akahana/wav2vec2-base-indonesia-v2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers | {} | akahana/wav2vec2-base-indonesia | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akashseo/prismleadindia | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akashsivanandan/wav2vec2-large-xls-r-300m-hindi-colab-new | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akashsivanandan/wav2vec2-large-xls-r-300m-hindi-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab-final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7539
- Wer: 0.6135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.1466 | 1.0 | 118 | 4.3444 | 1.0 |
| 3.4188 | 2.0 | 236 | 3.2496 | 1.0 |
| 2.8617 | 3.0 | 354 | 1.6165 | 1.0003 |
| 0.958 | 4.0 | 472 | 0.7984 | 0.8720 |
| 0.5929 | 5.0 | 590 | 0.6733 | 0.7831 |
| 0.4628 | 6.0 | 708 | 0.6536 | 0.7621 |
| 0.3834 | 7.0 | 826 | 0.6037 | 0.7155 |
| 0.3242 | 8.0 | 944 | 0.6376 | 0.7184 |
| 0.2736 | 9.0 | 1062 | 0.6214 | 0.7070 |
| 0.2433 | 10.0 | 1180 | 0.6158 | 0.6944 |
| 0.2217 | 11.0 | 1298 | 0.6548 | 0.6830 |
| 0.1992 | 12.0 | 1416 | 0.6331 | 0.6775 |
| 0.1804 | 13.0 | 1534 | 0.6644 | 0.6874 |
| 0.1639 | 14.0 | 1652 | 0.6629 | 0.6649 |
| 0.143 | 15.0 | 1770 | 0.6927 | 0.6836 |
| 0.1394 | 16.0 | 1888 | 0.6933 | 0.6888 |
| 0.1296 | 17.0 | 2006 | 0.7039 | 0.6860 |
| 0.1212 | 18.0 | 2124 | 0.7042 | 0.6628 |
| 0.1121 | 19.0 | 2242 | 0.7132 | 0.6475 |
| 0.1069 | 20.0 | 2360 | 0.7423 | 0.6438 |
| 0.1063 | 21.0 | 2478 | 0.7171 | 0.6484 |
| 0.1025 | 22.0 | 2596 | 0.7396 | 0.6451 |
| 0.0946 | 23.0 | 2714 | 0.7400 | 0.6432 |
| 0.0902 | 24.0 | 2832 | 0.7385 | 0.6286 |
| 0.0828 | 25.0 | 2950 | 0.7368 | 0.6286 |
| 0.079 | 26.0 | 3068 | 0.7471 | 0.6306 |
| 0.0747 | 27.0 | 3186 | 0.7524 | 0.6201 |
| 0.0661 | 28.0 | 3304 | 0.7576 | 0.6201 |
| 0.0659 | 29.0 | 3422 | 0.7579 | 0.6130 |
| 0.0661 | 30.0 | 3540 | 0.7539 | 0.6135 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-tamil-colab-final", "results": []}]} | akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8072
- Wer: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.0967 | 1.0 | 118 | 4.6437 | 1.0 |
| 3.4973 | 2.0 | 236 | 3.2588 | 1.0 |
| 3.1305 | 3.0 | 354 | 2.6566 | 1.0 |
| 1.2931 | 4.0 | 472 | 0.9156 | 0.9944 |
| 0.6851 | 5.0 | 590 | 0.7474 | 0.8598 |
| 0.525 | 6.0 | 708 | 0.6649 | 0.7995 |
| 0.4325 | 7.0 | 826 | 0.6740 | 0.7752 |
| 0.3766 | 8.0 | 944 | 0.6220 | 0.7628 |
| 0.3256 | 9.0 | 1062 | 0.6316 | 0.7322 |
| 0.2802 | 10.0 | 1180 | 0.6442 | 0.7305 |
| 0.2575 | 11.0 | 1298 | 0.6885 | 0.7280 |
| 0.2248 | 12.0 | 1416 | 0.6702 | 0.7197 |
| 0.2089 | 13.0 | 1534 | 0.6781 | 0.7173 |
| 0.1893 | 14.0 | 1652 | 0.6981 | 0.7049 |
| 0.1652 | 15.0 | 1770 | 0.7154 | 0.7436 |
| 0.1643 | 16.0 | 1888 | 0.6798 | 0.7023 |
| 0.1472 | 17.0 | 2006 | 0.7381 | 0.6947 |
| 0.1372 | 18.0 | 2124 | 0.7240 | 0.7065 |
| 0.1318 | 19.0 | 2242 | 0.7305 | 0.6714 |
| 0.1211 | 20.0 | 2360 | 0.7288 | 0.6597 |
| 0.1178 | 21.0 | 2478 | 0.7417 | 0.6699 |
| 0.1118 | 22.0 | 2596 | 0.7476 | 0.6753 |
| 0.1016 | 23.0 | 2714 | 0.7973 | 0.6647 |
| 0.0998 | 24.0 | 2832 | 0.8027 | 0.6633 |
| 0.0917 | 25.0 | 2950 | 0.8045 | 0.6680 |
| 0.0907 | 26.0 | 3068 | 0.7884 | 0.6565 |
| 0.0835 | 27.0 | 3186 | 0.8009 | 0.6622 |
| 0.0749 | 28.0 | 3304 | 0.8123 | 0.6536 |
| 0.0755 | 29.0 | 3422 | 0.8006 | 0.6555 |
| 0.074 | 30.0 | 3540 | 0.8072 | 0.6531 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-tamil-colab", "results": []}]} | akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | akashsivanandan/wav2vec2-large-xls-r-300m-turkish-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akaushik1/DialoGPT-medium-kbot | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
# Kaiser DialoGPT Model | {"tags": ["conversational"]} | akaushik1/DialoGPT-small-kaiser | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # Hungarian Named Entity Recognition (NER) Model
This model is a fine-tuned version of "SZTAKI-HLT/hubert-base-cc"
using the famous WikiANN dataset presented
in the "Cross-lingual Name Tagging and Linking for 282 Languages" [paper](https://aclanthology.org/P17-1178.pdf).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "SZTAKI-HLT/hubert-base-cc"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-hungarian-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-hungarian-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9774538310923768
* f1: 0.9462099085573904
* precision: 0.9425718667406271
* recall: 0.9498761426661113 | {"language": "hu", "widget": [{"text": "Karik\u00f3 Katalin megkapja Szeged d\u00edszpolg\u00e1rs\u00e1g\u00e1t."}]} | akdeniz27/bert-base-hungarian-cased-ner | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"hu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned model of "dbmdz/bert-base-turkish-cased"
using a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/bert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("your text here")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9933935699477056
* f1: 0.9592969472710453
* precision: 0.9543530277931161
* recall: 0.9642923563325274
Evaluation results with the test sets proposed in ["Küçük, D., Küçük, D., Arıcı, N. 2016. Türkçe Varlık İsmi Tanıma için bir Veri Kümesi ("A Named Entity Recognition Dataset for Turkish"). IEEE Sinyal İşleme, İletişim ve Uygulamaları Kurultayı. Zonguldak, Türkiye."](https://ieeexplore.ieee.org/document/7495744) paper.
* Test Set Acc. Prec. Rec. F1-Score
* 20010000 0.9946 0.9871 0.9463 0.9662
* 20020000 0.9928 0.9134 0.9206 0.9170
* 20030000 0.9942 0.9814 0.9186 0.9489
* 20040000 0.9943 0.9660 0.9522 0.9590
* 20050000 0.9971 0.9539 0.9932 0.9732
* 20060000 0.9993 0.9942 0.9942 0.9942
* 20070000 0.9970 0.9806 0.9439 0.9619
* 20080000 0.9988 0.9821 0.9649 0.9735
* 20090000 0.9977 0.9891 0.9479 0.9681
* 20100000 0.9961 0.9684 0.9293 0.9485
* Overall 0.9961 0.9720 0.9516 0.9617 | {"language": "tr", "widget": [{"text": "Mustafa Kemal Atat\u00fcrk 19 May\u0131s 1919'da Samsun'a \u00e7\u0131kt\u0131."}]} | akdeniz27/bert-base-turkish-cased-ner | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"token-classification",
"tr",
"doi:10.57967/hf/0949",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Turkish Text Classification for Complaints Data Set
This model is a fine-tuned version of https://github.com/stefan-it/turkish-bert, trained on text classification data with the following 9 categories (a usage sketch follows the category map):
id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ',
4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}
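A usage sketch with the text-classification pipeline. The example complaint is illustrative, and it is assumed the checkpoint returns the default `LABEL_<id>` names, which are mapped back through `id_to_category`:

```python
from transformers import pipeline

id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ',
                  4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER',
                  7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}

classifier = pipeline("text-classification", model="akdeniz27/bert-turkish-text-classification")
result = classifier("Otobüs durakta durmadı.")[0]          # illustrative complaint text
label_id = int(result["label"].split("_")[-1])             # assumes default LABEL_<id> naming
print(id_to_category[label_id], round(result["score"], 4))
```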
| {"language": "tr"} | akdeniz27/bert-turkish-text-classification | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # Turkish Named Entity Recognition (NER) Model
This model is a fine-tuned version of dbmdz/convbert-base-turkish-cased (ConvBERTurk)
using a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/convbert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/convbert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/convbert-base-turkish-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
# Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the aggregation_strategy parameter.
```
# Reference test results:
* accuracy: 0.9937648915431506
* f1: 0.9610945644080416
* precision: 0.9619899385131359
* recall: 0.9602008554956295 | {"language": "tr", "widget": [{"text": "Almanya, koronavir\u00fcs a\u015f\u0131s\u0131n\u0131 geli\u015ftiren Dr. \u00d6zlem T\u00fcreci ve e\u015fi Prof. Dr. U\u011fur \u015eahin'e liyakat ni\u015fan\u0131 verdi"}]} | akdeniz27/convbert-base-turkish-cased-ner | null | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"convbert",
"token-classification",
"tr",
"arxiv:2008.02496",
"doi:10.57967/hf/0015",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers | # DeBERTa v2 XLarge Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "DeBERTa v2 XLarge"
using CUAD dataset https://huggingface.co/datasets/cuad
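A usage sketch with the generic question-answering classes. The question and contract snippet below are illustrative; the CUAD repo's own scripts handle the full clause-extraction setup.

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_name = "akdeniz27/deberta-v2-xlarge-cuad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="Highlight the parts (if any) of this contract related to the governing law.",
    context="This Agreement shall be governed by and construed in accordance with the laws of the State of New York.",
)
print(result["answer"], round(result["score"], 4))
```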
Link for model checkpoint: https://github.com/TheAtticusProject/cuad
For the use of the model with CUAD: https://github.com/marshmellow77/cuad-demo
and https://huggingface.co/spaces/akdeniz27/contract-understanding-atticus-dataset-demo | {"language": "en", "datasets": ["cuad"]} | akdeniz27/deberta-v2-xlarge-cuad | null | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"question-answering",
"en",
"dataset:cuad",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned version of "microsoft/mDeBERTa-v3-base"
(a multilingual version of DeBERTa V3)
using a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "microsoft/mdeberta-v3-base"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 2
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/mDeBERTa-v3-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/mDeBERTa-v3-base-turkish-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("<your text here>")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* f1: 0.95
* precision: 0.94
* recall: 0.96 | {"language": "tr", "widget": [{"text": "Mustafa Kemal Atat\u00fcrk 19 May\u0131s 1919'da Samsun'a \u00e7\u0131kt\u0131."}]} | akdeniz27/mDeBERTa-v3-base-turkish-ner | null | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"token-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # Albanian Named Entity Recognition (NER) Model
This model is a fine-tuned version of "bert-base-multilingual-cased"
using the famous WikiANN dataset presented
in the "Cross-lingual Name Tagging and Linking for 282 Languages" [paper](https://aclanthology.org/P17-1178.pdf).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "bert-base-multilingual-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/mbert-base-albanian-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/mbert-base-albanian-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9719268816143276
* f1: 0.9192366826444787
* precision: 0.9171629669734704
* recall: 0.9213197969543148
| {"language": "sq", "widget": [{"text": "Varianti AY.4.2 \u00ebsht\u00eb m\u00eb i leht\u00eb p\u00ebr t'u transmetuar, thot\u00eb Francois Balu, drejtor i Institutit t\u00eb Gjenetik\u00ebs n\u00eb Lond\u00ebr."}]} | akdeniz27/mbert-base-albanian-cased-ner | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"sq",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# RoBERTa Base Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Base"
using CUAD dataset https://huggingface.co/datasets/cuad
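A minimal sketch, assuming the standard question-answering pipeline (the question and contract snippet are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="akdeniz27/roberta-base-cuad")
print(qa(question="What is the governing law of this contract?",
         context="This Agreement shall be governed by the laws of the State of Delaware."))
```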
Link for model checkpoint: https://github.com/TheAtticusProject/cuad
For the use of the model with CUAD: https://github.com/marshmellow77/cuad-demo
and https://huggingface.co/spaces/akdeniz27/contract-understanding-atticus-dataset-demo | {"language": "en", "datasets": ["cuad"]} | akdeniz27/roberta-base-cuad | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers |
# Model Card for RoBERTa Large Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Large" using CUAD dataset
# Model Details
## Model Description
The [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad), pronounced "kwad", is a dataset for legal contract review curated by the Atticus Project.
Contract review is a task about "finding needles in a haystack."
We find that Transformer models have nascent performance on CUAD, but that this performance is strongly influenced by model design and training dataset size. Despite some promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Developed by:** TheAtticusProject
- **Shared by [Optional]:** HuggingFace
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** RoBERTA
- **Parent Model:**RoBERTA Large
- **Resources for more information:**
- [GitHub Repo](https://github.com/TheAtticusProject/cuad)
- [Associated Paper](https://arxiv.org/abs/2103.06268)
# Uses
## Direct Use
Legal contract review
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information is needed for further recommendations.
# Training Details
## Training Data
See [cuad dataset card](https://huggingface.co/datasets/cuad) for further details
## Training Procedure
More information needed
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
#### Extra Data
Researchers may be interested in several gigabytes of unlabeled contract pretraining data, which is available [here](https://drive.google.com/file/d/1of37X0hAhECQ3BN_004D8gm6V88tgZaB/view?usp=sharing).
### Factors
More information needed
### Metrics
More information needed
## Results
We [provide checkpoints](https://zenodo.org/record/4599830) for three of the best models fine-tuned on CUAD: RoBERTa-base (~100M parameters), RoBERTa-large (~300M parameters), and DeBERTa-xlarge (~900M parameters).
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
The HuggingFace [Transformers](https://huggingface.co/transformers) library. It was tested with Python 3.8, PyTorch 1.7, and Transformers 4.3/4.4.
# Citation
**BibTeX:**
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={NeurIPS},
year={2021}
}
# Glossary [optional]
More information needed
# More Information [optional]
For more details about CUAD and legal contract review, see the [Atticus Project website](https://www.atticusprojectai.org/cuad).
# Model Card Authors [optional]
TheAtticusProject
# Model Card Contact
[TheAtticusProject](https://www.atticusprojectai.org/), in collaboration with the Ezi Ozoani and the HuggingFace Team
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/roberta-large-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("akdeniz27/roberta-large-cuad")
```
</details>
| {"language": "en", "datasets": ["cuad"]} | akdeniz27/roberta-large-cuad | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"arxiv:2103.06268",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | # Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned version of "xlm-roberta-base"
(a multilingual version of RoBERTa)
using a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "xlm-roberta-base"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 2
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("<your text here>")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9919343118732742
* f1: 0.9492100796448622
* precision: 0.9407349896480332
* recall: 0.9578392621870883
| {"language": "tr", "widget": [{"text": "Mustafa Kemal Atat\u00fcrk 19 May\u0131s 1919'da Samsun'a \u00e7\u0131kt\u0131."}]} | akdeniz27/xlm-roberta-base-turkish-ner | null | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | akera/sunbird-en-mul | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akey3/DialoGPT-small-Rick | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/AnimeGANv2-ONNX | null | [
"onnx",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/AnimeGANv2-pytorch | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/ArcaneGANv0.2 | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/ArcaneGANv0.3 | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/ArcaneGANv0.4 | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/BlendGan | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/CLIP-prefix-captioning-COCO-weights | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/CLIP-prefix-captioning-conceptual-weights | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/CLIPasso | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {"license": "mit"} | akhaliq/Demucs | null | [
"license:mit",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/JoJoGAN-jojo | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/JoJoGAN_e4e_ffhq_encode | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/Omnivore | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/Stylegan-ffhq-vintage | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
object-detection | null |
<div align="left">
## You Only Look Once for Panoptic Driving Perception
> [**You Only Look Once for Panoptic Driving Perception**](https://arxiv.org/abs/2108.11250)
>
> by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm)
>
> *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))*
---
### The Illustration of YOLOP

### Contributions
* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the `BDD100K` dataset.
* We design ablation experiments to verify the effectiveness of our multi-task scheme, showing that the three tasks can be learned jointly without tedious alternating optimization.
### Results
#### Traffic Object Detection Result
| Model | Recall(%) | mAP50(%) | Speed(fps) |
| -------------- | --------- | -------- | ---------- |
| `Multinet` | 81.3 | 60.2 | 8.6 |
| `DLT-Net` | 89.4 | 68.4 | 9.3 |
| `Faster R-CNN` | 77.2 | 55.6 | 5.3 |
| `YOLOv5s` | 86.8 | 77.2 | 82 |
| `YOLOP(ours)` | 89.2 | 76.5 | 41 |
#### Drivable Area Segmentation Result
| Model | mIOU(%) | Speed(fps) |
| ------------- | ------- | ---------- |
| `Multinet` | 71.6 | 8.6 |
| `DLT-Net` | 71.3 | 9.3 |
| `PSPNet` | 89.6 | 11.1 |
| `YOLOP(ours)` | 91.5 | 41 |
#### Lane Detection Result:
| Model | mIOU(%) | IOU(%) |
| ------------- | ------- | ------ |
| `ENet` | 34.12 | 14.64 |
| `SCNN` | 35.79 | 15.84 |
| `ENet-SAD` | 36.56 | 16.02 |
| `YOLOP(ours)` | 70.50 | 26.20 |
#### Ablation Studies 1: End-to-end v.s. Step-by-step:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) |
| --------------- | --------- | ----- | ------- | ----------- | ------ |
| `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 |
| `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 |
| `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 |
| `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 |
| `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 |
#### Ablation Studies 2: Multi-task v.s. Single task:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) |
| --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- |
| `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 |
| `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 |
| `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 |
| `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 |
**Notes**:
- The works we have used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf?utm_campaign=affiliate-ir-Optimise%20media%28%20South%20East%20Asia%29%20Pte.%20ltd._156_-99_national_R_all_ACQ_cpa_en&utm_content=&utm_source=%20388939), [code](https://github.com/MarvinTeichmann/MultiNet)), `DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)), `Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf), [code](https://github.com/ShaoqingRen/faster_rcnn)), `YOLOv5s` ([code](https://github.com/ultralytics/yolov5)), `PSPNet` ([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf), [code](https://github.com/hszhao/PSPNet)), `ENet` ([paper](https://arxiv.org/pdf/1606.02147.pdf), [code](https://github.com/osmr/imgclsmob)), `SCNN` ([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322), [code](https://github.com/XingangPan/SCNN)) and `SAD-ENet` ([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf), [code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful works.
- In Table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the whole network. So the algorithm (first train only the Encoder and Detect head; then freeze them and train the two Segmentation heads; finally train the entire network jointly on all three tasks) can be marked as ED-S-W, and likewise for the others.
---
### Visualization
#### Traffic Object Detection Result

#### Drivable Area Segmentation Result

#### Lane Detection Result

**Notes**:
- The lane detection visualization has been post-processed with quadratic curve fitting; a sketch of this step is shown below.
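A sketch of that quadratic-fitting step (our illustration, not code from this repo), assuming a binary lane mask produced by the lane-line head:

```python
import numpy as np

def fit_lane_quadratic(lane_mask):
    """Fit x = a*y**2 + b*y + c through the predicted lane pixels (illustrative post-process)."""
    ys, xs = np.nonzero(lane_mask)            # pixel coordinates of the lane-line mask
    coeffs = np.polyfit(ys, xs, deg=2)        # quadratic fit, x as a function of y
    y_fit = np.arange(ys.min(), ys.max() + 1)
    x_fit = np.polyval(coeffs, y_fit)
    return np.stack([x_fit, y_fit], axis=1)   # smoothed lane points used for drawing
```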
---
### Project Structure
```python
├─inference
│ ├─images # inference images
│ ├─output # inference result
├─lib
│ ├─config/default # configuration of training and validation
│ ├─core
│ │ ├─activations.py # activation function
│ │ ├─evaluate.py # calculation of metric
│ │ ├─function.py # training and validation of model
│ │ ├─general.py # calculation of metric, nms, conversion of data-format, visualization
│ │ ├─loss.py # loss function
│ │ ├─postprocess.py # postprocess(refine da-seg and ll-seg, unrelated to paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py # Superclass dataset,general function
│ │ ├─bdd.py # Subclass dataset,specific function
│ │ ├─hust.py # Subclass dataset(Campus scene, unrelated to paper)
│ │ ├─convect.py
│ │ ├─DemoDataset.py # demo dataset(image, video and stream)
│ ├─models
│ │ ├─YOLOP.py # Setup and Configuration of model
│ │ ├─light.py # Model lightweight(unrelated to paper, zwt)
│ │ ├─commom.py # calculation module
│ ├─utils
│ │ ├─augmentations.py # data augmentation
│ │ ├─autoanchor.py # auto anchor(k-means)
│ │ ├─split_dataset.py # (Campus scene, unrelated to paper)
│ │ ├─utils.py # logging, device_select, time_measure, optimizer_select, model_save & initialize, distributed training
│ ├─run
│ │ ├─dataset/training time # Visualization, logging and model_save
├─tools
│ │ ├─demo.py # demo(folder、camera)
│ │ ├─test.py
│ │ ├─train.py
├─toolkits
│ │ ├─depoly # Deployment of model
├─weights # Pretraining model
```
---
### Requirement
This codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+:
```
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
```
See `requirements.txt` for additional dependencies and version requirements.
```setup
pip install -r requirements.txt
```
### Data preparation
#### Download
- Download the images from [images](https://bdd-data.berkeley.edu/).
- Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing).
- Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing).
- Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing).
We recommend the dataset directory structure to be the following:
```
# The id represent the correspondence relation
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val
```
Update your dataset path in `./lib/config/default.py`.
### Training
You can set the training configuration in `./lib/config/default.py` (including the loading of a preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, and batch_size).
If you want to try alternating optimization or to train the model for a single task, set the corresponding configuration in `./lib/config/default.py` to `True`. (In the following, all configurations are `False`, which means the multiple tasks are trained end to end.)
```python
# Alternating optimization
_C.TRAIN.SEG_ONLY = False           # Only train the two segmentation branches
_C.TRAIN.DET_ONLY = False           # Only train the detection branch
_C.TRAIN.ENC_SEG_ONLY = False       # Only train the encoder and the two segmentation branches
_C.TRAIN.ENC_DET_ONLY = False       # Only train the encoder and the detection branch
# Single task
_C.TRAIN.DRIVABLE_ONLY = False      # Only train the da_segmentation task
_C.TRAIN.LANE_ONLY = False          # Only train the ll_segmentation task
_C.TRAIN.DET_ONLY = False           # Only train the detection task
```
Start training:
```shell
python tools/train.py
```
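If you go the alternating-optimization route, one possible workflow is sketched below; it is only an illustration (the flag names come from the snippet above, and each step requires editing the config by hand), not an official training recipe:
```shell
# Sketch of an alternating-optimization workflow (edit ./lib/config/default.py
# between steps; illustrative, not an official recipe).

# Step 1: train the encoder and the two segmentation branches
#         (_C.TRAIN.ENC_SEG_ONLY = True)
python tools/train.py

# Step 2: train the encoder and the detection branch
#         (_C.TRAIN.ENC_DET_ONLY = True), resuming from the checkpoint
#         saved in the run directory
python tools/train.py

# Step 3: set all *_ONLY flags back to False and fine-tune end to end
python tools/train.py
```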
### Evaluation
You can set the evaluation configuration in `./lib/config/default.py` (including: batch size and the threshold values for NMS).
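For orientation, the evaluation-related entries look roughly like the following; the key names and values here are placeholders, so check the file for the actual ones:
```python
# Illustrative evaluation settings (placeholder names/values --
# consult ./lib/config/default.py for the real ones).
_C.TEST.BATCH_SIZE_PER_GPU = 24      # evaluation batch size
_C.TEST.NMS_CONF_THRESHOLD = 0.001   # confidence threshold applied before NMS
_C.TEST.NMS_IOU_THRESHOLD = 0.6      # IoU threshold used by NMS
```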
Start evaluating:
```shell
python tools/test.py --weights weights/End-to-end.pth
```
### Demo Test
We provide two testing methods.
#### Folder
Store your images or videos in the folder passed to `--source`; the inference results will be saved to `--save-dir`:
```shell
python tools/demo.py --source inference/images
```
#### Camera
If a camera is connected to your computer, you can set `--source` to the camera index (the default is 0):
```shell
python tools/demo.py --source 0
```
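If you are unsure which index your camera is mapped to, a quick check with OpenCV (assuming `opencv-python` is installed, as the demo requires) can help:
```python
import cv2

# Probe the first few camera indices and report which ones can be opened.
for idx in range(3):
    cap = cv2.VideoCapture(idx)
    status = "available" if cap.isOpened() else "not available"
    print(f"camera {idx}: {status}")
    cap.release()
```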
### Deployment
Our model runs inference in real time on a `Jetson TX2`, with a `ZED camera` capturing images, and we use `TensorRT` for acceleration. Code for model deployment and inference is provided in `./toolkits/deploy`.
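The code in `./toolkits/deploy` is the reference for deployment. As a rough illustration of the first step of such a pipeline, the sketch below exports the PyTorch model to ONNX before TensorRT conversion; the `get_net` helper, the checkpoint key and the input resolution are assumptions, so adapt them to the actual code in this repo:
```python
import torch
from lib.config import cfg        # assumed import path
from lib.models import get_net    # assumed helper that builds the YOLOP network

# Build the network and load the released checkpoint (key name is an assumption).
model = get_net(cfg)
checkpoint = torch.load("weights/End-to-end.pth", map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])
model.eval()

# Export to ONNX with an assumed 640x384 input resolution.
dummy = torch.randn(1, 3, 384, 640)
torch.onnx.export(
    model, dummy, "yolop.onnx",
    input_names=["image"],
    output_names=["det", "drive_area_seg", "lane_line_seg"],
    opset_version=11,
)
# The resulting yolop.onnx can then be converted to a TensorRT engine, e.g.:
#   trtexec --onnx=yolop.onnx --saveEngine=yolop.trt
```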
## Citation
If you find our paper and code useful for your research, please consider giving the repo a star and citing our work:
```BibTeX
@misc{2108.11250,
Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang},
Title = {YOLOP: You Only Look Once for Panoptic Driving Perception},
Year = {2021},
Eprint = {arXiv:2108.11250},
}
```
| {"tags": ["object-detection"]} | akhaliq/YOLOP | null | [
"object-detection",
"arxiv:2108.11250",
"arxiv:1612.07695",
"arxiv:1606.02147",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | akhaliq/frame-interpolation-film-imagenet-vgg-verydeep-19 | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | keras | {} | akhaliq/frame-interpolation-film-style | null | [
"keras",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/jojo-gan-art | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/jojo-gan-jinx | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/jojo-gan-spiderverse | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/jojogan-arcane | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/jojogan-disney | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {"license": "mit"} | akhaliq/jojogan-sketch | null | [
"license:mit",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/jojogan-stylegan2-ffhq-config-f | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhaliq/jojogan_dlib | null | [
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | akhilsr/fake_news_detection | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | akhooli/gpt2-ar-poetry-aub | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {} | akhooli/gpt2-ar-poetry-aub_m | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |