modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---|
absa/classifier-rest-0.2.1 | 2021-05-19T11:37:38.000Z | [
"tf",
"bert",
"transformers"
]
| [
".gitattributes",
"callbacks.bin",
"config.json",
"experiment.log",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| absa | 16 | transformers | ||
absa/classifier-rest-0.2 | 2021-05-19T11:37:54.000Z | [
"tf",
"bert",
"transformers"
]
| [
".gitattributes",
"callbacks.bin",
"config.json",
"experiment.log",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| absa | 6,153 | transformers | ||
acoadmarmon/un-ner | 2021-05-25T00:22:33.000Z | []
| [
".gitattributes"
]
| acoadmarmon | 0 | |||
activebus/BERT-DK_laptop | 2021-05-18T23:00:58.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| activebus | 32 | transformers | # ReviewBERT
BERT (post-)trained on a review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_laptop` is trained on a 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`.
## Model Description
The original model is `BERT-base-uncased`, trained on Wikipedia+BookCorpus.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
`BERT-DK_laptop` is trained on a 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`.
## Instructions
Loading the post-trained weights is as simple as:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-DK_laptop")
model = AutoModel.from_pretrained("activebus/BERT-DK_laptop")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
activebus/BERT-DK_rest | 2021-05-18T23:02:24.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| activebus | 30 | transformers | # ReviewBERT
BERT (post-)trained on a review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_rest` is trained on 1GB of reviews covering 19 types of restaurants from Yelp.
## Model Description
The original model is `BERT-base-uncased`, trained on Wikipedia+BookCorpus.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-DK_rest")
model = AutoModel.from_pretrained("activebus/BERT-DK_rest")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
activebus/BERT-PT_laptop | 2021-05-18T23:03:36.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| activebus | 38 | transformers | # ReviewBERT
BERT (post-)trained on a review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_laptop` is trained on a 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`.
`BERT-PT_*` additionally uses SQuAD 1.1.
## Model Description
The original model is `BERT-base-uncased`, trained on Wikipedia+BookCorpus.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-PT_laptop")
model = AutoModel.from_pretrained("activebus/BERT-PT_laptop")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
activebus/BERT-PT_rest | 2021-05-18T23:04:31.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| activebus | 20 | transformers | # ReviewBERT
BERT (post-)trained on a review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_rest` is trained on 1GB of reviews covering 19 types of restaurants from Yelp.
`BERT-PT_*` additionally uses SQuAD 1.1.
## Model Description
The original model is `BERT-base-uncased`, trained on Wikipedia+BookCorpus.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-PT_rest")
model = AutoModel.from_pretrained("activebus/BERT-PT_rest")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
activebus/BERT-XD_Review | 2021-05-19T11:38:28.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| activebus | 108 | transformers | # ReviewBERT
BERT (post-)trained on a review corpus to understand sentiment, opinions and various e-commerce aspects.
Please visit https://github.com/howardhsu/BERT-for-RRC-ABSA for details.
`BERT-XD_Review` is a cross-domain (beyond just `laptop` and `restaurant`) language model, where each training example comes from a single product or restaurant with the same rating. It is post-trained (fine-tuned) on a combination of 5-core Amazon reviews and all Yelp data, expected to be about 22 GB in total, for 4 epochs starting from `bert-base-uncased`.
The preprocessing code is available [here](https://github.com/howardhsu/BERT-for-RRC-ABSA/transformers).
## Model Description
The original model is `BERT-base-uncased`.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-XD_Review")
model = AutoModel.from_pretrained("activebus/BERT-XD_Review")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
`BERT_Review` is expected to have similar performance on domain-specific tasks (such as aspect extraction) as `BERT-DK`, but much better on general tasks such as aspect sentiment classification (different domains mostly share similar sentiment words).
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
|
activebus/BERT_Review | 2021-05-18T23:05:54.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| activebus | 541 | transformers | # ReviewBERT
BERT (post-)trained on a review corpus to understand sentiment, opinions and various e-commerce aspects.
`BERT_Review` is a cross-domain (beyond just `laptop` and `restaurant`) language model where each training example is drawn from randomly mixed domains. It is post-trained (fine-tuned) on a combination of 5-core Amazon reviews and all Yelp data, expected to be about 22 GB in total, for 4 epochs starting from `bert-base-uncased`.
The preprocessing code is available [here](https://github.com/howardhsu/BERT-for-RRC-ABSA/transformers).
## Model Description
The original model is `BERT-base-uncased`, trained on Wikipedia+BookCorpus.
Models are post-trained on the [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and the [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT_Review")
model = AutoModel.from_pretrained("activebus/BERT_Review")
```
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
`BERT_Review` is expected to have similar performance on domain-specific tasks (such as aspect extraction) as `BERT-DK`, but much better on general tasks such as aspect sentiment classification (different domains mostly share similar sentiment words).
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
adalbertojunior/PTT5-SMALL-SUM | 2020-12-11T21:31:35.000Z | [
"pytorch",
"t5",
"seq2seq",
"pt",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| adalbertojunior | 30 | transformers | ---
language: pt
---
# PTT5-SMALL-SUM
## Model description
This model was trained to summarize texts in Portuguese.
It is based on `unicamp-dl/ptt5-small-portuguese-vocab`.
#### How to use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('adalbertojunior/PTT5-SMALL-SUM')
t5 = T5ForConditionalGeneration.from_pretrained('adalbertojunior/PTT5-SMALL-SUM')
text="Esse é um exemplo de sumarização."
input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True)
generated_ids = t5.generate(
input_ids=input_ids,
num_beams=1,
max_length=40,
#repetition_penalty=2.5
).squeeze()
predicted_span = tokenizer.decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
|
adalbertojunior/bert-prompt-sim-pt | 2021-05-18T23:07:02.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"sentence_bert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adalbertojunior | 6 | transformers | ||
adalbertojunior/bert_regression | 2021-05-19T11:38:41.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adalbertojunior | 15 | transformers | ||
adamlin/ClinicalBert_all_notes | 2019-12-25T17:08:00.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adamlin | 19 | transformers | ||
adamlin/ClinicalBert_disch | 2019-12-25T17:08:32.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adamlin | 19 | transformers | ||
adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers | 2019-12-25T17:05:13.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adamlin | 19 | transformers | ||
adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers | 2019-12-25T17:08:38.000Z | [
"pytorch",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adamlin | 13 | transformers | ||
adamlin/bert-distil-chinese | 2021-05-19T11:39:14.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adamlin | 72 | transformers | ||
adamlin/csp | 2021-06-02T08:10:21.000Z | [
"pytorch",
"mt5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"rng_state.pth",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
]
| adamlin | 14 | transformers | |
adamlin/cup | 2021-06-02T08:03:44.000Z | []
| [
".gitattributes"
]
| adamlin | 0 | |||
adamlin/distilbert-base-cased-sgd_qa-step5000 | 2021-02-09T15:02:35.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| adamlin | 6 | transformers | |
adamlin/tmp | 2021-06-04T06:29:41.000Z | []
| [
".gitattributes"
]
| adamlin | 0 | |||
adamlin/tmpjstpbdt1 | 2021-04-29T15:38:24.000Z | []
| [
".gitattributes"
]
| adamlin | 0 | |||
adamlin/tmppdzei5bc | 2021-05-09T06:56:14.000Z | []
| [
".gitattributes"
]
| adamlin | 0 | |||
adamlin/tus_21-delex_5000 | 2021-04-08T14:25:30.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| adamlin | 7 | transformers | |
adelevie/distilbert-gsa-eula-opp | 2020-08-20T13:31:35.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| adelevie | 13 | transformers | |
adilism/wav2vec2-large-xlsr-kazakh | 2021-04-01T09:55:48.000Z | [
"pytorch",
"wav2vec2",
"kk",
"dataset:kazakh_speech_corpus",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"utils.py",
"vocab.json"
]
| adilism | 8 | transformers | ---
language: kk
datasets:
- kazakh_speech_corpus
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-XLSR-53 Kazakh by adilism
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Kazakh Speech Corpus v1.1
type: kazakh_speech_corpus
args: kk
metrics:
- name: Test WER
type: wer
value: 19.65
---
# Wav2Vec2-Large-XLSR-53-Kazakh
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for Kazakh ASR using the [Kazakh Speech Corpus v1.1](https://issai.nu.edu.kz/kz-speech-corpus/?version=1.1)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from utils import get_test_dataset
test_dataset = get_test_dataset("ISSAI_KSC_335RS_v1.1")
processor = Wav2Vec2Processor.from_pretrained("adilism/wav2vec2-large-xlsr-kazakh")
model = Wav2Vec2ForCTC.from_pretrained("adilism/wav2vec2-large-xlsr-kazakh")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the test set of [Kazakh Speech Corpus v1.1](https://issai.nu.edu.kz/kz-speech-corpus/?version=1.1). To evaluate, download the [archive](https://www.openslr.org/resources/102/ISSAI_KSC_335RS_v1.1_flac.tar.gz), untar and pass the path to data to `get_test_dataset` as below:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from utils import get_test_dataset
test_dataset = get_test_dataset("ISSAI_KSC_335RS_v1.1")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("adilism/wav2vec2-large-xlsr-kazakh")
model = Wav2Vec2ForCTC.from_pretrained("adilism/wav2vec2-large-xlsr-kazakh")
model.to("cuda")
# Preprocessing the datasets.
# We need to read the audio files as arrays
# Characters to strip from the reference transcripts before scoring
# (the list below is assumed; it mirrors the Kyrgyz card in this collection)
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", "—", "–", "”"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'

def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 19.65%
## Training
The Kazakh Speech Corpus v1.1 `train` dataset was used for training. |
adilism/wav2vec2-large-xlsr-kyrgyz | 2021-03-28T21:46:55.000Z | [
"pytorch",
"wav2vec2",
"ky",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| adilism | 7 | transformers | ---
language: ky
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-XLSR-53 Kyrgyz by adilism
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ky
type: common_voice
args: ky
metrics:
- name: Test WER
type: wer
value: 34.08
---
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ky", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("adilism/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("adilism/wav2vec2-large-xlsr-kyrgyz")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ky", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("adilism/wav2vec2-large-xlsr-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("adilism/wav2vec2-large-xlsr-kyrgyz")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", "—", "–", "”"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.08 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
aditeyabaral/Yashi-33k-small | 2021-06-08T07:54:45.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| aditeyabaral | 20 | transformers | ---
tags:
- conversational
---
# Model trained on WhatsApp conversations with Yashi |
aditeyabaral/Yashi-40k-small | 2021-06-08T07:54:58.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| aditeyabaral | 11 | transformers | ---
tags:
- conversational
---
# Model trained on WhatsApp conversations with Yashi |
aditeyabaral/Yashi-50k-small | 2021-06-08T04:55:21.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| aditeyabaral | 79 | transformers | ---
tags:
- conversational
---
# Model trained on WhatsApp conversations with Yashi |
aditeyabaral/Yashi-IG-aditeyabaral-main-pvt | 2021-06-09T16:26:46.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| aditeyabaral | 36 | transformers | ---
tags:
- conversational
---
# Model trained on Instagram conversations with Yashi from my account |
aditeyabaral/Yashi-IG-aditeyabaral-main | 2021-06-09T17:21:56.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| aditeyabaral | 18 | transformers | ---
tags:
- conversational
---
# Model trained on Instagram conversations with Yashi from my account |
adnankhawaja/RomanUBerta | 2021-05-29T20:40:45.000Z | []
| [
".gitattributes"
]
| adnankhawaja | 0 | |||
adresgezgini/Finetuned-SentiBERtr-Pos-Neg-Reviews | 2021-05-18T23:09:04.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"model_args.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| adresgezgini | 22 | transformers | |
adresgezgini/Turkish-GPT-2-Finetuned_digital_ads | 2021-05-21T11:52:06.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"all_results.json",
"config.json",
"eval_results.json",
"flax_model.msgpack",
"merges.docx",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| adresgezgini | 39 | transformers | |
adresgezgini/turkish-gpt-2 | 2021-05-21T11:53:09.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| adresgezgini | 207 | transformers | AdresGezgini Inc. R&D Center Turkish GPT-2 Model Trained with Turkish Wiki Corpus for 10 Epochs
|
adresgezgini/wav2vec-tr-lite-AG | 2021-03-30T06:10:16.000Z | [
"pytorch",
"wav2vec2",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"optimizer-002.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| adresgezgini | 8 | transformers | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Davut Emre TASAR
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
---
# wav2vec-tr-lite-AG
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
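The snippet above only loads the model; a minimal sketch of the remaining inference steps, mirroring the other XLSR cards in this collection (column names such as `path`, `speech` and `sentence` follow the Common Voice schema and are assumptions here):
```python
# Read the audio files as arrays and resample to 16 kHz
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```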
**Test Result**: 27.30 %
More information is available [here](https://adresgezgini.com).
|
adriansyahdr/adrBert-base-p1 | 2021-05-18T23:10:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
]
| adriansyahdr | 8 | transformers | |
adriansyahdr/adrBert-base-p2 | 2021-05-18T23:11:14.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"training_args.bin",
"vocab.txt"
]
| adriansyahdr | 8 | transformers | ||
adzcodez/TokenClassificationTest | 2021-03-16T14:18:09.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| adzcodez | 6 | transformers | distilbert-base-uncased finetuned on the conll2003 dataset for NER. |
aerkanc/electra-base-turkish-cased-discriminator | 2020-11-23T19:09:58.000Z | []
| [
".gitattributes"
]
| aerkanc | 0 | |||
af-ai-center/bert-base-swedish-uncased | 2021-05-18T23:12:14.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
]
| af-ai-center | 700 | transformers | |
af-ai-center/bert-large-swedish-uncased | 2021-05-18T23:14:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
]
| af-ai-center | 185 | transformers | |
aga11313/test | 2021-03-18T12:38:00.000Z | []
| [
".gitattributes"
]
| aga11313 | 0 | |||
agiagoulas/bert-pss | 2021-05-18T23:16:17.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin"
]
| agiagoulas | 21 | transformers | bert-base-uncased model trained on the tobacco800 dataset for the task of page-stream-segmentation.
[Link](https://github.com/agiagoulas/page-stream-segmentation) to the GitHub Repo with the model implementation. |
aheba31/blablabal | 2021-02-10T09:52:13.000Z | []
| [
".gitattributes"
]
| aheba31 | 0 | |||
ahmedattia143/roberta_squadv1_base | 2021-05-30T11:42:11.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ahmedattia143 | 298 | transformers | |
ahmednasserswe/sentence_distilbert | 2020-06-09T09:02:24.000Z | [
"pytorch",
"distilbert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentence_distilbert_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| ahmednasserswe | 10 | transformers | ||
ahotrod/albert_xxlargev1_squad2_512 | 2020-12-11T21:31:38.000Z | [
"pytorch",
"tf",
"albert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"albert_xxlargev1_sqd2_512.sh",
"config.json",
"loss_tensorboard.png",
"lrate_tensorboard.png",
"nbest_predictions_.json",
"null_odds_.json",
"nvidia-smi.png",
"predictions_.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin"
]
| ahotrod | 41,794 | transformers | ## Albert xxlarge version 1 language model fine-tuned on SQuAD2.0
### (updated 30Sept2020) with the following results:
```
exact: 86.11134506864315
f1: 89.35371214945009
total: 11873
HasAns_exact: 83.56950067476383
HasAns_f1: 90.06353312254078
HasAns_total: 5928
NoAns_exact: 88.64592094196804
NoAns_f1: 88.64592094196804
NoAns_total: 5945
best_exact: 86.11134506864315
best_exact_thresh: 0.0
best_f1: 89.35371214944985
best_f1_thresh: 0.0
```
### from script:
```
python ${EXAMPLES}/run_squad.py \
--model_type albert \
--model_name_or_path albert-xxlarge-v1 \
--do_train \
--do_eval \
--train_file ${SQUAD}/train-v2.0.json \
--predict_file ${SQUAD}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--max_steps 8144 \
--warmup_steps 814 \
--learning_rate 3e-5 \
--max_seq_length 512 \
--doc_stride 128 \
--per_gpu_train_batch_size 6 \
--gradient_accumulation_steps 8 \
--per_gpu_eval_batch_size 48 \
--fp16 \
--fp16_opt_level O1 \
--threads 12 \
--logging_steps 50 \
--save_steps 3000 \
--overwrite_output_dir \
--output_dir ${MODEL_PATH}
```
### using the following software & system:
```
Transformers: 3.1.0
PyTorch: 1.6.0
TensorFlow: 2.3.1
Python: 3.8.1
OS: Linux-5.4.0-48-generic-x86_64-with-glibc2.10
CPU/GPU: Intel i9-9900K / NVIDIA Titan RTX 24GB
```
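For inference, a minimal sketch using the `transformers` question-answering pipeline (the question and context below are illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
qa = pipeline(
    "question-answering",
    model="ahotrod/albert_xxlargev1_squad2_512",
    tokenizer="ahotrod/albert_xxlargev1_squad2_512",
)

# SQuAD2.0-style models may also predict "no answer" for unanswerable questions
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="Albert xxlarge version 1 was fine-tuned on SQuAD2.0 for extractive question answering.",
)
print(result["answer"], result["score"])
```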
|
ahotrod/electra_large_discriminator_squad2_512 | 2020-12-11T21:31:42.000Z | [
"pytorch",
"tf",
"electra",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"electra_large_squad2_512.sh",
"nvidia-smi.png",
"pytorch_model.bin",
"special_tokens_map.json",
"tensorboard_learning_rate.png",
"tensorboard_loss.png",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| ahotrod | 5,221 | transformers | ## ELECTRA_large_discriminator language model fine-tuned on SQuAD2.0
### with the following results:
```
"exact": 87.09677419354838,
"f1": 89.98343832723452,
"total": 11873,
"HasAns_exact": 84.66599190283401,
"HasAns_f1": 90.44759839056285,
"HasAns_total": 5928,
"NoAns_exact": 89.52060555088309,
"NoAns_f1": 89.52060555088309,
"NoAns_total": 5945,
"best_exact": 87.09677419354838,
"best_exact_thresh": 0.0,
"best_f1": 89.98343832723432,
"best_f1_thresh": 0.0
```
### from script:
```
python ${EXAMPLES}/run_squad.py \
--model_type electra \
--model_name_or_path google/electra-large-discriminator \
--do_train \
--do_eval \
--train_file ${SQUAD}/train-v2.0.json \
--predict_file ${SQUAD}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--warmup_steps 306 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--per_gpu_train_batch_size 8 \
--gradient_accumulation_steps 16 \
--per_gpu_eval_batch_size 128 \
--fp16 \
--fp16_opt_level O1 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--output_dir ${MODEL_PATH}
```
### using the following system & software:
```
Transformers: 2.11.0
PyTorch: 1.5.0
TensorFlow: 2.2.0
Python: 3.8.1
OS/Platform: Linux-5.3.0-59-generic-x86_64-with-glibc2.10
CPU/GPU: Intel i9-9900K / NVIDIA Titan RTX 24GB
```
|
ahotrod/roberta_large_squad2 | 2021-05-20T12:48:52.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"roberta_TRex_nvidia-smi.png",
"roberta_large_squad2.sh",
"special_tokens_map.json",
"tensorboard_loss.png",
"tensorboard_lrate.png",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| ahotrod | 191 | transformers | ## RoBERTa-large language model fine-tuned on SQuAD2.0
### with the following results:
```
"exact": 84.46896319380106,
"f1": 87.85388093408943,
"total": 11873,
"HasAns_exact": 81.37651821862349,
"HasAns_f1": 88.1560607844881,
"HasAns_total": 5928,
"NoAns_exact": 87.55256518082422,
"NoAns_f1": 87.55256518082422,
"NoAns_total": 5945,
"best_exact": 84.46896319380106,
"best_exact_thresh": 0.0,
"best_f1": 87.85388093408929,
"best_f1_thresh": 0.0
```
### from script:
```
python ${EXAMPLES}/run_squad.py \
--model_type roberta \
--model_name_or_path roberta-large \
--do_train \
--do_eval \
--train_file ${SQUAD}/train-v2.0.json \
--predict_file ${SQUAD}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--warmup_steps 1642 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--per_gpu_train_batch_size 8 \
--gradient_accumulation_steps 6 \
--per_gpu_eval_batch_size 48 \
--threads 12 \
--logging_steps 50 \
--save_steps 2000 \
--overwrite_output_dir \
--output_dir ${MODEL_PATH}
$@
```
### using the following system & software:
```
Transformers: 2.7.0
PyTorch: 1.4.0
TensorFlow: 2.1.0
Python: 3.7.7
OS/Platform: Linux-5.3.0-46-generic-x86_64-with-debian-buster-sid
CPU/GPU: Intel i9-9900K / NVIDIA Titan RTX 24GB
```
|
ai4bharat/indic-bert | 2021-04-12T09:06:47.000Z | [
"pytorch",
"albert",
"en",
"dataset:AI4Bharat IndicNLP Corpora",
"transformers",
"license:mit"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model",
"spiece.vocab",
"tf_model.ckpt.data-00000-of-00001",
"tf_model.ckpt.index",
"tf_model.ckpt.meta"
]
| ai4bharat | 2,272 | transformers | ---
language: en
license: mit
datasets:
- AI4Bharat IndicNLP Corpora
---
# IndicBERT
IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages. It is pre-trained on our novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. IndicBERT has far fewer parameters than other multilingual models (mBERT, XLM-R, etc.) while achieving performance on par with or better than these models.
The 12 languages covered by IndicBERT are: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu.
The code can be found [here](https://github.com/divkakwani/indic-bert). For more information, check out our [project page](https://indicnlp.ai4bharat.org/) or our [paper](https://indicnlp.ai4bharat.org/papers/arxiv2020_indicnlp_corpus.pdf).
## Pretraining Corpus
We pre-trained indic-bert on AI4Bharat's monolingual corpus. The corpus has the following distribution of languages:
| Language | as | bn | en | gu | hi | kn | |
| ----------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------- |
| **No. of Tokens** | 36.9M | 815M | 1.34B | 724M | 1.84B | 712M | |
| **Language** | **ml** | **mr** | **or** | **pa** | **ta** | **te** | **all** |
| **No. of Tokens** | 767M | 560M | 104M | 814M | 549M | 671M | 8.9B |
## Evaluation Results
IndicBERT is evaluated on IndicGLUE and some additional tasks. The results are summarized below. For more details about the tasks, refer to our [official repo](https://github.com/divkakwani/indic-bert).
#### IndicGLUE
Task | mBERT | XLM-R | IndicBERT
-----| ----- | ----- | ------
News Article Headline Prediction | 89.58 | 95.52 | **95.87**
Wikipedia Section Title Prediction| **73.66** | 66.33 | 73.31
Cloze-style multiple-choice QA | 39.16 | 27.98 | **41.87**
Article Genre Classification | 90.63 | 97.03 | **97.34**
Named Entity Recognition (F1-score) | **73.24** | 65.93 | 64.47
Cross-Lingual Sentence Retrieval Task | 21.46 | 13.74 | **27.12**
Average | 64.62 | 61.09 | **66.66**
#### Additional Tasks
Task | Task Type | mBERT | XLM-R | IndicBERT
-----| ----- | ----- | ------ | -----
BBC News Classification | Genre Classification | 60.55 | **75.52** | 74.60
IIT Product Reviews | Sentiment Analysis | 74.57 | **78.97** | 71.32
IITP Movie Reviews | Sentiment Analysis | 56.77 | **61.61** | 59.03
Soham News Article | Genre Classification | 80.23 | **87.6** | 78.45
Midas Discourse | Discourse Analysis | 71.20 | **79.94** | 78.44
iNLTK Headlines Classification | Genre Classification | 87.95 | 93.38 | **94.52**
ACTSA Sentiment Analysis | Sentiment Analysis | 48.53 | 59.33 | **61.18**
Winograd NLI | Natural Language Inference | 56.34 | 55.87 | **56.34**
Choice of Plausible Alternative (COPA) | Natural Language Inference | 54.92 | 51.13 | **58.33**
Amrita Exact Paraphrase | Paraphrase Detection | **93.81** | 93.02 | 93.75
Amrita Rough Paraphrase | Paraphrase Detection | 83.38 | 82.20 | **84.33**
Average | | 69.84 | **74.42** | 73.66
\* Note: all models have been restricted to a max_seq_length of 128.
## Downloads
The model can be downloaded [here](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/models/indic-bert-v1.tar.gz). Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from [Huggingface](https://huggingface.co/ai4bharat/indic-bert).
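As a quick start, the checkpoint can also be loaded directly with the `transformers` auto classes; a minimal sketch (the example sentence is illustrative only):
```python
from transformers import AutoModel, AutoTokenizer

# Download the pretrained IndicBERT (ALBERT) checkpoint from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert")
model = AutoModel.from_pretrained("ai4bharat/indic-bert")

# Encode a sample sentence and inspect the contextual embeddings
inputs = tokenizer("IndicBERT covers 12 major Indian languages.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```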
## Citing
If you are using any of the resources, please cite the following article:
```
@inproceedings{kakwani2020indicnlpsuite,
title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}},
author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
year={2020},
booktitle={Findings of EMNLP},
}
```
We would like to hear from you if:
- You are using our resources. Please let us know how you are putting these resources to use.
- You have any feedback on these resources.
## License
The IndicBERT code (and models) are released under the MIT License.
## Contributors
- Divyanshu Kakwani
- Anoop Kunchukuttan
- Gokul NC
- Satish Golla
- Avik Bhattacharyya
- Mitesh Khapra
- Pratyush Kumar
This work is the outcome of a volunteer effort as part of [AI4Bharat initiative](https://ai4bharat.org).
## Contact
- Anoop Kunchukuttan ([[email protected]](mailto:[email protected]))
- Mitesh Khapra ([[email protected]](mailto:[email protected]))
- Pratyush Kumar ([[email protected]](mailto:[email protected]))
|
|
aicast/bert_finetuning_test | 2021-05-18T23:17:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results_mrpc.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json"
]
| aicast | 21 | transformers | |
aidan-plenert-macdonald/gpt2-lv | 2021-05-21T11:53:49.000Z | [
"tf",
"gpt2",
"transformers"
]
| [
".gitattributes",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| aidan-plenert-macdonald | 11 | transformers | ||
aidan-plenert-macdonald/model_lv_custom | 2021-05-21T11:54:18.000Z | [
"tf",
"gpt2",
"transformers"
]
| [
".gitattributes",
"config.json",
"merges.txt",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| aidan-plenert-macdonald | 11 | transformers | ||
aidenz/bert | 2021-04-14T02:06:52.000Z | []
| [
".gitattributes"
]
| aidenz | 0 | |||
aimiekhe/yummv1 | 2021-06-06T02:38:56.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| aimiekhe | 770 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
aimiekhe/yummv2 | 2021-06-06T03:04:24.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| aimiekhe | 352 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
aing/demon-slayer-mugen-train-movie | 2021-04-30T14:12:50.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/demon-slayer-mugen-train | 2021-04-29T17:59:15.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/demon-slayer-the-movie-mugen-train | 2021-04-25T17:42:46.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/demon-slayer | 2021-04-30T10:06:47.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/full-play-demon-slayer-the-movie-mugen-train | 2021-04-29T16:54:39.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/full-stream-demon-slayer-the-movie-mugen-train | 2021-04-28T14:29:12.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/hd-movie-demon-slayer-the-movie-mugen-train | 2021-04-27T06:37:35.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/jedini-izlaz | 2021-05-01T18:24:50.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/pejwan | 2021-04-24T17:43:32.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
aing/watch-demon-slayer-the-movie-mugen-train-2021 | 2021-04-27T14:35:54.000Z | []
| [
".gitattributes",
"README.md"
]
| aing | 0 | |||
ainize/GPT2-futurama-script | 2021-05-21T11:58:18.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ainize | 10 | transformers | |
ainize/gpt2-mcu-script-large | 2021-05-21T12:03:49.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ainize | 14 | transformers | |
ainize/gpt2-rnm-with-only-rick | 2021-05-21T12:06:44.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ainize | 17 | transformers | ### Model information
Fine tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Base model: e-tony/gpt2-rnm
Epoch: 1
Train runtime: 3.4982 secs
Loss: 3.0894
Training notebook: [Colab](https://colab.research.google.com/drive/1RawVxulLETFicWMY0YANUdP-H-e7Eeyc)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
ainize/gpt2-rnm-with-season-1 | 2021-05-21T12:08:00.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ainize | 9 | transformers | ### Model information
Fine tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Base model: e-tony/gpt2-rnm
Epoch: 3
Train runtime: 7.1779 secs
Loss: 2.5694
Training notebook: [Colab](https://colab.research.google.com/drive/12NvO1SIZevF8ybJqfN9O21I3i9bU1dOO#scrollTo=KUsyn02WWmf5)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
ainize/gpt2-rnm-with-spongebob | 2021-05-21T12:09:02.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ainize | 118 | transformers | ### Model information
Fine tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Fine tuning data 2: https://www.kaggle.com/mikhailgaerlan/spongebob-squarepants-completed-transcripts
Base model: e-tony/gpt2-rnm
Epoch: 2
Train runtime: 790.0612 secs
Loss: 2.8569
API page: [Ainize](https://ainize.ai/fpem123/GPT2-Rick-N-Morty-with-SpongeBob?branch=master)
Demo page: [End-point](https://master-gpt2-rick-n-morty-with-sponge-bob-fpem123.endpoint.ainize.ai/)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
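Besides the hosted endpoint above, the checkpoint can be loaded locally with the standard GPT-2 classes; a minimal generation sketch (the prompt is illustrative only):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned checkpoint from the Hugging Face Hub
tokenizer = GPT2Tokenizer.from_pretrained("ainize/gpt2-rnm-with-spongebob")
model = GPT2LMHeadModel.from_pretrained("ainize/gpt2-rnm-with-spongebob")

# Sample a short script continuation from an illustrative prompt
input_ids = tokenizer.encode("Rick: Morty, listen.", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=60, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```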
|
ainize/gpt2-simpsons-script-large | 2021-05-21T12:13:28.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ainize | 11 | transformers | |
ainize/gpt2-spongebob-script-large | 2021-05-21T12:18:42.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ainize | 8 | transformers | ### Model information
Fine tuning data: https://www.kaggle.com/mikhailgaerlan/spongebob-squarepants-completed-transcripts
License: CC-BY-SA
Base model: gpt-2 large
Epoch: 50
Train runtime: 14723.0716 secs
Loss: 0.0268
API page: [Ainize](https://ainize.ai/fpem123/GPT2-Spongebob?branch=master)
Demo page: [End-point](https://master-gpt2-spongebob-fpem123.endpoint.ainize.ai/)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp) |
airKlizz/bart-large-cnn-multi-en-wiki-news | 2020-06-10T08:13:05.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 109 | transformers | |
airKlizz/bart-large-multi-combine-wiki-news | 2020-06-11T10:57:33.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 20 | transformers | |
airKlizz/bart-large-multi-de-wiki-news | 2020-06-10T11:38:23.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 31 | transformers | |
airKlizz/bart-large-multi-en-wiki-news | 2020-06-09T14:41:16.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 18 | transformers | |
airKlizz/bart-large-multi-fr-wiki-news | 2020-06-10T08:43:35.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 20 | transformers | |
airKlizz/bert2bert-multi-de-wiki-news | 2020-06-10T08:36:47.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| airKlizz | 22 | transformers | |
airKlizz/bert2bert-multi-en-wiki-news | 2020-08-11T09:05:53.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| airKlizz | 10 | transformers | |
airKlizz/bert2bert-multi-fr-wiki-news | 2020-08-11T09:05:55.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| airKlizz | 19 | transformers | |
airKlizz/distilbart-12-3-multi-combine-wiki-news | 2020-08-26T10:25:17.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 20 | transformers | |
airKlizz/distilbart-12-6-multi-combine-wiki-news | 2020-08-21T07:35:00.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 26 | transformers | |
airKlizz/distilbart-3-3-multi-combine-wiki-news | 2020-08-21T12:24:19.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 20 | transformers | |
airKlizz/distilbart-6-12-multi-combine-wiki-news | 2020-08-22T07:50:42.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 19 | transformers | |
airKlizz/distilbart-6-6-multi-combine-wiki-news | 2020-08-22T07:53:04.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 17 | transformers | |
airKlizz/distilbart-multi-combine-wiki-news | 2020-07-03T09:57:18.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| airKlizz | 22 | transformers | |
airKlizz/t5-base-multi-combine-wiki-news | 2020-06-10T18:34:41.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 13 | transformers | |
airKlizz/t5-base-multi-de-wiki-news | 2020-06-10T13:06:37.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 16 | transformers | |
airKlizz/t5-base-multi-en-wiki-news | 2020-06-10T08:14:46.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 16 | transformers | |
airKlizz/t5-base-multi-fr-wiki-news | 2020-06-10T08:26:38.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 22 | transformers | |
airKlizz/t5-base-with-title-multi-de-wiki-news | 2020-06-10T08:40:52.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 25 | transformers | |
airKlizz/t5-base-with-title-multi-en-wiki-news | 2020-06-10T08:16:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 19 | transformers | |
airKlizz/t5-base-with-title-multi-fr-wiki-news | 2020-06-10T08:28:43.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 18 | transformers | |
airKlizz/t5-small-multi-combine-wiki-news | 2020-07-04T14:25:03.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| airKlizz | 19 | transformers | |
airesearch/bert-base-multilingual-cased-finetuned | 2021-05-19T11:39:44.000Z | [
"bert",
"masked-lm",
"arxiv:1810.04805",
"arxiv:2101.09635",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json"
]
| airesearch | 23 | transformers | # Finetuned `bert-base-multilingual-cased` model on Thai sequence and token classification datasets
<br>
Finetuned mBERT (multilingual BERT) BASE model on Thai sequence and token classification datasets
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
We use the pretrained cross-lingual BERT model (mBERT) as proposed by [[Devlin et al., 2018]](https://arxiv.org/abs/1810.04805). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/bert-base-multilingual-cased)
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The example notebook demonstrating how to use the finetuned models for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
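For reference, a minimal loading sketch with the `transformers` Auto classes is shown below; the checkpoint name and task head here are assumptions for illustration only, so please take the exact finetuned checkpoint names from the thai2transformers repository.
```python
# Minimal sketch (illustrative checkpoint name) of loading a finetuned
# sequence-classification model; the exact checkpoints are listed in thai2transformers.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "airesearch/bert-base-multilingual-cased-finetuned"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("อาหารอร่อยมาก บริการดีเยี่ยม", return_tensors="pt")
predicted_class = model(**inputs).logits.argmax(dim=-1)
print(predicted_class)
```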
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
airesearch/wangchanberta-base-att-spm-uncased | 2021-03-26T08:59:22.000Z | [
"pytorch",
"camembert",
"masked-lm",
"arxiv:1907.11692",
"arxiv:1801.06146",
"arxiv:1808.06226",
"arxiv:2101.09635",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"sentencepiece.bpe.vocab",
"tokenizer_config.json"
]
| airesearch | 22,332 | transformers | ---
widget:
- text: "ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน<pad>"
---
# WangchanBERTa base model: `wangchanberta-base-att-spm-uncased`
<br>
Pretrained RoBERTa BASE model on assorted Thai texts (78.5 GB).
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting-started notebook for the WangchanBERTa models can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
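For a quick check outside the notebook, a minimal `fill-mask` sketch is shown below (an illustration only; the thai2transformers repository documents the recommended tokenizer handling):
```python
# Minimal fill-mask sketch using the same example text as the widget above;
# see thai2transformers for the recommended preprocessing and tokenizer setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="airesearch/wangchanberta-base-att-spm-uncased")
print(fill_mask("ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน<pad>"))
```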
<br>
## Training data
The `wangchanberta-base-att-spm-uncased` model was pretrained on an assorted Thai text dataset. The total size of the uncompressed text is 78.5GB.
### Preprocessing
Texts are preprocessed with the following rules (a small illustrative sketch follows the list):
- Replace HTML forms of characters with the actual characters, such as `&nbsp;` with a space and `<br />` with a line break [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Remove empty brackets ((), {}, and []) that sometimes come up as a result of text extraction, such as from Wikipedia.
- Replace line breaks with spaces.
- Replace more than one space with a single space.
- Remove more than 3 repetitive characters, for example reducing ดีมากกก to ดีมาก [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU)'s `newmm` dictionary-based maximal matching tokenizer.
- Replace repetitive words; this is done post-tokenization, unlike [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146), since there is no delimitation by space in Thai as in English.
- Replace spaces with <_>. The SentencePiece tokenizer combines the spaces with other tokens. Since spaces serve as punctuation in Thai, such as marking sentence boundaries similar to periods in English, combining them with other tokens would omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with <_>.
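A small illustrative sketch of a few of these rules (not the project's actual preprocessing code) might look like:
```python
# Illustrative sketch of a few of the cleaning rules above; the real pipeline also
# applies newmm word tokenization and repetitive-word removal between these steps.
import re

def clean_text(text: str) -> str:
    text = text.replace("\u00a0", " ").replace("<br />", "\n")  # HTML character forms -> actual characters
    text = re.sub(r"\(\)|\{\}|\[\]", "", text)                  # drop empty brackets
    text = text.replace("\n", " ")                              # line breaks -> spaces
    text = re.sub(r" {2,}", " ", text)                          # collapse multiple spaces
    text = re.sub(r"(.)\1{2,}", r"\1", text)                    # reduce repeated characters (e.g. ดีมากกก -> ดีมาก)
    return text.replace(" ", "<_>")                             # mark spaces explicitly for SentencePiece

print(clean_text("ดีมากกก ()  มาก"))
```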
<br>
Regarding the vocabulary, we use SentencePiece [[Kudo, 2018]](https://arxiv.org/abs/1808.06226) to train a SentencePiece unigram model.
The tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set.
The length of each sequence is limited up to 416 subword tokens.
Regarding the masking procedure, for each sequence, we sample 15% of the tokens and replace them with a <mask> token. Out of that 15%, 80% is replaced with a <mask> token, 10% is left unchanged and 10% is replaced with a random token.
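The 15% / 80-10-10 scheme above matches standard BERT-style dynamic masking, so it can be reproduced, for example, with the standard `transformers` collator (a sketch under that assumption; the actual pretraining scripts are in the thai2transformers repository):
```python
# Sketch of the masking procedure (15% of tokens; 80% <mask>, 10% unchanged, 10% random)
# using the standard transformers collator; illustrative only.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("airesearch/wangchanberta-base-att-spm-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("ผู้ใช้งานท่าอากาศยานนานาชาติมีกว่าสามล้านคน")])
print(batch["input_ids"])
print(batch["labels"])
```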
<br>
**Train/Val/Test splits**
After preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence lengths of 5 to 300 words (78.5GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU), 8,680,485,067 subwords as tokenized by the SentencePiece tokenizer, and 53,035,823,287 characters.
<br>
**Pretraining**
The model was trained on 8 V100 GPUs for 500,000 steps with a batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with a learning rate of $3e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with the minimum validation loss is selected as the best checkpoint.
As of Sun 24 Jan 2021, we release the model from the checkpoint at 360,000 steps, as the pretraining has not yet been completed.
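For reference, the optimizer and learning-rate schedule described above roughly correspond to the following setup (a hedged sketch, not the actual training script):
```python
# Sketch of the optimizer / LR schedule described above (Adam, lr 3e-4, 24k warmup steps,
# linear decay over 500k steps); illustrative only -- see thai2transformers for the real scripts.
import torch
from transformers import AutoModelForMaskedLM, get_linear_schedule_with_warmup

model = AutoModelForMaskedLM.from_pretrained("airesearch/wangchanberta-base-att-spm-uncased")
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-6)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=24_000, num_training_steps=500_000
)
```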
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
airesearch/wangchanberta-base-wiki-newmm | 2021-05-20T12:51:04.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"arxiv:1907.11692",
"arxiv:2101.09635",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"newmm.json",
"pytorch_model.bin"
]
| airesearch | 511 | transformers | # WangchanBERTa base model: `wangchanberta-base-wiki-newmm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting-started notebook for the WangchanBERTa models can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
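A minimal loading sketch is shown below; note that this model ships a custom word-level vocabulary (`newmm.json`), so the dedicated tokenizer class from the thai2transformers repository may be required instead of `AutoTokenizer` (an assumption worth checking against the notebook):
```python
# Minimal sketch: load the pretrained masked-LM weights; the word-level tokenizer may need
# the custom class from thai2transformers rather than AutoTokenizer.
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("airesearch/wangchanberta-base-wiki-newmm")
print(model.config.vocab_size)
```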
<br>
## Training data
The `wangchanberta-base-wiki-newmm` model was pretrained on Thai Wikipedia. Specifically, we use articles from the Wikipedia dump of 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We exclude lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove empty parentheses that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use word-level tokens from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer named `newmm`. The total number of word-level tokens in the vocabulary is 97,982.
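As an illustration of the underlying word segmentation (assuming the standard PyThaiNLP API):
```python
# Dictionary-based maximal-matching word tokenization with PyThaiNLP's `newmm` engine
from pythainlp.tokenize import word_tokenize

print(word_tokenize("ผู้ใช้งานท่าอากาศยานนานาชาติมีกว่าสามล้านคน", engine="newmm"))
```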
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sample 15% of the tokens and replace them with a <mask> token. Out of that 15%, 80% is replaced with a <mask> token, 10% is left unchanged and 10% is replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with a batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with a learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1,250 steps and linearly decayed to zero. The model checkpoint with the minimum validation loss is selected as the best checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|