modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
mitra-mir/ALBERT-Persian-Poetry | 2021-04-27T06:55:48.000Z | [
"pytorch",
"tf",
"albert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| mitra-mir | 11 | transformers | A Transformer-based Persian Language Model Further Pretrained on Persian Poetry
ALBERT was first introduced by [Hooshvare](https://huggingface.co/HooshvareLab/albert-fa-zwnj-base-v2?text=%D8%B2+%D8%A2%D9%86+%D8%AF%D8%B1%D8%AF%D8%B4+%5BMASK%5D+%D9%85%DB%8C+%D8%B3%D9%88%D8%AE%D8%AA+%D8%AF%D8%B1+%D8%A8%D8%B1) with a 30,000-token vocabulary as a lite BERT for self-supervised learning of language representations for the Persian language. Here we wanted to utilize its capabilities by pretraining it on a large corpus of Persian poetry. This model has been post-trained on 80 percent of the poetry verses of the Persian poetry dataset (Ganjoor) and evaluated on the remaining 20 percent.
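A minimal fill-mask sketch for this checkpoint (assuming the standard transformers fill-mask pipeline; the Persian verse mirrors the widget example linked above and is purely illustrative):
```python
from transformers import pipeline

# Fill the masked token in a Persian poetry verse (illustrative input).
unmasker = pipeline("fill-mask", model="mitra-mir/ALBERT-Persian-Poetry")
print(unmasker(f"ز آن دردش {unmasker.tokenizer.mask_token} می سوخت در بر"))
```
|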
mitra-mir/BERT-Persian-Poetry | 2021-05-19T23:34:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| mitra-mir | 27 | transformers | BERT Language Model Further Pre-trained on Persian Poetry |
mkhalifa/gpt2-biographies | 2021-05-23T09:37:00.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| mkhalifa | 136 | transformers | |
mkrigba/FreeTextSIG | 2021-04-02T21:32:16.000Z | []
| [
".gitattributes",
"README.md"
]
| mkrigba | 0 | Frequency Distribution of Free Text SIGs from medication orders in Allscripts |
||
ml6team/gpt-2-medium-conditional-quote-generator | 2021-05-23T09:38:59.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ml6team | 168 | transformers | This model has been finetuned on the [`Quotes-500K`](https://github.com/ShivaliGoel/Quotes-500K) dataset to generate quotes based on given topics. To generate a quote, use the following input prompt:
`Given Topics: topic 1 | topic 2 | ... | topic n. Related Quote: `
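A minimal generation sketch using that prompt format (assuming the standard transformers text-generation pipeline; the topics below are illustrative):
```python
from transformers import pipeline

# Condition the generator on a couple of topics using the documented prompt format.
generator = pipeline("text-generation", model="ml6team/gpt-2-medium-conditional-quote-generator")
prompt = "Given Topics: life | happiness. Related Quote: "
result = generator(prompt, max_length=60, do_sample=True, num_return_sequences=1)
print(result[0]["generated_text"])
```
|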
ml6team/gpt-2-small-conditional-quote-generator | 2021-05-23T09:40:50.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ml6team | 21 | transformers | |
ml6team/gpt2-medium-dutch-finetune-oscar | 2021-05-23T09:42:53.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"nl",
"transformers",
"adaption",
"recycled",
"gpt2-medium",
"text-generation",
"pipeline_tag:text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ml6team | 1,026 | transformers | ---
language: nl
widget:
- text: "De regering heeft beslist dat"
tags:
- adaption
- recycled
- gpt2-medium
- gpt2
pipeline_tag: text-generation
---
# Dutch finetuned GPT2
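A minimal generation sketch (assuming the standard transformers text-generation pipeline; the Dutch prompt mirrors the widget example above):
```python
from transformers import pipeline

# Generate Dutch text from a short prompt with the adapted GPT-2 medium checkpoint.
generator = pipeline("text-generation", model="ml6team/gpt2-medium-dutch-finetune-oscar")
print(generator("De regering heeft beslist dat", max_length=50, do_sample=True)[0]["generated_text"])
```
|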
ml6team/gpt2-medium-german-finetune-oscar | 2021-05-23T09:45:30.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"de",
"transformers",
"adaption",
"recycled",
"gpt2-medium",
"text-generation",
"pipeline_tag:text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ml6team | 37 | transformers | ---
language: de
widget:
- text: "es wird entschieden, dass es"
tags:
- adaption
- recycled
- gpt2-medium
- gpt2
pipeline_tag: text-generation
---
# German finetuned GPT2 |
ml6team/gpt2-small-dutch-finetune-oscar | 2021-05-23T09:47:18.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"nl",
"transformers",
"adaption",
"recycled",
"gpt2-small",
"text-generation",
"pipeline_tag:text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ml6team | 986 | transformers | ---
language: nl
widget:
- text: "De regering heeft beslist dat"
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# Dutch finetuned GPT2
|
ml6team/gpt2-small-german-finetune-oscar | 2021-05-23T09:48:35.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"de",
"transformers",
"adaption",
"recycled",
"gpt2-small",
"text-generation",
"pipeline_tag:text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| ml6team | 20 | transformers | ---
language: de
widget:
- text: "es wird entschieden, dass es"
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# German finetuned GPT2 |
ml6team/mt5-small-german-finetune-mlsum | 2021-01-28T13:23:32.000Z | [
"pytorch",
"tf",
"t5",
"seq2seq",
"de",
"dataset:mlsum",
"transformers",
"summarization",
"text2text-generation"
]
| summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin"
]
| ml6team | 258 | transformers | ---
language: de
tags:
- summarization
datasets:
- mlsum
---
# mT5-small fine-tuned on German MLSUM
This model was finetuned for 3 epochs with a max_len (input) of 768 tokens and target_max_len of 192 tokens.
It was fine-tuned on all German articles present in the train split of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) having less than 384 "words" after splitting on whitespace, which resulted in 80249 articles.
The exact expression to filter the dataset was the following:
```python
dataset = dataset.filter(lambda e: len(e['text'].split()) < 384)
```
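As a usage illustration, a minimal summarization sketch with this checkpoint (assuming the standard transformers summarization pipeline; the German input text is illustrative):
```python
from transformers import pipeline

# Summarize a short German article with the fine-tuned mT5-small checkpoint.
summarizer = pipeline("summarization", model="ml6team/mt5-small-german-finetune-mlsum")
article = "Die Bundesregierung hat am Montag neue Maßnahmen zur Unterstützung der Wirtschaft vorgestellt."  # illustrative
print(summarizer(article, max_length=192, truncation=True)[0]["summary_text"])
```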
## Evaluation results
The fine-tuned model was evaluated on 2000 random articles from the validation set.
Mean [f1 ROUGE scores](https://github.com/pltrdy/rouge) were calculated for both the fine-tuned model and the lead-3 baseline (which simply produces the leading three sentences of the document) and are presented in the following table.
| Model | Rouge-1 | Rouge-2 | Rouge-L |
| ------------- |:-------:| --------:| -------:|
| mt5-small | 0.399 | 0.318 | 0.392 |
| lead-3 | 0.343 | 0.263 | 0.341 | |
mlcorelib/deberta-base-uncased | 2021-05-01T12:33:45.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"bert",
"masked-lm",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"rust_model.ot",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| mlcorelib | 53 | transformers | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
mlcorelib/debertav2-base-uncased | 2021-05-01T12:53:51.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"bert",
"masked-lm",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"rust_model.ot",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| mlcorelib | 32 | transformers | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
mldmm/GlassBERTa | 2021-06-18T16:24:23.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"license:mit",
"fill-mask",
"alloys",
"metallurgy"
]
| fill-mask | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| mldmm | 103 | transformers | ---
license: mit
tags:
- fill-mask
- alloys
- metallurgy
widget:
- text: "O 6 3 3 ,<mask> 6 7 , Ge 3 0 0 , Na2O 1 0 0 , GeO2 9 0 0"
---
# GlassBERTa
## Language Modelling as Unsupervised Pre-Training for Glass Alloys
### Abstract:
Alloy Property Prediction is a task in the subfield of Alloy Materials Science wherein Machine Learning has been applied rigorously. It is modeled as a Supervised Task in which an Alloy Composition is provided for the Model to predict a desired property. The efficiency of tasks such as *Alloy Property Prediction* and Alloy Synthesis can additionally be improved with an Unsupervised Pre-training task. We describe the idea of Pre-training using a Language Modelling style approach over Alloy Compositions. We specifically observe that the random masking proposed in earlier work is not suitable for modelling Alloys. We further propose two types of masking strategies that are used to train GlassBERTa to encompass the properties of an Alloy Composition. The results suggest that Pre-training is an important direction for further improvement in this field of research.
### Authors:
Reshinth Adithyan, Aditya TS, Roakesh, Jothikrishna, Kalaiselvan Baskaran
### Footnote:
Work done via [MLDMM Lab](https://sites.google.com/view/mldmm-lab/home)

|
mm/roberta-base-mld | 2021-05-20T17:54:53.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| mm | 11 | transformers | # roberta-base-mld
This is a pretrained roberta-base model for machine learning domain documents.
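A minimal loading sketch (assuming the standard transformers Auto classes; the example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModel

# Encode a machine-learning-domain sentence with the pretrained checkpoint.
tokenizer = AutoTokenizer.from_pretrained("mm/roberta-base-mld")
model = AutoModel.from_pretrained("mm/roberta-base-mld")
inputs = tokenizer("Gradient descent minimizes the training loss.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```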
|
|
mm/roberta-large-mld | 2021-05-20T17:56:43.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| mm | 31 | transformers | # roberta-large-mld
This is a pretrained roberta-large model for machine learning domain documents.
|
|
mmm-da/anekdot_funny1_rugpt3Small | 2021-05-23T09:49:50.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| mmm-da | 17 | transformers | |
mmm-da/anekdot_funny2_rugpt3Small | 2021-05-23T09:51:06.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| mmm-da | 17 | transformers | |
mmmarchio/Testmodel | 2021-03-25T15:16:46.000Z | []
| [
".gitattributes",
"adasd.txt"
]
| mmmarchio | 0 | |||
mnaylor/bigbird-base-mimic-mortality | 2021-05-14T15:32:04.000Z | [
"pytorch",
"big_bird",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| mnaylor | 143 | transformers | # BigBird for Mortality Prediction
Starting with Google's base BigBird model, we fine-tuned it on binary mortality prediction in MIMIC admission notes. This
model seeks to predict whether a given patient will expire within a given ICU stay, based on the text available upon
admission. Data was prepared for this task as described in [this project](https://github.com/bvanaken/clinical-outcome-prediction),
using the simulated admission notes (taken from discharge summaries). This model will be used in an upcoming submission for
IMLH at ICML 2021.
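A minimal classification sketch (assuming the standard transformers text-classification pipeline; the admission-note snippet is illustrative, and label names depend on the checkpoint's config):
```python
from transformers import pipeline

# Score a shortened, illustrative admission note for predicted in-ICU mortality.
classifier = pipeline("text-classification", model="mnaylor/bigbird-base-mimic-mortality")
note = "CHIEF COMPLAINT: shortness of breath. HISTORY: 78 year old with CHF and pneumonia admitted to the ICU."
print(classifier(note))
```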
### References
* Van Aken, et al., 2021: [Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75/)
* Zaheer, et al., 2020: [Big Bird: Transformers for Longer Sequences](https://papers.nips.cc/paper/2020/hash/c8512d142a2d849725f31a9a7a361ab9-Abstract.html) |
mofawzy/gpt-2-medium-ar | 2021-05-23T09:53:17.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"eval_results.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| mofawzy | 7 | transformers | ### Generate Arabic review sentences with the GPT-2 Medium model.
#### Load model
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mofawzy/gpt-2-medium-ar")
model = AutoModelWithLMHead.from_pretrained("mofawzy/gpt-2-medium-ar")
```
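A short generation sketch building on the loading code above (the Arabic prompt and sampling settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# Generate an Arabic review continuation from a short illustrative prompt.
tokenizer = AutoTokenizer.from_pretrained("mofawzy/gpt-2-medium-ar")
model = AutoModelWithLMHead.from_pretrained("mofawzy/gpt-2-medium-ar")
input_ids = tokenizer.encode("المنتج", return_tensors="pt")
output = model.generate(input_ids, max_length=50, do_sample=True, top_k=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```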
### Eval:
```
***** eval metrics *****
epoch = 20.0
eval_loss = 1.7798
eval_mem_cpu_alloc_delta = 3MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = 0MB
eval_mem_gpu_peaked_delta = 7044MB
eval_runtime = 0:03:03.37
eval_samples = 527
eval_samples_per_second = 2.874
perplexity = 5.9285
```
#### Notebook:
https://colab.research.google.com/drive/1P0Raqrq0iBLNH87DyN9j0SwWg4C2HubV?usp=sharing
|
mofawzy/gpt-2-negative-reviews | 2021-05-23T09:55:19.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| mofawzy | 32 | transformers | |
mofawzy/gpt2-arabic-sentence-generator | 2021-05-23T09:56:22.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| mofawzy | 46 | transformers | ### GPT-2 Arabic Sentence Generator
Generate review sentences for Arabic.
language: "Arabic"
tags:
- Arabic
- generate text
- generate reviews
datasets:
- Large-scale book reviews Arabic LABR dataset.
#### Load Model
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mofawzy/gpt2-arabic-sentence-generator")
model = AutoModelWithLMHead.from_pretrained("mofawzy/gpt2-arabic-sentence-generator")
```
|
moha/arabert_arabic_covid19 | 2021-04-20T06:15:12.000Z | [
"ar",
"arxiv:2004.04315"
]
| [
".gitattributes",
"README.md"
]
| moha | 1 | ---
language: ar
widget:
- text: "للوقايه من عدم انتشار [MASK]"
---
# arabert_c19: An Arabert model pretrained on 1.5 million COVID-19 multi-dialect Arabic tweets
**ARABERT COVID-19** is a pretrained (fine-tuned) version of the AraBERT v2 model (https://huggingface.co/aubmindlab/bert-base-arabertv02). The pretraining was done using 1.5 million multi-dialect Arabic tweets regarding the COVID-19 pandemic from the “Large Arabic Twitter Dataset on COVID-19” (https://arxiv.org/abs/2004.04315).
The model can achieve better results for the tasks that deal with multi-dialect Arabic tweets in relation to the COVID-19 pandemic.
# Classification results for multiple tasks including fake-news and hate speech detection when using arabert_c19 and mbert_ar_c19:
For more details refer to the paper (link)
| | arabert | mbert | distilbert multi | arabert Covid-19 | mbert Covid-19 |
|------------------------------------|----------|----------|------------------|------------------|----------------|
| Contains hate (Binary) | 0.8346 | 0.6675 | 0.7145 | `0.8649` | 0.8492 |
| Talk about a cure (Binary) | 0.8193 | 0.7406 | 0.7127 | 0.9055 | `0.9176` |
| News or opinion (Binary) | 0.8987 | 0.8332 | 0.8099 | `0.9163` | 0.9116 |
| Contains fake information (Binary) | 0.6415 | 0.5428 | 0.4743 | `0.7739` | 0.7228 |
# Preprocessing
```python
from arabert.preprocess import ArabertPreprocessor
model_name="moha/arabert_c19"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "للوقايه من عدم انتشار كورونا عليك اولا غسل اليدين بالماء والصابون وتكون عملية الغسل دقيقه تشمل راحة اليد الأصابع التركيز على الإبهام"
arabert_prep.preprocess(text)
```
# Contacts
**Hadj Ameur**: [Github](https://github.com/MohamedHadjAmeur) | <[email protected]> | <[email protected]>
|
||
moha/arabert_c19 | 2021-05-19T23:35:40.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2105.03143",
"arxiv:2004.04315",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| moha | 489 | transformers | ---
language: ar
widget:
- text: "لكي نتجنب فيروس [MASK]"
---
# arabert_c19: An Arabert model pretrained on 1.5 million COVID-19 multi-dialect Arabic tweets
**ARABERT COVID-19** [Arxiv URL](https://arxiv.org/pdf/2105.03143.pdf) is a pretrained (fine-tuned) version of the AraBERT v2 model (https://huggingface.co/aubmindlab/bert-base-arabertv02). The pretraining was done using 1.5 million multi-dialect Arabic tweets regarding the COVID-19 pandemic from the “Large Arabic Twitter Dataset on COVID-19” (https://arxiv.org/abs/2004.04315).
The model can achieve better results for the tasks that deal with multi-dialect Arabic tweets in relation to the COVID-19 pandemic.
# Classification results for multiple tasks including fake-news and hate speech detection when using arabert_c19 and mbert_ar_c19:
For more details refer to the paper (link)
| | arabert | mbert | distilbert multi | arabert Covid-19 | mbert Covid-19 |
|------------------------------------|----------|----------|------------------|------------------|----------------|
| Contains hate (Binary) | 0.8346 | 0.6675 | 0.7145 | `0.8649` | 0.8492 |
| Talk about a cure (Binary) | 0.8193 | 0.7406 | 0.7127 | 0.9055 | `0.9176` |
| News or opinion (Binary) | 0.8987 | 0.8332 | 0.8099 | `0.9163` | 0.9116 |
| Contains fake information (Binary) | 0.6415 | 0.5428 | 0.4743 | `0.7739` | 0.7228 |
# Preprocessing
```python
from arabert.preprocess import ArabertPreprocessor
model_name="moha/arabert_c19"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "للوقايه من عدم انتشار كورونا عليك اولا غسل اليدين بالماء والصابون وتكون عملية الغسل دقيقه تشمل راحة اليد الأصابع التركيز على الإبهام"
arabert_prep.preprocess(text)
```
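A minimal fill-mask sketch (assuming the standard transformers fill-mask pipeline; the masked sentence mirrors the widget example above):
```python
from transformers import pipeline

# Predict the masked token with the COVID-19 adapted AraBERT checkpoint.
unmasker = pipeline("fill-mask", model="moha/arabert_c19")
print(unmasker("لكي نتجنب فيروس [MASK]"))
```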
# Citation
Please cite as:
``` bibtex
@misc{ameur2021aracovid19mfh,
title={AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News and Hate Speech Detection Dataset},
author={Mohamed Seghir Hadj Ameur and Hassina Aliane},
year={2021},
eprint={2105.03143},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contacts
**Hadj Ameur**: [Github](https://github.com/MohamedHadjAmeur) | <[email protected]> | <[email protected]>
|
moha/mbert_ar_c19 | 2021-05-19T23:38:34.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2105.03143",
"arxiv:2004.04315",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| moha | 10 | transformers | ---
language: ar
widget:
- text: "للوقايه من انتشار [MASK]"
---
# mbert_c19: An mbert model pretrained on 1.5 million COVID-19 multi-dialect Arabic tweets
**mBERT COVID-19** [Arxiv URL](https://arxiv.org/pdf/2105.03143.pdf) is a pretrained (fine-tuned) version of the mBERT model (https://huggingface.co/bert-base-multilingual-cased). The pretraining was done using 1.5 million multi-dialect Arabic tweets regarding the COVID-19 pandemic from the “Large Arabic Twitter Dataset on COVID-19” (https://arxiv.org/abs/2004.04315).
The model can achieve better results for the tasks that deal with multi-dialect Arabic tweets in relation to the COVID-19 pandemic.
# Classification results for multiple tasks including fake-news and hate speech detection when using arabert_c19 and mbert_ar_c19:
For more details refer to the paper (link)
| | arabert | mbert | distilbert multi | arabert Covid-19 | mbert Covid-19 |
|------------------------------------|----------|----------|------------------|------------------|----------------|
| Contains hate (Binary) | 0.8346 | 0.6675 | 0.7145 | `0.8649` | 0.8492 |
| Talk about a cure (Binary) | 0.8193 | 0.7406 | 0.7127 | 0.9055 | `0.9176` |
| News or opinion (Binary) | 0.8987 | 0.8332 | 0.8099 | `0.9163` | 0.9116 |
| Contains fake information (Binary) | 0.6415 | 0.5428 | 0.4743 | `0.7739` | 0.7228 |
# Preprocessing
```python
from arabert.preprocess import ArabertPreprocessor
model_name="moha/mbert_ar_c19"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "للوقايه من عدم انتشار كورونا عليك اولا غسل اليدين بالماء والصابون وتكون عملية الغسل دقيقه تشمل راحة اليد الأصابع التركيز على الإبهام"
arabert_prep.preprocess(text)
```
# Citation
Please cite as:
``` bibtex
@misc{ameur2021aracovid19mfh,
title={AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News and Hate Speech Detection Dataset},
author={Mohamed Seghir Hadj Ameur and Hassina Aliane},
year={2021},
eprint={2105.03143},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contacts
**Hadj Ameur**: [Github](https://github.com/MohamedHadjAmeur) | <[email protected]> | <[email protected]> |
moha/mbert_arabic_covid19 | 2021-04-19T10:22:33.000Z | []
| [
".gitattributes"
]
| moha | 0 | |||
mohadz/arabert_arabic_covid19 | 2021-05-19T23:38:59.000Z | [
"bert",
"masked-lm",
"ar",
"arxiv:2004.04315",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| mohadz | 14 | transformers | ---
language: ar
widget:
- text: "للوقايه من عدم انتشار [MASK]"
---
# arabert_c19: An Arabert model pretrained on 1.5 million COVID-19 multi-dialect Arabic tweets
**ARABERT COVID-19** is a pretrained (fine-tuned) version of the AraBERT v2 model (https://huggingface.co/aubmindlab/bert-base-arabertv02). The pretraining was done using 1.5 million multi-dialect Arabic tweets regarding the COVID-19 pandemic from the “Large Arabic Twitter Dataset on COVID-19” (https://arxiv.org/abs/2004.04315).
The model can achieve better results for the tasks that deal with multi-dialect Arabic tweets in relation to the COVID-19 pandemic.
# Classification results for multiple tasks including fake-news and hate speech detection when using arabert_c19 and mbert_ar_c19:
For more details refer to the paper (link)
| | arabert | mbert | distilbert multi | arabert Covid-19 | mbert Covid-19 |
|------------------------------------|----------|----------|------------------|------------------|----------------|
| Contains hate (Binary) | 0.8346 | 0.6675 | 0.7145 | `0.8649` | 0.8492 |
| Talk about a cure (Binary) | 0.8193 | 0.7406 | 0.7127 | 0.9055 | `0.9176` |
| News or opinion (Binary) | 0.8987 | 0.8332 | 0.8099 | `0.9163` | 0.9116 |
| Contains fake information (Binary) | 0.6415 | 0.5428 | 0.4743 | `0.7739` | 0.7228 |
# Preprocessing
```python
from arabert.preprocess import ArabertPreprocessor
model_name="moha/arabert_c19"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "للوقايه من عدم انتشار كورونا عليك اولا غسل اليدين بالماء والصابون وتكون عملية الغسل دقيقه تشمل راحة اليد الأصابع التركيز على الإبهام"
arabert_prep.preprocess(text)
```
# Contacts
**Hadj Ameur**: [Github](https://github.com/MohamedHadjAmeur) | <[email protected]> | <[email protected]>
|
mohammed/ar | 2021-04-26T01:50:05.000Z | [
"pytorch",
"wav2vec2",
"ar",
"dataset:common_voice",
"dataset:arabic_speech_corpus",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| mohammed | 127 | transformers | ---
language: ar
datasets:
- common_voice
- arabic_speech_corpus
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Mohammed XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 36.69
- name: Validation WER
type: wer
value: 36.69
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice)
and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
%%capture
!pip install datasets
!pip install transformers==4.4.0
!pip install torchaudio
!pip install jiwer
!pip install tnkeeh
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mohammed/ar")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/ar")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("The predicted sentence is: ", processor.batch_decode(predicted_ids))
print("The original sentence is:", test_dataset["sentence"][:2])
```
The output is:
```
The predicted sentence is : ['ألديك قلم', 'ليست نارك مكسافة على هذه الأرض أبعد من يوم أمس']
The original sentence is: ['ألديك قلم ؟', 'ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.']
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# creating a dictionary with all diacritics
dict = {
'ِ': '',
'ُ': '',
'ٓ': '',
'ٰ': '',
'ْ': '',
'ٌ': '',
'ٍ': '',
'ً': '',
'ّ': '',
'َ': '',
'~': '',
',': '',
'ـ': '',
'—': '',
'.': '',
'!': '',
'-': '',
';': '',
':': '',
'\'': '',
'"': '',
'☭': '',
'«': '',
'»': '',
'؛': '',
'ـ': '',
'_': '',
'،': '',
'“': '',
'%': '',
'‘': '',
'”': '',
'�': '',
'_': '',
',': '',
'?': '',
'#': '',
'‘': '',
'.': '',
'؛': '',
'get': '',
'؟': '',
' ': ' ',
'\'ۖ ': '',
'\'': '',
'\'ۚ' : '',
' \'': '',
'31': '',
'24': '',
'39': ''
}
# replacing multiple diacritics using dictionary (stackoverflow is amazing)
def remove_special_characters(batch):
# Create a regular expression from the dictionary keys
regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys())))
# For each match, look-up corresponding value in dictionary
batch["sentence"] = regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], batch["sentence"])
return batch
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mohammed/ar")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/ar")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
test_dataset = test_dataset.map(remove_special_characters)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.69%
## Future Work
One can use *data augmentation*, *transliteration*, or *attention_mask* to increase the accuracy.
|
mohammed/wav2vec2-large-xlsr-arabic | 2021-04-26T02:18:41.000Z | [
"pytorch",
"wav2vec2",
"ar",
"dataset:common_voice",
"dataset:arabic_speech_corpus",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| mohammed | 24 | transformers | ---
language: ar
datasets:
- common_voice
- arabic_speech_corpus
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Mohammed XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 36.699
- name: Validation WER
type: wer
value: 36.699
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice)
and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
%%capture
!pip install datasets
!pip install transformers==4.4.0
!pip install torchaudio
!pip install jiwer
!pip install tnkeeh
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("The predicted sentence is: ", processor.batch_decode(predicted_ids))
print("The original sentence is:", test_dataset["sentence"][:2])
```
The output is:
```
The predicted sentence is : ['ألديك قلم', 'ليست نارك مكسافة على هذه الأرض أبعد من يوم أمس']
The original sentence is: ['ألديك قلم ؟', 'ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.']
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# creating a dictionary with all diacritics
dict = {
'ِ': '',
'ُ': '',
'ٓ': '',
'ٰ': '',
'ْ': '',
'ٌ': '',
'ٍ': '',
'ً': '',
'ّ': '',
'َ': '',
'~': '',
',': '',
'ـ': '',
'—': '',
'.': '',
'!': '',
'-': '',
';': '',
':': '',
'\'': '',
'"': '',
'☭': '',
'«': '',
'»': '',
'؛': '',
'ـ': '',
'_': '',
'،': '',
'“': '',
'%': '',
'‘': '',
'”': '',
'�': '',
'_': '',
',': '',
'?': '',
'#': '',
'‘': '',
'.': '',
'؛': '',
'get': '',
'؟': '',
' ': ' ',
'\'ۖ ': '',
'\'': '',
'\'ۚ' : '',
' \'': '',
'31': '',
'24': '',
'39': ''
}
# replacing multiple diacritics using dictionary (stackoverflow is amazing)
def remove_special_characters(batch):
# Create a regular expression from the dictionary keys
regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys())))
# For each match, look-up corresponding value in dictionary
batch["sentence"] = regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], batch["sentence"])
return batch
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
test_dataset = test_dataset.map(remove_special_characters)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.699%
## Future Work
One can use *data augmentation*, *transliteration*, or *attention_mask* to increase the accuracy.
|
mohsenfayyaz/BERT_Warmup | 2021-03-15T10:54:28.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 12 | transformers | |
mohsenfayyaz/albert-base-v2-offenseval2019-downsample | 2021-05-03T13:32:38.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| mohsenfayyaz | 51 | transformers | |
mohsenfayyaz/albert-base-v2-toxicity | 2021-04-19T15:03:51.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| mohsenfayyaz | 97 | transformers | |
mohsenfayyaz/albert-large-v2-toxicity | 2021-04-25T18:04:39.000Z | []
| [
".gitattributes"
]
| mohsenfayyaz | 0 | |||
mohsenfayyaz/bert-base-cased-toxicity | 2021-05-19T23:39:41.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 9 | transformers | |
mohsenfayyaz/bert-base-uncased-jigsaw | 2021-05-14T20:08:50.000Z | []
| [
".gitattributes"
]
| mohsenfayyaz | 0 | |||
mohsenfayyaz/bert-base-uncased-offenseval2019-downsample | 2021-05-19T23:40:38.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 87 | transformers | |
mohsenfayyaz/bert-base-uncased-offenseval2019-unbalanced | 2021-05-19T23:41:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 8 | transformers | |
mohsenfayyaz/bert-base-uncased-offenseval2019-upsample | 2021-05-19T23:42:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 7 | transformers | |
mohsenfayyaz/bert-base-uncased-offenseval2019 | 2021-05-19T23:43:36.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 9 | transformers | |
mohsenfayyaz/bert-base-uncased-toxicity-a | 2021-05-19T23:44:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 17 | transformers | |
mohsenfayyaz/bert-base-uncased-toxicity | 2021-05-19T23:45:38.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 108 | transformers | |
mohsenfayyaz/distilbert-fa-description-classifier | 2021-06-11T18:58:50.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 5 | transformers | |
mohsenfayyaz/electra-base-discriminator-offenseval2019-downsample | 2021-05-04T14:15:44.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 34 | transformers | |
mohsenfayyaz/roberta-base-toxicity | 2021-05-20T17:59:08.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| mohsenfayyaz | 15 | transformers | |
mohsenfayyaz/toxicity-classifier | 2021-05-19T23:46:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| mohsenfayyaz | 135 | transformers | [BERT base model (uncased)](https://huggingface.co/bert-base-uncased) fine-tuned on [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).
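A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the label names come from the model's own config):
```python
from transformers import pipeline

# Load the fine-tuned toxicity classifier from the Hub.
classifier = pipeline("text-classification", model="mohsenfayyaz/toxicity-classifier")

# Returns a list with a label/score dict; the label set is defined by the model config.
print(classifier("You are a wonderful person."))
```
|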
mohsenfayyaz/xlnet-base-cased-offenseval2019-downsample | 2021-05-04T13:58:20.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| mohsenfayyaz | 30 | transformers | |
mohsenfayyaz/xlnet-base-cased-toxicity | 2021-04-18T10:22:12.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
]
| mohsenfayyaz | 10 | transformers | |
moja/EN-XLSR-Wav2Vec2 | 2021-03-25T06:30:25.000Z | [
"xlsr-fine-tuning-week"
]
| [
".gitattributes",
"README.md"
]
| moja | 0 | ---
tags:
- xlsr-fine-tuning-week
---
# Wav2Vec2-Large-XLSR-53 |
||
molly-hayward/bioelectra-base-discriminator | 2021-04-17T16:59:46.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| molly-hayward | 14 | transformers | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the discriminator in transformers:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("molly-hayward/bioelectra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-discriminator")
```
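A minimal sketch of running the loaded discriminator for replaced-token detection, continuing from the snippet above (the example sentence is purely illustrative):
```python
# Score each token as original (0) or replaced (1) according to the discriminator.
text = "aspirin reduces the risk of myocardial infarction ."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits
flags = (logits[0] > 0).long().tolist()
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()), flags)))
```
|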
|
molly-hayward/bioelectra-base-generator | 2021-04-17T16:59:28.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| molly-hayward | 10 | transformers | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the generator in transformers:
```python
from transformers import ElectraForMaskedLM, ElectraTokenizerFast
import torch

generator = ElectraForMaskedLM.from_pretrained("molly-hayward/bioelectra-base-generator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-base-generator")
```
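A minimal sketch of querying the generator as a masked language model through the fill-mask pipeline (the example sentence is purely illustrative):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="molly-hayward/bioelectra-base-generator",
    tokenizer="molly-hayward/bioelectra-base-generator",
)

# Predict the masked token in a simple biomedical sentence.
print(fill_mask(f"aspirin is used to treat {fill_mask.tokenizer.mask_token} ."))
```
|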
|
molly-hayward/bioelectra-small-discriminator | 2021-04-17T16:58:44.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| molly-hayward | 14 | transformers | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the discriminator in transformers:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("molly-hayward/bioelectra-small-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-small-discriminator")
```
|
|
molly-hayward/bioelectra-small-generator | 2021-04-17T16:58:15.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| molly-hayward | 7 | transformers | To produce BioELECTRA, we pretrain ELECTRA on a corpus of over 20 million abstracts from PubMed.
How to use the generator in transformers:
```python
from transformers import ElectraForMaskedLM, ElectraTokenizerFast
import torch

generator = ElectraForMaskedLM.from_pretrained("molly-hayward/bioelectra-small-generator")
tokenizer = ElectraTokenizerFast.from_pretrained("molly-hayward/bioelectra-small-generator")
```
|
|
monilouise/ner_pt_br | 2021-05-19T23:47:28.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"pt",
"arxiv:1909.10649",
"transformers",
"ner"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monilouise | 277 | transformers | ---
language:
- pt
tags:
- ner
metrics:
- f1
- accuracy
- precision
- recall
---
# RiskData Brazilian Portuguese NER
## Model description
This is a fine-tuned version of [Neuralmind BERTimbau](https://github.com/neuralmind-ai/portuguese-bert/blob/master/README.md) for the Portuguese language.
For more details, please see (https://github.com/SecexSaudeTCU/noticias_ner).
## Intended uses & limitations
#### How to use
```python
from transformers import BertForTokenClassification, DistilBertTokenizerFast, pipeline
model = BertForTokenClassification.from_pretrained('monilouise/ner_pt_br')
tokenizer = DistilBertTokenizerFast.from_pretrained(
    'neuralmind/bert-base-portuguese-cased',
    model_max_length=512,
    do_lower_case=False,
)
nlp = pipeline('ner', model=model, tokenizer=tokenizer, grouped_entities=True)
result = nlp("O Tribunal de Contas da União é localizado em Brasília e foi fundado por Rui Barbosa.")
```
#### Limitations and bias
- The fine-tuned model was trained on a corpus of around 180 news articles crawled from Google News. The original project's purpose was to recognize named entities in news related to fraud and corruption, classifying these entities into four classes: PERSON, ORGANIZATION, PUBLIC INSTITUTION and LOCATION (PESSOA, ORGANIZAÇÃO, INSTITUIÇÃO PÚBLICA and LOCAL).
## Training data
The training data can be found at (https://github.com/SecexSaudeTCU/noticias_ner/blob/master/dados/labeled_4_labels.jsonl).
## Training procedure
## Eval results
- accuracy: 0.98
- precision: 0.86
- recall: 0.91
- f1: 0.88
The score was calculated using this code:
```python
import numpy as np
from typing import Dict, List, Tuple

from torch import nn
from transformers import EvalPrediction
# Entity-level metrics below follow the seqeval API, as in the standard
# Hugging Face token-classification example (assumed import source).
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score


def align_predictions(predictions: np.ndarray, label_ids: np.ndarray) -> Tuple[List[int], List[int]]:
    preds = np.argmax(predictions, axis=2)
    batch_size, seq_len = preds.shape
    out_label_list = [[] for _ in range(batch_size)]
    preds_list = [[] for _ in range(batch_size)]

    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != nn.CrossEntropyLoss().ignore_index:
                # id2tag maps label ids to tag strings; it is defined elsewhere in the training script.
                out_label_list[i].append(id2tag[label_ids[i][j]])
                preds_list[i].append(id2tag[preds[i][j]])

    return preds_list, out_label_list


def compute_metrics(p: EvalPrediction) -> Dict:
    preds_list, out_label_list = align_predictions(p.predictions, p.label_ids)
    return {
        "accuracy_score": accuracy_score(out_label_list, preds_list),
        "precision": precision_score(out_label_list, preds_list),
        "recall": recall_score(out_label_list, preds_list),
        "f1": f1_score(out_label_list, preds_list),
    }
```
### BibTeX entry and citation info
For further information about the BERTimbau language model:
```bibtex
@inproceedings{souza2020bertimbau,
author = {Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@article{souza2019portuguese,
title={Portuguese Named Entity Recognition using BERT-CRF},
author={Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},
journal={arXiv preprint arXiv:1909.10649},
url={http://arxiv.org/abs/1909.10649},
year={2019}
}
```
|
monologg/bert-base-cased-goemotions-ekman | 2021-05-19T23:47:57.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 267 | transformers | ||
monologg/bert-base-cased-goemotions-group | 2021-05-19T23:48:19.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 393 | transformers | ||
monologg/bert-base-cased-goemotions-original | 2021-05-19T23:48:33.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 13,027 | transformers | ||
monologg/bert-tfv2-test | 2021-05-28T03:52:56.000Z | [
"pytorch",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 25 | transformers | ||
monologg/biobert_v1.0_pubmed_pmc | 2021-05-19T23:49:24.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 307 | transformers | ||
monologg/biobert_v1.1_pubmed | 2021-05-19T23:50:54.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 2,946 | transformers | |
monologg/distilkobert | 2020-05-13T03:37:29.000Z | [
"pytorch",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_78b3253a26.model",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 908 | transformers | |
monologg/electra-small-finetuned-imdb | 2020-05-23T09:20:27.000Z | [
"pytorch",
"tflite",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"imdb_small.pt",
"imdb_small.tflite",
"imdb_small_8bits.tflite",
"imdb_small_fp16.tflite",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 90 | transformers | |
monologg/kb-electra-base-char-v3-ner | 2021-04-27T06:05:28.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 31 | transformers | |
monologg/kobert-lm | 2021-05-19T23:51:48.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_78b3253a26.model",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 7,038 | transformers | |
monologg/kobert | 2021-05-19T23:52:30.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_78b3253a26.model",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 12,529 | transformers | ||
monologg/kocharelectra-base-discriminator | 2020-05-27T17:34:11.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 330 | transformers | ||
monologg/kocharelectra-base-finetuned-goemotions | 2020-05-29T12:52:27.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 38 | transformers | ||
monologg/kocharelectra-base-generator | 2020-05-27T17:35:59.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 18 | transformers | |
monologg/kocharelectra-base-kmounlp-ner | 2020-12-02T15:28:07.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 45 | transformers | |
monologg/kocharelectra-base-modu-ner-all | 2020-12-08T17:56:17.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 93 | transformers | |
monologg/kocharelectra-base-modu-ner-nx | 2020-12-07T07:48:11.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 12 | transformers | |
monologg/kocharelectra-base-modu-ner-sx | 2020-12-02T23:49:27.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 10 | transformers | |
monologg/kocharelectra-small-discriminator | 2020-05-27T17:37:41.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 41 | transformers | ||
monologg/kocharelectra-small-finetuned-goemotions | 2020-05-29T12:56:37.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 16 | transformers | ||
monologg/kocharelectra-small-generator | 2020-05-27T17:38:43.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 26 | transformers | |
monologg/koelectra-base-bias | 2021-01-07T14:13:10.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 8 | transformers | |
monologg/koelectra-base-discriminator | 2021-05-26T01:49:22.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"ko",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 1,000 | transformers | ---
language: ko
---
# KoELECTRA (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions[0].tolist()[1:-1])))
```
|
|
monologg/koelectra-base-finetuned-goemotions | 2020-05-18T20:19:16.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 22 | transformers | ||
monologg/koelectra-base-finetuned-naver-ner | 2020-05-13T03:51:43.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 30 | transformers | |
monologg/koelectra-base-finetuned-nsmc | 2020-08-18T18:41:06.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 193 | transformers | |
monologg/koelectra-base-finetuned-sentiment | 2020-05-14T02:30:04.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 905 | transformers | |
monologg/koelectra-base-gender-bias | 2021-01-07T14:10:56.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 13 | transformers | |
monologg/koelectra-base-generator | 2021-05-26T01:49:49.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"ko",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 371 | transformers | ---
language: ko
---
# KoELECTRA (Base Generator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-generator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForMaskedLM
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="monologg/koelectra-base-generator",
tokenizer="monologg/koelectra-base-generator"
)
print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token)))
```
|
monologg/koelectra-base-v1-goemotions | 2021-02-09T14:37:05.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 8 | transformers | ||
monologg/koelectra-base-v2-discriminator | 2021-05-26T01:51:10.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 439 | transformers | ||
monologg/koelectra-base-v2-finetuned-korquad-384 | 2020-06-03T13:03:25.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| monologg | 17 | transformers | |
monologg/koelectra-base-v2-finetuned-korquad | 2020-06-03T03:32:20.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 35 | transformers | |
monologg/koelectra-base-v2-generator | 2021-05-26T01:51:13.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 76 | transformers | |
monologg/koelectra-base-v3-bias | 2021-01-07T14:18:11.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 8 | transformers | |
monologg/koelectra-base-v3-discriminator | 2021-05-26T01:52:31.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 54,485 | transformers | ||
monologg/koelectra-base-v3-finetuned-korquad | 2020-10-14T01:43:31.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 1,899 | transformers | |
monologg/koelectra-base-v3-gender-bias | 2021-01-07T11:22:16.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 11 | transformers | |
monologg/koelectra-base-v3-generator | 2021-05-26T01:52:33.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 128 | transformers | |
monologg/koelectra-base-v3-goemotions | 2021-02-09T14:40:17.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 282 | transformers | ||
monologg/koelectra-base-v3-hate-speech | 2020-12-31T12:56:18.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 42 | transformers | |
monologg/koelectra-base-v3-naver-ner | 2020-11-30T11:55:35.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 147 | transformers | |
monologg/koelectra-small-discriminator | 2020-12-26T16:23:23.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 171 | transformers | ---
language: ko
---
# KoELECTRA (Small Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-small-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-small-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions[0].tolist()[1:-1])))
```
|
|
monologg/koelectra-small-finetuned-goemotions | 2020-05-18T21:39:13.000Z | [
"pytorch",
"electra",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 18 | transformers | ||
monologg/koelectra-small-finetuned-intent-cls | 2020-05-15T08:20:13.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"idx2label.txt",
"label2idx.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 37 | transformers | |
monologg/koelectra-small-finetuned-naver-ner | 2020-05-13T03:53:39.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| monologg | 71 | transformers |