modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
asahi417/tner-xlm-roberta-base-uncased-fin | 2021-02-12T23:47:27.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 9 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-fin")
``` |
asahi417/tner-xlm-roberta-base-uncased-mit-movie-trivia | 2021-02-13T00:08:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 11 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-movie-trivia")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-movie-trivia")
``` |
asahi417/tner-xlm-roberta-base-uncased-mit-restaurant | 2021-02-12T23:47:38.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 7 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-restaurant")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-restaurant")
``` |
asahi417/tner-xlm-roberta-base-uncased-ontonotes5 | 2021-02-13T00:08:01.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 22 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-ontonotes5")
``` |
asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en | 2021-02-13T00:10:50.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 7 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
``` |
asahi417/tner-xlm-roberta-base-uncased-wnut2017 | 2021-02-12T23:48:34.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 9 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-wnut2017")
``` |
asahi417/tner-xlm-roberta-base-wnut2017 | 2021-02-13T00:10:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span.json",
"test_bionlp2004_span.json",
"test_conll2003_span.json",
"test_fin_span.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en_span.json",
"test_wnut2017.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 12 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-wnut2017")
``` |
asahi417/tner-xlm-roberta-large-all-english | 2021-02-12T23:48:50.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr.json",
"test_bc5cdr_span.json",
"test_bionlp2004.json",
"test_bionlp2004_span.json",
"test_conll2003.json",
"test_conll2003_span.json",
"test_fin.json",
"test_fin_span.json",
"test_ontonotes5.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en.json",
"test_panx_dataset-en_span.json",
"test_wnut2017.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 17 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-all-english")
``` |
asahi417/tner-xlm-roberta-large-bc5cdr | 2021-02-13T00:11:03.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr.json",
"test_bc5cdr_span.json",
"test_bionlp2004_span.json",
"test_conll2003_span.json",
"test_fin_span.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en_span.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 12 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-bc5cdr")
``` |
asahi417/tner-xlm-roberta-large-bionlp2004 | 2021-02-13T00:04:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span.json",
"test_bionlp2004.json",
"test_bionlp2004_span.json",
"test_conll2003_span.json",
"test_fin_span.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en_span.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 15 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-bionlp2004")
``` |
asahi417/tner-xlm-roberta-large-conll2003 | 2021-02-13T00:11:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span.json",
"test_bionlp2004_span.json",
"test_conll2003.json",
"test_conll2003_span.json",
"test_fin_span.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en_span.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 14 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-conll2003")
``` |
asahi417/tner-xlm-roberta-large-fin | 2021-02-13T00:04:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span.json",
"test_bionlp2004_span.json",
"test_conll2003_span.json",
"test_fin.json",
"test_fin_span.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en_span.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 17 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
``` |
asahi417/tner-xlm-roberta-large-ontonotes5 | 2021-02-13T00:10:15.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span.json",
"test_bionlp2004_span.json",
"test_conll2003_span.json",
"test_fin_span.json",
"test_ontonotes5.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en_span.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 353 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-ontonotes5")
``` |
asahi417/tner-xlm-roberta-large-panx-dataset-ar | 2021-02-13T00:04:41.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_panx_dataset-ar.json",
"test_panx_dataset-ar_span.json",
"test_panx_dataset-en.json",
"test_panx_dataset-en_span.json",
"test_panx_dataset-es.json",
"test_panx_dataset-es_span.json",
"test_panx_dataset-ja.json",
"test_panx_dataset-ja_span.json",
"test_panx_dataset-ko.json",
"test_panx_dataset-ko_span.json",
"test_panx_dataset-ru.json",
"test_panx_dataset-ru_span.json",
"tokenizer_config.json"
]
| asahi417 | 6 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ar")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ar")
``` |
asahi417/tner-xlm-roberta-large-panx-dataset-en | 2021-02-13T00:11:22.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span.json",
"test_bionlp2004_span.json",
"test_conll2003_span.json",
"test_fin_span.json",
"test_ontonotes5_span.json",
"test_panx_dataset-ar.json",
"test_panx_dataset-en.json",
"test_panx_dataset-en_span.json",
"test_panx_dataset-es.json",
"test_panx_dataset-ja.json",
"test_panx_dataset-ko.json",
"test_panx_dataset-ru.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 7 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-en")
``` |
asahi417/tner-xlm-roberta-large-panx-dataset-es | 2021-02-13T00:04:53.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_panx_dataset-ar.json",
"test_panx_dataset-en.json",
"test_panx_dataset-es.json",
"test_panx_dataset-ja.json",
"test_panx_dataset-ko.json",
"test_panx_dataset-ru.json",
"tokenizer_config.json"
]
| asahi417 | 7 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-es")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-es")
``` |
asahi417/tner-xlm-roberta-large-panx-dataset-ja | 2021-02-13T00:11:28.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_panx_dataset-ar.json",
"test_panx_dataset-en.json",
"test_panx_dataset-es.json",
"test_panx_dataset-ja.json",
"test_panx_dataset-ko.json",
"test_panx_dataset-ru.json",
"tokenizer_config.json"
]
| asahi417 | 24 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
``` |
asahi417/tner-xlm-roberta-large-panx-dataset-ko | 2021-02-13T00:05:08.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_panx_dataset-ar.json",
"test_panx_dataset-en.json",
"test_panx_dataset-es.json",
"test_panx_dataset-ja.json",
"test_panx_dataset-ko.json",
"test_panx_dataset-ru.json",
"tokenizer_config.json"
]
| asahi417 | 7 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ko")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ko")
``` |
asahi417/tner-xlm-roberta-large-panx-dataset-ru | 2021-02-13T00:11:34.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_panx_dataset-ar.json",
"test_panx_dataset-en.json",
"test_panx_dataset-es.json",
"test_panx_dataset-ja.json",
"test_panx_dataset-ko.json",
"test_panx_dataset-ru.json",
"tokenizer_config.json"
]
| asahi417 | 10 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ru")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ru")
``` |
asahi417/tner-xlm-roberta-large-uncased-all-english | 2021-02-13T00:05:18.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_lower.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_lower.json",
"test_conll2003_span_lower.json",
"test_fin_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 11 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-all-english")
``` |
asahi417/tner-xlm-roberta-large-uncased-bc5cdr | 2021-02-13T00:11:43.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_lower.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 13 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bc5cdr")
``` |
asahi417/tner-xlm-roberta-large-uncased-bionlp2004 | 2021-02-13T00:05:40.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 25 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bionlp2004")
``` |
asahi417/tner-xlm-roberta-large-uncased-conll2003 | 2021-02-13T00:11:51.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 7 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-conll2003")
``` |
asahi417/tner-xlm-roberta-large-uncased-fin | 2021-02-13T00:05:55.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 6 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-fin")
``` |
asahi417/tner-xlm-roberta-large-uncased-mit-movie-trivia | 2021-02-13T00:11:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 7 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-movie-trivia")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-movie-trivia")
``` |
asahi417/tner-xlm-roberta-large-uncased-mit-restaurant | 2021-02-13T00:06:06.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 6 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
``` |
asahi417/tner-xlm-roberta-large-uncased-ontonotes5 | 2021-02-13T00:12:08.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 15 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-ontonotes5")
``` |
asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en | 2021-02-13T00:06:19.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 6 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
``` |
asahi417/tner-xlm-roberta-large-uncased-wnut2017 | 2021-02-13T00:12:33.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span_lower.json",
"test_bionlp2004_span_lower.json",
"test_conll2003_span_lower.json",
"test_fin_span_lower.json",
"test_mit_movie_trivia_span_lower.json",
"test_mit_restaurant_span_lower.json",
"test_ontonotes5_span_lower.json",
"test_panx_dataset-en_span_lower.json",
"test_wnut2017_lower.json",
"test_wnut2017_span_lower.json",
"tokenizer_config.json"
]
| asahi417 | 14 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017")
``` |
asahi417/tner-xlm-roberta-large-wnut2017 | 2021-02-13T00:06:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"parameter.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"test_bc5cdr_span.json",
"test_bionlp2004_span.json",
"test_conll2003_span.json",
"test_fin_span.json",
"test_ontonotes5_span.json",
"test_panx_dataset-en_span.json",
"test_wnut2017.json",
"test_wnut2017_span.json",
"tokenizer_config.json"
]
| asahi417 | 8 | transformers | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-wnut2017")
``` |
asbabazi/new | 2021-06-05T07:16:42.000Z | []
| [
".gitattributes",
"README.md"
]
| asbabazi | 0 | |||
asdfasdfa/ASD | 2021-01-15T07:21:38.000Z | []
| [
".gitattributes"
]
| asdfasdfa | 0 | |||
aseifert/distilbert-base-german-cased-comma-derstandard | 2021-04-07T19:18:06.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"model_args.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| aseifert | 13 | transformers | |
aseifert/distilbert-casing | 2020-10-29T09:44:22.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| aseifert | 27 | transformers | |
aseifert/gelectra-large-comma | 2020-10-29T08:35:48.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| aseifert | 43 | transformers | |
ashish-shrivastava/dont-know-response | 2021-01-07T09:33:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"arxiv:2012.01873",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| ashish-shrivastava | 156 | transformers | ## Natural Don't Know Response Model
Fine-tuned on [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) using a combination of dependency-rule-based data and the [Quora Question Pairs (QQP)](https://huggingface.co/nlp/viewer/?dataset=quora) dataset for the **Don't Know Response Generation** task.
Additional information about this model:
- Paper : [Saying No is An Art: Contextualized Fallback Responses for
Unanswerable Dialogue Queries](https://arxiv.org/pdf/2012.01873.pdf)
- Github Repo: https://github.com/kaustubhdhole/natural-dont-know
#### How to use
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "ashish-shrivastava/dont-know-response"
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
input = "Where can I find good Italian food ?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded_output) # I'm not sure where you can get good quality Italian food.
```
#### Hyperparameters
```
n_epochs = 2
base_LM_model = "T5-base"
max_seq_len = 256
learning_rate = 3e-4
adam_epsilon = 1e-8
train_batch_size = 6
```
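A minimal sketch of how these hyperparameters could be wired into a Hugging Face fine-tuning setup (an illustration, not the authors' actual training script; `output_dir` and the dataset preparation are placeholders):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer, TrainingArguments

model_name = "t5-base"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hyperparameters from the table above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="dont-know-response",
    num_train_epochs=2,
    learning_rate=3e-4,
    adam_epsilon=1e-8,
    per_device_train_batch_size=6,
)

# Inputs and targets would be tokenized with max_length=256 and truncation=True
# before being passed to a Trainer together with training_args.
```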
#### BibTeX entry and citation info
```bibtex
@misc{shrivastava2020saying,
title={Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries},
author={Ashish Shrivastava and Kaustubh Dhole and Abhinav Bhatt and Sharvani Raghunath},
year={2020},
eprint={2012.01873},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ashwani-tanwar/Gujarati-XLM-R-Base | 2020-12-11T21:34:15.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"masked-lm",
"gu",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tf_model.h5",
"tokenizer.json"
]
| ashwani-tanwar | 13 | transformers | ---
language: gu
---
# Gujarati-XLM-R-Base
This model fine-tunes the base variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) on the Gujarati portion of the [OSCAR](https://oscar-corpus.com/) monolingual dataset, using the same masked language modelling (MLM) objective that was used to pretrain XLM-R. Because it builds on the pretrained XLM-R, it leverages *transfer learning* by exploiting the knowledge of its parent model.
## Dataset
The OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse dataset compared to other large homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
## Usage
- This model can be used for further finetuning for different NLP tasks using the Gujarati language.
- It can be used to generate contextualised word representations for the Gujarati words.
- It can be used for domain adaptation.
- It can be used to predict the missing words from the Gujarati sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-XLM-R-Base')
pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.")
print(pred_word)
```
```
[{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.9463568329811096, 'token': 85227, 'token_str': '▁શહેર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.013311690650880337, 'token': 66346, 'token_str': '▁ગામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એકનગર છે.</s>', 'score': 0.012945962138473988, 'token': 69702, 'token_str': 'નગર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક સ્થળ છે.</s>', 'score': 0.0045941537246108055, 'token': 135436, 'token_str': '▁સ્થળ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક મહત્વ છે.</s>', 'score': 0.00402021361514926, 'token': 126763, 'token_str': '▁મહત્વ'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Base")
sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
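Continuing the snippet above, a brief follow-up sketch (not part of the original card): the forward pass returns a standard model output object, and the per-token vectors can be read from `last_hidden_state`:
```
token_vectors = context_word_rep.last_hidden_state  # shape: [1, num_tokens, hidden_size]
sentence_vector = token_vectors.mean(dim=1)          # simple mean pooling over tokens
```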
|
ashwani-tanwar/Gujarati-XLM-R-Large | 2020-12-12T01:39:10.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"masked-lm",
"gu",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tf_model.h5",
"tokenizer.json"
]
| ashwani-tanwar | 10 | transformers | ---
language: gu
---
# Gujarati-XLM-R-Large
This model fine-tunes the large variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large) (XLM-R) on the Gujarati portion of the [OSCAR](https://oscar-corpus.com/) monolingual dataset, using the same masked language modelling (MLM) objective that was used to pretrain XLM-R. Because it builds on the pretrained XLM-R, it leverages *transfer learning* by exploiting the knowledge of its parent model.
## Dataset
The OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse dataset compared to other large homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
## Usage
- This model can be used for further finetuning for different NLP tasks using the Gujarati language.
- It can be used to generate contextualised word representations for the Gujarati words.
- It can be used for domain adaptation.
- It can be used to predict the missing words from the Gujarati sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-XLM-R-Large')
pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.")
print(pred_word)
```
```
[{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.9790881276130676, 'token': 85227, 'token_str': '▁શહેર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.004246668424457312, 'token': 63678, 'token_str': '▁રાજ્ય'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.0038021174259483814, 'token': 66346, 'token_str': '▁ગામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક મહત્વ છે.</s>', 'score': 0.002798238070681691, 'token': 126763, 'token_str': '▁મહત્વ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક અમદાવાદ છે.</s>', 'score': 0.0021192911081016064, 'token': 69499, 'token_str': '▁અમદાવાદ'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large")
model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large")
sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
|
ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base | 2020-12-12T02:22:48.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"masked-lm",
"gu",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tf_model.h5",
"tokenizer.json"
]
| ashwani-tanwar | 17 | transformers | ---
language: gu
---
# Gujarati-in-Devanagari-XLM-R-Base
This model fine-tunes the base variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) on the Gujarati portion of the [OSCAR](https://oscar-corpus.com/) monolingual dataset. We converted the Gujarati script to Devanagari using the [Indic-NLP](https://github.com/anoopkunchukuttan/indic_nlp_library) library. For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This helped to obtain better contextualised representations for some words, as XLM-R was pre-trained on several languages written in the Devanagari script, such as Hindi, Marathi, and Sanskrit.
We used the same masked language modelling (MLM) objective that was used to pretrain XLM-R. Because it builds on the pretrained XLM-R, it leverages *transfer learning* by exploiting the knowledge of its parent model.
## Dataset
The OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse dataset compared to other large homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
## Usage
- This model can be used for further finetuning for different NLP tasks using the Gujarati language.
- It can be used to generate contextualised word representations for the Gujarati words.
- It can be used for domain adaptation.
- It can be used to predict the missing words from the Gujarati sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base')
pred_word = unmasker("अमदावाद ए गुजरातनुं एक <mask> छे.")
print(pred_word)
```
```
[{'sequence': '<s> अमदावाद ए गुजरातनुं एक नगर छे.</s>', 'score': 0.24843722581863403, 'token': 18576, 'token_str': '▁नगर'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक महानगर छे.</s>', 'score': 0.21455222368240356, 'token': 122519, 'token_str': '▁महानगर'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक राज्य छे.</s>', 'score': 0.16832049190998077, 'token': 10665, 'token_str': '▁राज्य'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक जिल्ला छे.</s>', 'score': 0.06764694303274155, 'token': 20396, 'token_str': '▁जिल्ला'},
{'sequence': '<s> अमदावाद ए गुजरातनुं एक शहर छे.</s>', 'score': 0.05364946648478508, 'token': 22770, 'token_str': '▁शहर'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base")
sentence = "अमदावाद ए गुजरातनुं एक शहेर छे."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
|
ashwani-tanwar/Indo-Aryan-XLM-R-Base | 2020-12-12T02:52:59.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"masked-lm",
"gu",
"hi",
"mr",
"bn",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"tf_model.h5",
"tokenizer.json"
]
| ashwani-tanwar | 13 | transformers | ---
language:
- gu
- hi
- mr
- bn
---
# Indo-Aryan-XLM-R-Base
This model fine-tunes the base variant of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) on the Hindi, Gujarati, Marathi, and Bengali languages from the Indo-Aryan family, using the [OSCAR](https://oscar-corpus.com/) monolingual datasets. As these languages have datasets of very different sizes, we applied the same resampling strategies used in pretraining XLM-R to balance the combined dataset. We used the same masked language modelling (MLM) objective that was used to pretrain XLM-R. Because it builds on the pretrained XLM-R, it leverages *transfer learning* by exploiting the knowledge of its parent model.
## Dataset
The OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/), which reported better performance with this diverse dataset compared to other large homogeneous datasets.
## Preprocessing and Training Procedure
Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure.
## Usage
- This model can be used for further finetuning for different NLP tasks using the Hindi, Gujarati, Marathi, and Bengali languages.
- It can be used to generate contextualised word representations for the words from the above languages.
- It can be used for domain adaptation.
- It can be used to predict the missing words from their sentences.
## Demo
### Using the model to predict missing words
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ashwani-tanwar/Indo-Aryan-XLM-R-Base')
pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.")
print(pred_word)
```
```
[{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.7811868786811829, 'token': 85227, 'token_str': '▁શહેર'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.055032357573509216, 'token': 66346, 'token_str': '▁ગામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક નામ છે.</s>', 'score': 0.0287721399217844, 'token': 29565, 'token_str': '▁નામ'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.02565067447721958, 'token': 63678, 'token_str': '▁રાજ્ય'},
{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એકનગર છે.</s>', 'score': 0.022877279669046402, 'token': 69702, 'token_str': 'નગર'}]
```
### Using the model to generate contextualised word representations
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base")
model = AutoModel.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base")
sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે."
encoded_sentence = tokenizer(sentence, return_tensors='pt')
context_word_rep = model(**encoded_sentence)
```
|
asi/gpt-fr-cased-base | 2021-05-21T13:28:43.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"fr",
"transformers",
"text-generation",
"license:apache-2.0"
]
| text-generation | [
".gitattributes",
"LICENCE",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"vocab.json"
]
| asi | 368 | transformers | ---
language:
- fr
thumbnail: https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png
tags:
- tf
- pytorch
- gpt2
- text-generation
license: apache-2.0
---
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png" width="200">
## Model description
**GPT-fr** 🇫🇷 is a GPT model for French developed by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We train the model on a very large and heterogeneous French corpus. We release the weights for the following configurations:
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `gpt-fr-cased-small` | 12 | 12 | 768 | 124 M |
| `gpt-fr-cased-base` | 24 | 14 | 1,792 | 1,017 B |
## Intended uses & limitations
The model can be leveraged for language generation tasks. Besides, many tasks can be formatted so that the output is generated directly in natural language. Such a configuration may be used for tasks such as automatic summarization or question answering. We hope our model will be used for both academic and industrial applications.
#### How to use
The model can be used through the astonishing 🤗 `Transformers` library. We follow the work of [Shoeybi et al., (2019)](#shoeybi-2019) and calibrate our model such that, during pre-training or fine-tuning, it can fit on a single NVIDIA V100 32GB GPU.
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-base")
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-base")
# Generate a sample of text
model.eval()
input_sentence = "Longtemps je me suis couché de bonne heure."
input_ids = tokenizer.encode(input_sentence, return_tensors='pt')
beam_outputs = model.generate(
input_ids,
max_length=100,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True))
```
#### Limitations and bias
Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.
To limit exposure to too much explicit material, we carefully chose the sources beforehand. This process, detailed in our paper, aims to limit offensive content generation from the model without performing manual and arbitrary filtering.
However, some societal biases contained in the data might be reflected by the model. For example, on gender equality, we generated the following sentence sequence "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant \_\_\_\_\_\_\_". We used a top-k random sampling strategy with k=50 and stopped at the first punctuation element.
The position generated for the wife is '_que professeur de français._' while the position for the husband is '_que chef de projet._'. We would appreciate your feedback to help us better assess such effects, both qualitatively and quantitatively.
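As an illustration, a hedged sketch of that probe (the prompt and decoding settings follow the paragraph above; truncating at the first punctuation mark is implemented here with a simple regular expression, which is an assumption about the exact procedure used):
```python
import re
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-base")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-base")
model.eval()

prompt = "Ma femme vient d'obtenir un nouveau poste en tant"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=40, do_sample=True, top_k=50)
completion = tokenizer.decode(output[0], skip_special_tokens=True)[len(prompt):]
# Keep only the text up to the first punctuation mark, as described above.
print(re.split(r"[.!?;:]", completion)[0])
```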
## Training data
We created a dedicated corpus to train our generative model. Indeed, the model uses a fixed-length context size of 1,024 tokens and requires long documents for training. We aggregated existing corpora: [Wikipedia](https://dumps.wikimedia.org/frwiki/), [OpenSubtitles](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2016/mono/) ([Tiedemann, 2012](#tiedemann-2012)), [Gutenberg](http://www.gutenberg.org) and [Common Crawl](http://data.statmt.org/ngrams/deduped2017/) ([Li et al., 2019](#li-2019)). Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document.
## Training procedure
We pre-trained the model on the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. We performed the training within a total of 140 hours of computation on Tesla V100 hardware (TDP of 300W). The training was distributed over 4 compute nodes of 8 GPUs. We used data parallelism in order to divide each micro-batch across the computing units. We estimated the total emissions at 580.61 kgCO2eq, using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al., (2019)](#lacoste-2019).
## Eval results
We packaged **GPT-fr** with a dedicated language model evaluation benchmark for French.
In line with the [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark in English, we collected over 70 million tokens from the set of verified [good](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Articles_de_qualit%C3%A9) and [featured](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Bons_articles) articles on Wikipedia. The model reaches a zero-shot perplexity of **12.9** on the test set.
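For reference, a minimal sketch of how a zero-shot perplexity of this kind could be computed with the checkpoint (a single forward pass over a short placeholder text; the actual benchmark corpus and any sliding-window handling of long documents are not shown):
```python
import math
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-base")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-base")
model.eval()

text = "Longtemps je me suis couché de bonne heure."  # placeholder evaluation text
input_ids = tokenizer.encode(text, return_tensors="pt")
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss  # mean cross-entropy per predicted token
print(math.exp(loss.item()))
```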
### BibTeX entry and citation info
Along with the model hosted by HuggingFace transformers library, we maintain a [git repository](https://github.com/AntoineSimoulin/gpt-fr).
If you use **GPT-fr** for your scientific publications or your industrial applications, please cite the following paper:
```bibtex
@inproceedings{simoulin_2020_gptfr,
title = {Un modèle Transformer Génératif Pré-entraîné pour le ______ français},
author = {Simoulin, Antoine and Crabbé, Benoit},
year = {2021},
pubstate = {forthcoming},
}
```
### References
><div name="tiedemann-2012">Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218</div>
><div name="li-2019">Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad: Findings of the First Shared Task on Machine Translation Robustness. WMT (2) 2019: 91-102</div>
><div name="shoeybi-2019">Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019)</div>
><div name="lacoste-2019">Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, Thomas Dandres: Quantifying the Carbon Emissions of Machine Learning. CoRR abs/1910.09700 (2019)</div>
|
asi/gpt-fr-cased-small | 2021-05-21T13:32:16.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"fr",
"transformers",
"text-generation",
"license:apache-2.0"
]
| text-generation | [
".gitattributes",
"LICENCE",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"vocab.json"
]
| asi | 164 | transformers | ---
language:
- fr
tags:
- tf
- pytorch
- gpt2
- text-generation
license: apache-2.0
thumbnail: https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png
---
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png" width="200">
## Model description
**GPT-fr** 🇫🇷 is a GPT model for French developed by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We train the model on a very large and heterogeneous French corpus. We release the weights for the following configurations:
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `gpt-fr-cased-small` | 12 | 12 | 768 | 124 M |
| `gpt-fr-cased-base` | 24 | 14 | 1,792 | 1,017 B |
## Intended uses & limitations
The model can be leveraged for language generation tasks. Besides, many tasks can be formatted so that the output is generated directly in natural language. Such a configuration may be used for tasks such as automatic summarization or question answering. We hope our model will be used for both academic and industrial applications.
#### How to use
The model can be used through the astonishing 🤗 `Transformers` library:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
# Generate a sample of text
model.eval()
input_sentence = "Longtemps je me suis couché de bonne heure."
input_ids = tokenizer.encode(input_sentence, return_tensors='pt')
beam_outputs = model.generate(
input_ids,
max_length=100,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True))
```
#### Limitations and bias
Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.
To limit exposure to explicit material, we carefully chose the sources beforehand. This process, detailed in our paper, aims to limit offensive content generation from the model without performing manual and arbitrary filtering.
However, some societal biases contained in the data might be reflected by the model. For example, regarding gender equality, we generated the sentence pair "Ma femme/Mon mari vient d'obtenir un nouveau poste. A partir de demain elle/il sera \_\_\_\_\_\_\_" ("My wife/husband just got a new position. Starting tomorrow she/he will be \_\_\_\_\_\_\_") and observed that the model generated distinct positions depending on the subject's gender. We used a top-k random sampling strategy with k=50 and stopped at the first punctuation element.
The position generated for the wife is '_femme de ménage de la maison_' (housemaid), while the position generated for the husband is '_à la tête de la police_' (head of the police). We welcome your feedback to help us better assess such effects qualitatively and quantitatively.
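For reference, a minimal sketch of how such a probe can be run with this model is shown below. The prompts and the top-k value mirror the description above; everything else (generation length, stopping characters) is an illustrative assumption rather than the authors' exact protocol.
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Illustrative sketch of the gender probe described above (top-k sampling, k=50).
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
model.eval()
prompts = [
    "Ma femme vient d'obtenir un nouveau poste. A partir de demain elle sera",
    "Mon mari vient d'obtenir un nouveau poste. A partir de demain il sera",
]
for prompt in prompts:
    input_ids = tokenizer.encode(prompt, return_tensors='pt')
    output = model.generate(
        input_ids,
        do_sample=True,
        top_k=50,
        max_length=input_ids.shape[1] + 20,  # generation length is an arbitrary choice here
        pad_token_id=tokenizer.eos_token_id,
    )
    completion = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
    # Keep only the text up to the first punctuation mark, as in the probe above.
    for stop_char in ".,;!?":
        completion = completion.split(stop_char)[0]
    print(prompt, "->", completion.strip())
```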
## Training data
We created a dedicated corpus to train our generative model. Indeed, the model uses a fixed-length context of 1,024 tokens and requires long documents for training. We aggregated existing corpora: [Wikipedia](https://dumps.wikimedia.org/frwiki/), [OpenSubtitle](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2016/mono/) ([Tiedemann, 2012](#tiedemann-2012)), and [Gutenberg](http://www.gutenberg.org). Corpora are filtered and split into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document, as sketched below.
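As an illustration of this packing step (a sketch under the assumptions above, not the authors' actual preprocessing script), successive sentences can be grouped as follows; the `sentences` list passed at the end is a placeholder.
```python
from transformers import GPT2Tokenizer
# Sketch of the packing step: concatenate successive sentences into documents
# of at most 1,024 tokens. The sentence list below is a placeholder.
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
def pack_sentences(sentences, tokenizer, max_tokens=1024):
    documents, current, current_len = [], [], 0
    for sentence in sentences:
        n_tokens = len(tokenizer.encode(sentence))
        if current and current_len + n_tokens > max_tokens:
            documents.append(" ".join(current))
            current, current_len = [], 0
        current.append(sentence)
        current_len += n_tokens
    if current:
        documents.append(" ".join(current))
    return documents
documents = pack_sentences(["Première phrase.", "Deuxième phrase."], tokenizer)
print(len(documents), "document(s)")
```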
## Training procedure
We pre-trained the model on a TPU v2-8 using the [Google Colab](https://colab.research.google.com) environment.
## Eval results
We packaged **GPT-fr** with a dedicated language model evaluation benchmark.
In line with the [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark in English, we collected over 70 million tokens from the set of verified [good](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Bons_articles) and [featured](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Articles_de_qualit%C3%A9) articles on French Wikipedia. The model reaches a zero-shot perplexity of **109.2** on the test set.
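A minimal sketch of how such a zero-shot perplexity can be computed with this model is shown below; `eval_text` is a placeholder for the benchmark test set, and the authors' exact evaluation script may differ (for example, by using a sliding window over long documents).
```python
import math
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Minimal zero-shot perplexity sketch; `eval_text` stands in for the benchmark test set.
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
model.eval()
eval_text = "Longtemps je me suis couché de bonne heure."
input_ids = tokenizer.encode(eval_text, return_tensors='pt')
with torch.no_grad():
    # With labels provided, the model returns the mean token-level cross-entropy loss.
    loss = model(input_ids, labels=input_ids).loss
print("Perplexity:", math.exp(loss.item()))
```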
### BibTeX entry and citation info
Along with the model hosted by HuggingFace transformers library, we maintain a [git repository](https://github.com/AntoineSimoulin/gpt-fr).
If you use **GPT-fr** for your scientific publications or your industrial applications, please cite the following paper:
```bibtex
@inproceedings{simoulin_2020_gptfr,
title = {Un modèle Transformer Génératif Pré-entraîné pour le ______ français},
author = {Simoulin, Antoine and Crabbé, Benoit},
year = {2021},
pubstate = {forthcoming},
}
```
### References
><div name="tiedemann-2012">Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218</div> |
assansanogo/AS | 2021-03-22T18:46:46.000Z | []
| [
".gitattributes"
]
| assansanogo | 0 | |||
assemblyai/bert-large-uncased-sst2 | 2021-06-14T22:04:39.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:1810.04805",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| assemblyai | 16 | transformers | # BERT-Large-Uncased for Sentiment Analysis
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) originally released in ["BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post]().
## Usage
To download and utilize this model for sentiment analysis please execute the following:
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("assemblyai/bert-large-uncased-sst2")
model = AutoModelForSequenceClassification.from_pretrained("assemblyai/bert-large-uncased-sst2")
tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True)
tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask
model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1)
print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%")
print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%")
```
For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)! |
assemblyai/distilbert-base-uncased-qqp | 2021-06-14T22:13:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"arxiv:1910.01108",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| assemblyai | 16 | transformers | # DistilBERT-Base-Uncased for Duplicate Question Detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) dataset; part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post]().
## Usage
To download and utilize this model for duplicate question detection please execute the following:
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-qqp")
model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-qqp")
tokenized_segments = tokenizer(["How many hours does it take to fly from California to New York?"], ["What is the flight time from New York to Seattle?"], return_tensors="pt", padding=True, truncation=True)
tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask
model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1)
print("Duplicate probability: "+str(model_predictions[0][1].item()*100)+"%")
print("Non-duplicate probability: "+str(model_predictions[0][0].item()*100)+"%")
```
For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)! |
assemblyai/distilbert-base-uncased-sst2 | 2021-06-14T22:04:03.000Z | [
"pytorch",
"distilbert",
"text-classification",
"arxiv:1910.01108",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| assemblyai | 18 | transformers | # DistilBERT-Base-Uncased for Sentiment Analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post]().
## Usage
To download and utilize this model for sentiment analysis please execute the following:
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-sst2")
model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-sst2")
tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True)
tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask
model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1)
print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%")
print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%")
```
For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)! |
astarostap/distilbert-cased-antisemitic-tweets | 2021-02-08T15:03:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | [
".gitattributes",
"README.md",
"README.md'",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| astarostap | 8 | transformers | This model takes a tweet with the word "jew" in it, and determines if it's antisemitic.
Labels:
- 1: antisemitic
- 0: not antisemitic
NOTE: this model was trained on data that I labeled personally, with 4k tweets (~50% antisemitic and ~50% not antisemitic).
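A minimal usage sketch, following the same pattern as the other text-classification cards in this collection; the example tweet is illustrative, and the index-to-label mapping follows the description above.
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Illustrative usage sketch; the input tweet is a placeholder.
tokenizer = AutoTokenizer.from_pretrained("astarostap/distilbert-cased-antisemitic-tweets")
model = AutoModelForSequenceClassification.from_pretrained("astarostap/distilbert-cased-antisemitic-tweets")
inputs = tokenizer(["Example tweet text mentioning the word jew."], return_tensors="pt", padding=True, truncation=True)
probs = F.softmax(model(**inputs).logits, dim=1)
# Index 1 = antisemitic, index 0 = not antisemitic (see the labels above).
print("Antisemitic probability:", round(probs[0][1].item() * 100, 2), "%")
```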
|
atahmasb/tf-layoutlm-base-uncased | 2021-02-26T20:27:54.000Z | [
"tf",
"layoutlm",
"arxiv:1912.13318",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"tf_model.h5"
]
| atahmasb | 10 | transformers | # LayoutLM
## Model description
LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318)
Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers)
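Since this repository hosts only the TF2 weights and config, a usage sketch would pair them with the `microsoft/layoutlm-base-uncased` tokenizer; the zero bounding boxes below are purely illustrative, and `TFLayoutLMModel` is assumed to be available in your `transformers` version.
```python
import tensorflow as tf
from transformers import LayoutLMTokenizer, TFLayoutLMModel
# Assumes a transformers version that includes TFLayoutLMModel; the tokenizer is
# borrowed from microsoft/layoutlm-base-uncased since this repo hosts only the weights.
tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("atahmasb/tf-layoutlm-base-uncased")
encoding = tokenizer("Hello world", return_tensors="tf")
seq_len = encoding["input_ids"].shape[1]
# LayoutLM expects one normalized (0-1000) bounding box per token, usually produced
# by an OCR engine; zero boxes are used here purely for illustration.
bbox = tf.zeros((1, seq_len, 4), dtype=tf.int32)
outputs = model(
    input_ids=encoding["input_ids"],
    bbox=bbox,
    attention_mask=encoding["attention_mask"],
)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```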
## Training data
We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0 dataset with two settings.
* LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters **(This Model)**
* LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters
## Citation
If you find LayoutLM useful in your research, please cite the following paper:
``` latex
@misc{xu2019layoutlm,
title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
year={2019},
eprint={1912.13318},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
|
atahmasb/tf-layoutlm-large-uncased | 2021-02-26T20:33:39.000Z | [
"tf",
"layoutlm",
"arxiv:1912.13318",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"tf_model.h5"
]
| atahmasb | 16 | transformers | # LayoutLM
## Model description
LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318)
Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers)
## Training data
We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0 dataset with two settings.
* LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters
* LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters **(This Model)**
## Citation
If you find LayoutLM useful in your research, please cite the following paper:
``` latex
@misc{xu2019layoutlm,
title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
year={2019},
eprint={1912.13318},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
|
atharvamundada99/bert-large-question-answering-finetuned-legal | 2021-05-24T15:10:08.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| atharvamundada99 | 842 | transformers | |
athipudr/huggy | 2021-01-13T11:34:43.000Z | []
| [
".gitattributes"
]
| athipudr | 0 | |||
athivvat/hello-hugging-face | 2021-04-19T20:42:58.000Z | []
| [
".gitattributes"
]
| athivvat | 0 | |||
atob/bert-base-uncased-squad2 | 2021-04-19T19:58:01.000Z | []
| [
".gitattributes"
]
| atob | 0 | |||
atob/unqover-distilbert-base-uncased-newsqa | 2021-04-19T19:40:52.000Z | []
| [
".gitattributes"
]
| atob | 0 | |||
atom/Ev3 | 2021-05-17T10:25:15.000Z | []
| [
".gitattributes"
]
| atom | 0 | |||
aubmindlab/araelectra-base-discriminator | 2021-04-18T21:14:23.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2012.15516",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 1,008 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
---
# AraELECTRA
**ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA datasets.
For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516).
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("aubmindlab/araelectra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("aubmindlab/araelectra-base-discriminator")
sentence = ""
fake_sentence = ""
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()]
```
# Model
Model | HuggingFace Model Name | Size (MB/Params)|
---|:---:|:---:
AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M |
AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M |
# Compute
Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24
# Dataset
The pretraining data used for the new **AraELECTRA** model is also used for **AraGPT2 and AraBERTv2**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="araelectra-base"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```
# TensorFlow 1.x models
**You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username**
- `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-araelectra,
title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.20",
pages = "191--195",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks as well to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]> |
|
aubmindlab/araelectra-base-generator | 2021-04-18T21:12:59.000Z | [
"pytorch",
"tf",
"electra",
"masked-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2012.15516",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 353 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
# AraELECTRA
**ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA datasets.
For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516).
## How to use the generator in `transformers`
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="aubmindlab/araelectra-base-generator",
tokenizer="aubmindlab/araelectra-base-generator"
)
print(
fill_mask(" عاصمة لبنان هي [MASK] .")
)
```
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="araelectra-base"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```
# Model
Model | HuggingFace Model Name | Size (MB/Params)|
---|:---:|:---:
AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M |
AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M |
# Compute
Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24
# Dataset
The pretraining data used for the new AraELECTRA model is also used for **AraGPT2 and AraBERTv2**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# TensorFlow 1.x models
**You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username**
- `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-araelectra,
title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.20",
pages = "191--195",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks as well to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/aragpt2-base | 2021-05-21T13:33:27.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2012.15520",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"vocab.json"
]
| aubmindlab | 824 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---
# Arabic GPT2
You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)
The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.
GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models using the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository.
These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.
GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`).
Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core.
AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.
# Usage
## Testing the model using `transformers`:
```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor
MODEL_NAME='aragpt2-base'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)
text=""
text_clean = arabert_prep.preprocess(text)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer)
# feel free to try different decoding settings
generation_pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=10,
max_length=200,
top_p=0.9,
repetition_penalty = 3.0,
no_repeat_ngram_size = 3)[0]['generated_text']
```
## Fine-tuning using `transformers`:
Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)
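In addition to the guide above, for the base and medium variants (which are standard `GPT2LMHeadModel`s) a minimal sketch with the `Trainer` API might look as follows; `train.txt` is a placeholder for your own plain-text corpus (ideally preprocessed with `ArabertPreprocessor` as in the usage example above), and the hyper-parameters are purely illustrative.
```python
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    Trainer,
    TrainingArguments,
)
# Minimal fine-tuning sketch for aragpt2-base; "train.txt" is a placeholder corpus.
model = GPT2LMHeadModel.from_pretrained("aubmindlab/aragpt2-base")
tokenizer = GPT2TokenizerFast.from_pretrained("aubmindlab/aragpt2-base")
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
training_args = TrainingArguments(
    output_dir="./aragpt2-base-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    save_steps=500,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()
```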
## Finetuning using our code with TF 1.15.4:
Create the Training TFRecords:
```bash
python create_pretraining_data.py
--input_file=<RAW TEXT FILE with documents/articles separated by an empty line>
--output_file=<OUTPUT TFRecord>
--tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```
Finetuning:
```bash
python3 run_pretraining.py \
--input_file="gs://<GS_BUCKET>/pretraining_data/*" \
--output_dir="gs://<GS_BUCKET>/pretraining_model/" \
--config_file="config/small_hparams.json" \
--batch_size=128 \
--eval_batch_size=8 \
--num_train_steps= \
--num_warmup_steps= \
--learning_rate= \
--save_checkpoints_steps= \
--max_seq_length=1024 \
--max_eval_steps= \
--optimizer="lamb" \
--iterations_per_loop=5000 \
--keep_checkpoint_max=10 \
--use_tpu=True \
--tpu_name=<TPU NAME> \
--do_train=True \
--do_eval=False
```
# Model Sizes
Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Compute
Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9
# Dataset
The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# Disclaimer
The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-aragpt2,
title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
pages = "196--207",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks as well to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/aragpt2-large | 2021-05-21T13:39:17.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2012.15520",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf1_model.tar.gz",
"tokenizer.json",
"vocab.json"
]
| aubmindlab | 230 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
inference: false
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---
# Arabic GPT2
You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)
The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.
GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models using the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository.
These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.
GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`).
Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core.
AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.
# Usage
## Testing the model using `transformers`:
```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor
MODEL_NAME='aragpt2-large'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)
text=""
text_clean = arabert_prep.preprocess(text)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer)
# feel free to try different decoding settings
generation_pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=10,
max_length=200,
top_p=0.9,
repetition_penalty = 3.0,
no_repeat_ngram_size = 3)[0]['generated_text']
>>>
```
## Fine-tuning using `transformers`:
Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)
## Finetuning using our code with TF 1.15.4:
Create the Training TFRecords:
```bash
python create_pretraining_data.py
--input_file=<RAW TEXT FILE with documents/articles separated by an empty line>
--output_file=<OUTPUT TFRecord>
--tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```
Finetuning:
```bash
python3 run_pretraining.py \
--input_file="gs://<GS_BUCKET>/pretraining_data/*" \
--output_dir="gs://<GS_BUCKET>/pretraining_model/" \
--config_file="config/small_hparams.json" \
--batch_size=128 \
--eval_batch_size=8 \
--num_train_steps= \
--num_warmup_steps= \
--learning_rate= \
--save_checkpoints_steps= \
--max_seq_length=1024 \
--max_eval_steps= \
--optimizer="lamb" \
--iterations_per_loop=5000 \
--keep_checkpoint_max=10 \
--use_tpu=True \
--tpu_name=<TPU NAME> \
--do_train=True \
--do_eval=False
```
# Model Sizes
Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 |1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Compute
For Dataset Source see the [Dataset Section](#Dataset)
Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9
# Dataset
The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# Disclaimer
The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-aragpt2,
title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
pages = "196--207",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks as well to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/aragpt2-medium | 2021-05-21T13:44:22.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2012.15520",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"vocab.json"
]
| aubmindlab | 343 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---
# Arabic GPT2
You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)
The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.
GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models using the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository.
These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.
GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`).
Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core.
AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.
# Usage
## Testing the model using `transformers`:
```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor
MODEL_NAME='aragpt2-medium'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)
text=""
text_clean = arabert_prep.preprocess(text)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer)
# feel free to try different decoding settings
generation_pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=10,
max_length=200,
top_p=0.9,
repetition_penalty = 3.0,
no_repeat_ngram_size = 3)[0]['generated_text']
```
## Fine-tuning using `transformers`:
Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)
## Finetuning using our code with TF 1.15.4:
Create the Training TFRecords:
```bash
python create_pretraining_data.py
--input_file=<RAW TEXT FILE with documents/articles separated by an empty line>
--output_file=<OUTPUT TFRecord>
--tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```
Finetuning:
```bash
python3 run_pretraining.py \
--input_file="gs://<GS_BUCKET>/pretraining_data/*" \
--output_dir="gs://<GS_BUCKET>/pretraining_model/" \
--config_file="config/small_hparams.json" \
--batch_size=128 \
--eval_batch_size=8 \
--num_train_steps= \
--num_warmup_steps= \
--learning_rate= \
--save_checkpoints_steps= \
--max_seq_length=1024 \
--max_eval_steps= \
--optimizer="lamb" \
--iterations_per_loop=5000 \
--keep_checkpoint_max=10 \
--use_tpu=True \
--tpu_name=<TPU NAME> \
--do_train=True \
--do_eval=False
```
# Model Sizes
Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Compute
Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 80 | 1M | 15
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9
# Dataset
The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# Disclaimer
The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-aragpt2,
title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
pages = "196--207",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks as well to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/aragpt2-mega-detector-long | 2021-03-11T21:46:39.000Z | [
"pytorch",
"electra",
"text-classification",
"ar",
"arxiv:2012.15520",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 56 | transformers | ---
language: ar
widget:
- text: "وإذا كان هناك من لا يزال يعتقد أن لبنان هو سويسرا الشرق ، فهو مخطئ إلى حد بعيد . فلبنان ليس سويسرا ، ولا يمكن أن يكون كذلك . لقد عاش اللبنانيون في هذا البلد منذ ما يزيد عن ألف وخمسمئة عام ، أي منذ تأسيس الإمارة الشهابية التي أسسها الأمير فخر الدين المعني الثاني ( 1697 - 1742 )"
---
# AraGPT2 Detector
Machine generated detector model from the [AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper](https://arxiv.org/abs/2012.15520)
This model is trained on long text passages and achieves a 99.4% F1-score.
# How to use it:
```python
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor
processor = ArabertPreprocessor(model_name="aubmindlab/araelectra-base-discriminator")
pipe = pipeline("sentiment-analysis", model = "aubmindlab/aragpt2-mega-detector-long")
text = " "
text_prep = processor.preprocess(text)
result = pipe(text_prep)
# [{'label': 'machine-generated', 'score': 0.9977743625640869}]
```
# If you used this model please cite us as :
```
@misc{antoun2020aragpt2,
title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
author={Wissam Antoun and Fady Baly and Hazem Hajj},
year={2020},
eprint={2012.15520},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]> |
aubmindlab/aragpt2-mega | 2021-05-21T13:49:51.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2012.15520",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tf1_model.tar.gz",
"tokenizer.json",
"vocab.json"
]
| aubmindlab | 79 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
inference: false
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---
# Arabic GPT2
You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)
The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.
GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models using the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository.
These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.
GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`).
Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core.
AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.
# Usage
## Testing the model using `transformers`:
```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor
MODEL_NAME='aragpt2-mega'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)
text=""
text_clean = arabert_prep.preprocess(text)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer)
# feel free to try different decoding settings
generation_pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=10,
max_length=200,
top_p=0.9,
repetition_penalty = 3.0,
no_repeat_ngram_size = 3)[0]['generated_text']
>>>
```
## Fine-tuning using `transformers`:
Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)
## Finetuning using our code with TF 1.15.4:
Create the Training TFRecords:
```bash
python create_pretraining_data.py
--input_file=<RAW TEXT FILE with documents/articles separated by an empty line>
--output_file=<OUTPUT TFRecord>
--tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```
Finetuning:
```bash
python3 run_pretraining.py \
--input_file="gs://<GS_BUCKET>/pretraining_data/*" \
--output_dir="gs://<GS_BUCKET>/pretraining_model/" \
--config_file="config/small_hparams.json" \
--batch_size=128 \
--eval_batch_size=8 \
--num_train_steps= \
--num_warmup_steps= \
--learning_rate= \
--save_checkpoints_steps= \
--max_seq_length=1024 \
--max_eval_steps= \
--optimizer="lamb" \
--iterations_per_loop=5000 \
--keep_checkpoint_max=10 \
--use_tpu=True \
--tpu_name=<TPU NAME> \
--do_train=True \
--do_eval=False
```
# Model Sizes
Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Compute
For Dataset Source see the [Dataset Section](#Dataset)
Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9
# Dataset
The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data
# Disclaimer
The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.
# If you used this model please cite us as :
```
@inproceedings{antoun-etal-2021-aragpt2,
title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
pages = "196--207",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs; we couldn't have done it without this program. Thanks as well to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/bert-base-arabert | 2021-05-19T11:49:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"arxiv:2003.00104",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 2,158 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
widget:
- text: " عاصم +ة لبنان هي [MASK] ."
---
# !!! A newer version of this model is available !!! [AraBERTv2](https://huggingface.co/aubmindlab/bert-base-arabertv2)
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup).
There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).
We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL).
# AraBERTv2
## What's New!
AraBERT now comes in 4 new variants to replace the old v1 versions:
More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38GB / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38GB / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Better Pre-Processing and New Vocab
We identified an issue with AraBERTv1's wordpiece vocabulary: punctuation and numbers were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function.
**Please read the section on how to use the [preprocessing function](#Preprocessing)**
## Bigger Dataset and More Compute
We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)
Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M |2560 / 1M | 384/ 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-base | TPUv3-8 | 520M / 245M |13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 days
# Dataset
The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing the data.
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="bert-base-arabert"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```
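For a quick sanity check, a minimal fill-mask sketch with the `transformers` pipeline should work, using the pre-segmented widget example from this card:
```python
from transformers import pipeline

# Minimal sketch: AraBERTv1 expects Farasa pre-segmented input,
# i.e. the output of ArabertPreprocessor above ("عاصم +ة" rather than "عاصمة").
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabert")
print(fill_mask("عاصم +ة لبنان هي [MASK] ."))
```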
## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```
# TensorFlow 1.x models
The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name
- via `wget`:
- Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
- copy the `oid sha256`
- then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
## Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/bert-base-arabertv01 | 2021-05-19T11:50:51.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"arxiv:2003.00104",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 751 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
# !!! A newer version of this model is available !!! [AraBERTv02](https://huggingface.co/aubmindlab/bert-base-arabertv02)
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup).
There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).
We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL).
# AraBERTv2
## What's New!
AraBERT now comes in 4 new variants to replace the old v1 versions:
More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38GB / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38GB / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Better Pre-Processing and New Vocab
We identified an issue with AraBERTv1's wordpiece vocabulary: punctuation and numbers were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function.
**Please read the section on how to use the [preprocessing function](#Preprocessing)**
## Bigger Dataset and More Compute
We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)
Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M |2560 / 1M | 384/ 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-base | TPUv3-8 | 520M / 245M |13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 days
# Dataset
The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing the data.
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="bert-base-arabertv01"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```
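For a quick sanity check, a minimal fill-mask sketch with the `transformers` pipeline should work, using the widget example from this card:
```python
from transformers import pipeline

# Minimal sketch: AraBERTv0.1 works on raw (unsegmented) text,
# so no Farasa segmentation is needed here.
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabertv01")
print(fill_mask("عاصمة لبنان هي [MASK] ."))
```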
## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```
# TensorFlow 1.x models
The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name
- via `wget`:
- Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
- copy the `oid sha256`
- then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/bert-base-arabertv02 | 2021-05-19T11:52:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2003.00104",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 7,057 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup).
There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).
We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL).
# AraBERTv2
## What's New!
AraBERT now comes in 4 new variants to replace the old v1 versions:
More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38GB / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38GB / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Better Pre-Processing and New Vocab
We identified an issue with AraBERTv1's wordpiece vocabulary: punctuation and numbers were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function.
**Please read the section on how to use the [preprocessing function](#Preprocessing)**
## Bigger Dataset and More Compute
We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)
Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4
# Dataset
The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing the data.
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="bert-base-arabertv02"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```
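For a quick sanity check, a minimal fill-mask sketch with the `transformers` pipeline should work, using the widget example from this card:
```python
from transformers import pipeline

# Minimal sketch: AraBERTv0.2 works on raw (unsegmented) text,
# so no Farasa segmentation is needed here.
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabertv02")
print(fill_mask("عاصمة لبنان هي [MASK] ."))
```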
## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```
# TensorFlow 1.x models
The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name
- via `wget`:
- Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
- copy the `oid sha256`
- then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/bert-base-arabertv2 | 2021-05-19T11:53:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2003.00104",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 6,152 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: " عاصم +ة لبنان هي [MASK] ."
---
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup).
There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).
We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL).
# AraBERTv2
## What's New!
AraBERT now comes in 4 new variants to replace the old v1 versions:
More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38GB / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38GB / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Better Pre-Processing and New Vocab
We identified an issue with AraBERTv1's wordpiece vocabulary: punctuation and numbers were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function.
**Please read the section on how to use the [preprocessing function](#Preprocessing)**
## Bigger Dataset and More Compute
We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)
Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4
# Dataset
The pretraining data used for the new AraBERT model is also used for Arabic **AraGPT2 and AraELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing the data.
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="bert-base-arabertv2"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```
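For a quick sanity check, a minimal fill-mask sketch with the `transformers` pipeline should work, using the pre-segmented widget example from this card:
```python
from transformers import pipeline

# Minimal sketch: AraBERTv2 expects Farasa pre-segmented input,
# i.e. the output of ArabertPreprocessor above ("عاصم +ة" rather than "عاصمة").
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabertv2")
print(fill_mask("عاصم +ة لبنان هي [MASK] ."))
```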
## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```
# TensorFlow 1.x models
The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name
- via `wget`:
- Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
- copy the `oid sha256`
- then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/bert-large-arabertv02 | 2021-05-19T11:56:11.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2003.00104",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 1,022 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup).
There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).
We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL).
# AraBERTv2
## What's New!
AraBERT now comes in 4 new variants to replace the old v1 versions:
More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38GB / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38GB / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Better Pre-Processing and New Vocab
We identified an issue with AraBERTv1's wordpiece vocabulary: punctuation and numbers were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function.
**Please read the section on how to use the [preprocessing function](#Preprocessing)**
## Bigger Dataset and More Compute
We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)
Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4
# Dataset
The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing the data.
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="bert-large-arabertv02"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```
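For a quick sanity check, a minimal fill-mask sketch with the `transformers` pipeline should work, using the widget example from this card:
```python
from transformers import pipeline

# Minimal sketch: AraBERTv0.2 works on raw (unsegmented) text,
# so no Farasa segmentation is needed here.
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-large-arabertv02")
print(fill_mask("عاصمة لبنان هي [MASK] ."))
```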
## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```
# TensorFlow 1.x models
The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name
- via `wget`:
- Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
- copy the `oid sha256`
- then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
aubmindlab/bert-large-arabertv2 | 2021-05-19T11:59:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:wikipedia",
"dataset:OSIAN",
"dataset:1.5B Arabic Corpus",
"dataset:OSCAR Arabic Unshuffled",
"arxiv:2003.00104",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf1_model.tar.gz",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| aubmindlab | 863 | transformers | ---
language: ar
datasets:
- wikipedia
- OSIAN
- 1.5B Arabic Corpus
- OSCAR Arabic Unshuffled
widget:
- text: " عاصم +ة لبنان هي [MASK] ."
---
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>
**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup).
There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).
We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL).
# AraBERTv2
## What's New!
AraBERT now comes in 4 new variants to replace the old v1 versions:
More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)
Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38GB / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38GB / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |
All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.
## Better Pre-Processing and New Vocab
We identified an issue with AraBERTv1's wordpiece vocabulary: punctuation and numbers were still attached to words when the wordpiece vocabulary was learned. We now insert a space between numbers and characters and around punctuation characters.
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function.
**Please read the section on how to use the [preprocessing function](#Preprocessing)**
## Bigger Dataset and More Compute
We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)
Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4
# Dataset
The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset previously used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for providing the data.
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="bert-large-arabertv2"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```
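For a quick sanity check, a minimal fill-mask sketch with the `transformers` pipeline should work, using the pre-segmented widget example from this card:
```python
from transformers import pipeline

# Minimal sketch: AraBERTv2 expects Farasa pre-segmented input,
# i.e. the output of ArabertPreprocessor above ("عاصم +ة" rather than "عاصمة").
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-large-arabertv2")
print(fill_mask("عاصم +ة لبنان هي [MASK] ."))
```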
## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```
# TensorFlow 1.x models
The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name
- via `wget`:
- Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
- copy the `oid sha256`
- then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)
# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
title={AraBERT: Transformer-based Model for Arabic Language Understanding},
author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
pages={9}
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
auday/paraphraser_model1 | 2021-03-22T14:41:08.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
]
| auday | 26 | transformers | This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using:
- Para_NMT_50M_Paraphrasing_train_small.csv: 134,337 sentence pairs (19 MB)
- Para_NMT_50M_Paraphrasing_val_small.csv: 14,928 sentence pairs (2.0 MB)
Training Start Time: Sun Mar 14 18:27:15 2021
Training End Time: Sun Mar 14 22:19:00 2021
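A minimal usage sketch: the repository lists only `config.json` and `pytorch_model.bin`, so the generic `t5-base` tokenizer and the `paraphrase:` input prefix below are assumptions, not documented settings:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumption: no tokenizer files are listed in this repo, so a generic T5 tokenizer is used.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("auday/paraphraser_model1")

# Assumption: the "paraphrase:" prefix mirrors common T5 fine-tuning practice;
# the actual prefix used during fine-tuning is not documented here.
inputs = tokenizer("paraphrase: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```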
|
auday/paraphraser_model2 | 2021-05-04T04:56:34.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
]
| auday | 95 | transformers | This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using:
- Quora_pair_train: 134,337 sentence pairs (14 MB)
- Quora_pair_val: 14,928 sentence pairs (1.6 MB)
Training epochs: 6
Start Time: Sun Mar 14 18:27:15 2021
End Time: Sun Mar 14 22:19:00 2021
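A minimal usage sketch: as with the other paraphraser model, no tokenizer files are listed, so the generic `t5-base` tokenizer and the `paraphrase:` prefix below are assumptions:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumption: generic T5 tokenizer, since this repo ships no tokenizer files.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("auday/paraphraser_model2")

# Assumption: "paraphrase:" prefix; the actual fine-tuning input format is not documented.
inputs = tokenizer("paraphrase: How can I learn to play the guitar?", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```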
|
audreyakwenye/activity_matching | 2021-05-28T13:16:42.000Z | []
| [
".gitattributes"
]
| audreyakwenye | 0 | |||
ausgequetschtem/dfgsdgwewrfwer | 2021-04-03T15:03:31.000Z | []
| [
".gitattributes",
"README.md"
]
| ausgequetschtem | 0 |
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Raya-and-the-Last-Dragon-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Demon-Slayer-Mugen-Train-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Girl-in-the-Basement-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Godzilla-vs-Kong-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Mortal-Kombat-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Nobody-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-The-Unholy-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Tom-Jerry-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Voyagers-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Zack-Snyders-Justice-League-Online-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Raya-and-the-Last-Dragon-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Demon-Slayer-Kimetsu-no-Yaiba-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Girl-in-the-Basement-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Godzilla-vs-Kong-Dragon-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Mortal-Kombat-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Nobody-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-The-Unholy-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Tom-Jerry-Dragon-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Voyagers-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.esa.org/enjustice/wp-content/uploads/sites/28/formidable/4/Watch-Zack-Snyders-Justice-League-Full-03-April-2021.pdf">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16827">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16831">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16840">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16845">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16853">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16860">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16866">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16874">free robux generator</a>
<a href="https://www.open.edu/openlearn/profiles/zu16878">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994711/-watch-the-unholy-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994710/123movies-watch-the-unholy-2021-full-movie-online-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994712/123movies-watch-the-unholy-online-2021-full-free/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994713/full-watch-the-unholy-2021-hd-online-full-free-download/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994714/-watch-the-unholy-2021-online-full-version-123movies/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994715/-watch-the-unholy-2021-o-n-l-i-n-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994716/free-watch-full-the-unholy-2021-123movies/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994717/full-watch-the-unholy-2021-hd-full-movie-online-free/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994718/123movies-watch-the-unholy-2021-hd-online-full-free-download/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994720/123movies-watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994721/123movies-watch-girl-in-the-basement-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994722/123movies-watch-godzilla-vs-kong-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994723/123movies-watch-nobody-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994724/123movies-watch-tom-and-jerry-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994725/123movies-watch-zack-snyder-s-justice-league-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994726/123movies-watch-chaos-walking-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994727/123movies-watch-coming-2-america-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994728/123movies-watch-judas-and-the-black-messiah-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994729/123movies-watch-monster-hunter-2020-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994730/123movies-watch-mortal-kombat-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994731/123movies-watch-raya-and-the-last-dragon-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994732/123movies-watch-soul-2020-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994733/123movies-watch-voyagers-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994734/123movies-watch-the-unholy-2021-hd-online-full-free-stream/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994736/-watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994737/-watch-girl-in-the-basement-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994738/-watch-godzilla-vs-kong-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994739/-watch-nobody-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994740/-watch-tom-and-jerry-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994741/-watch-zack-snyder-s-justice-league-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994742/-watch-chaos-walking-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994743/-watch-coming-2-america-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994744/-watch-judas-and-the-black-messiah-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994745/-watch-monster-hunter-2020-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994746/-watch-mortal-kombat-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994747/-watch-raya-and-the-last-dragon-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994748/-watch-cherry-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994749/-watch-voyagers-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://careers.cfainstitute.org/job/8994750/-watch-the-unholy-2021-f-u-l-l-f-r-e-e/">free robux generator</a>
<a href="https://www.reddit.com/user/precisees/comments/mja0mh/windows_95_how_does_it_look_today/">thanks</a>
<a href="https://m.mydigoo.com/forums-topicdetail-253669.html">thanks</a>
<a href="https://www.posts123.com/post/1483570/pasdweiouoisdhfksdfh">thanks</a>
<a href="https://dcm.shivtr.com/forum_threads/3360395">thanks</a>
<a href="https://www.onfeetnation.com/profiles/blogs/skdjkdfhsdkjfhsdkjf">thanks</a> |
||
avadhesh06/as | 2021-05-14T02:21:26.000Z | []
| [
".gitattributes"
]
| avadhesh06 | 0 | |||
avichr/ar_hd | 2021-05-19T12:01:47.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"log_history.json",
"pytorch_model.bin",
"training_args.bin",
"vocab.txt"
]
| avichr | 21 | transformers | |
avichr/heBERT | 2021-05-19T12:02:40.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"arxiv:1810.04805",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin",
"vocab.txt"
]
| avichr | 567 | transformers | ## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition
HeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture with the BERT-Base configuration [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br>
### HeBERT was trained on three datasets:
1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/): ~650 MB of data, including over 63 million words and 3.8 million sentences.
3. Emotion UGC data collected for the purpose of this study (described below).
We evaluated the model on emotion recognition and sentiment analysis as downstream tasks.
### Emotion UGC Data Description
Our User Generated Content (UGC) consists of comments written on articles collected from 3 major news sites between January 2020 and August 2020. The total data size is ~150 MB, including over 7 million words and 350K sentences.
4,000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise and trust) and for overall sentiment/polarity.<br>
In order to validate the annotation, we measured inter-rater agreement on the emotion in each sentence using Krippendorff's alpha [(Krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We kept sentences with alpha > 0.7. Note that while we found general agreement between raters for emotions such as happy, trust and disgust, there are a few emotions with general disagreement, apparently due to the difficulty of identifying them in the text (e.g. expectation and surprise).
## Stay tuned!
We are still working on our model and will edit this page as we progress.<br>
Note that we have released only sentiment analysis (polarity) at this point; emotion detection will be released later on.<br>
our git: https://github.com/avichaychriqui/HeBERT
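For reference, a minimal masked-token prediction sketch using the `transformers` pipeline (the example input is only a placeholder; replace it with a Hebrew sentence containing the `[MASK]` token):
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT")
model = AutoModelForMaskedLM.from_pretrained("avichr/heBERT")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# Placeholder input: substitute a real Hebrew sentence with a [MASK] token.
print(fill_mask("Your Hebrew sentence with a [MASK] token."))
```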
## If you use this model, please cite us as:
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={arXiv preprint arXiv:2102.01909},
year={2021}
}
```
|
avichr/heBERT_sentiment_analysis | 2021-05-19T12:03:43.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1810.04805",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin",
"vocab.txt"
]
| avichr | 1,583 | transformers | ## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition
HeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture with the BERT-Base configuration [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br>
HeBERT was trained on three datasets:
1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences.
3. Emotion UGC data collected for the purpose of this study (described below).
We evaluated the model on emotion recognition and sentiment analysis as downstream tasks.
### Emotion UGC Data Description
Our User Generated Content (UGC) consists of comments written on articles collected from 3 major news sites between January 2020 and August 2020. The total data size is ~150 MB, including over 7 million words and 350K sentences.
4,000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise and trust) and for overall sentiment/polarity.<br>
In order to validate the annotation, we measured inter-rater agreement on the emotion in each sentence using Krippendorff's alpha [(Krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We kept sentences with alpha > 0.7. Note that while we found general agreement between raters for emotions such as happy, trust and disgust, there are a few emotions with general disagreement, apparently due to the difficulty of identifying them in the text (e.g. expectation and surprise).
### Performance
#### Sentiment analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| natural | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
## Stay tuned!
We are still working on our model and will edit this page as we progress.<br>
Note that we have released only sentiment analysis (polarity) at this point; emotion detection will be released later on.<br>
our git: https://github.com/avichaychriqui/HeBERT
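For reference, a minimal polarity-prediction sketch using the `transformers` pipeline (the example input is a placeholder, and the label names returned come from the model's config, so they may differ from the table above):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")
model = AutoModelForSequenceClassification.from_pretrained("avichr/heBERT_sentiment_analysis")

sentiment = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
# Placeholder input: substitute a real Hebrew sentence.
print(sentiment("Your Hebrew sentence here."))
```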
## If you use this model, please cite us as:
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={arXiv preprint arXiv:2102.01909},
year={2021}
}
```
|
awl/pegasus_cnn | 2021-06-07T21:07:29.000Z | []
| [
".gitattributes"
]
| awl | 0 | |||
ayameRushia/wav2vec2-large-xlsr-indo | 2021-03-30T09:58:44.000Z | [
"pytorch",
"wav2vec2",
"id",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| ayameRushia | 9 | transformers | ---
language: id
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesia by Ayame Rushia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: ???
---
# Wav2Vec2-Large-XLSR-53-Indonesia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Indonesian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER = 20.072720 %
## Training
The model was trained on the Common Voice dataset. |
ayameRushia/wav2vec2-large-xlsr-indonesia | 2021-03-30T10:08:04.000Z | [
"pytorch",
"wav2vec2",
"id",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| ayameRushia | 7 | transformers | ---
language: id
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesia by Ayame Rushia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: ???
---
# Wav2Vec2-Large-XLSR-53-Indonesia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Indonesian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER = 19.830319 %
## Training
The model was trained on the Common Voice dataset. |
ayansinha/false-positives-scancode-bert-base-uncased-L8-1 | 2021-05-19T12:04:24.000Z | [
"tf",
"bert",
"masked-lm",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:scancode-rules",
"transformers",
"license",
"sentence-classification",
"scancode",
"license-compliance",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| ayansinha | 28 | transformers | ---
language: en
tags:
- license
- sentence-classification
- scancode
- license-compliance
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- scancode-rules
version: 1.0
---
# `false-positives-scancode-bert-base-uncased-L8-1`
## Intended Use
This model is intended to be used for Sentence Classification which is used for results
analysis in [`scancode-results-analyzer`](https://github.com/nexB/scancode-results-analyzer).
`scancode-results-analyzer` helps detect faulty scans in [`scancode-toolkit`](https://github.com/nexB/scancode-results-analyzer) by using statistics and nlp modeling, among other tools,
to make Scancode better.
#### How to use
Refer [quickstart](https://github.com/nexB/scancode-results-analyzer#quickstart---local-machine) section in `scancode-results-analyzer` documentation, for installing and getting started.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
Then, in the `NLPModelsPredict` class, the function `predict_basic_false_positive` uses this classifier to
predict sentences as either valid license tags or false positives.
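Outside of `scancode-results-analyzer`, the underlying call looks roughly like the sketch below (it assumes the published TensorFlow checkpoint carries the fine-tuned classification head; the 0/1 label mapping is illustrative, not authoritative):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "ayansinha/false-positives-scancode-bert-base-uncased-L8-1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# Sentences are padded/truncated to the 8-token length used during fine-tuning.
inputs = tokenizer(["license: mit"], padding="max_length", truncation=True,
                   max_length=8, return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])  # class index; the index-to-label mapping is assumed
```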
#### Limitations and bias
As this model is a fine-tuned version of the [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) model,
it inherits the same biases. However, since the fine-tuning task is a very narrow one
(license tags vs. false positives) in which those biases have little room to surface,
it is safe to assume they largely do not apply here.
## Training and Fine-Tuning Data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
Then this `bert-base-uncased` model was fine-tuned on Scancode Rule texts, specifically
trained in the context of sentence classification, where the two classes are
- License Tags
- False Positives of License Tags
## Training procedure
For fine-tuning procedure and training, refer `scancode-results-analyzer` code.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
In `NLPModelsTrain` class, function `prepare_input_data_false_positive` prepares the
training data.
In `NLPModelsTrain` class, function `train_basic_false_positive_classifier` fine-tunes
this classifier.
1. Model - [BertBaseUncased](https://huggingface.co/bert-base-uncased) (Weights 0.5 GB)
2. Sentence Length - 8
3. Labels - 2 (False Positive/License Tag)
4. After 4-6 Epochs of Fine-Tuning with learning rate 2e-5 (6 secs each on an RTX 2060)
Note: The classes aren't balanced.
## Eval results
- Accuracy on the training data (90%) : 0.99 (+- 0.005)
- Accuracy on the validation data (10%) : 0.96 (+- 0.015)
The misclassified examples have lower confidence scores, so applying a threshold on the
confidence scores makes this almost a perfect classifier, as the classification task is
comparatively easy.
Results are stable, in the sense that the fine-tuning accuracy is reached easily on every
run, although more training epochs make the model overfit, i.e. the training loss keeps
decreasing while the validation loss increases, even though the accuracies remain very
stable even when overfitting.
|
ayansinha/lic-class-scancode-bert-base-cased-L32-1 | 2021-05-19T12:04:46.000Z | [
"tf",
"bert",
"masked-lm",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:scancode-rules",
"transformers",
"license",
"sentence-classification",
"scancode",
"license-compliance",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| ayansinha | 24 | transformers | ---
language: en
tags:
- license
- sentence-classification
- scancode
- license-compliance
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- scancode-rules
version: 1.0
---
# `lic-class-scancode-bert-base-cased-L32-1`
## Intended Use
This model is intended to be used for Sentence Classification which is used for results
analysis in [`scancode-results-analyzer`](https://github.com/nexB/scancode-results-analyzer).
`scancode-results-analyzer` helps detect faulty scans in [`scancode-toolkit`](https://github.com/nexB/scancode-results-analyzer) by using statistics and nlp modeling, among other tools,
to make Scancode better.
## How to Use
Refer [quickstart](https://github.com/nexB/scancode-results-analyzer#quickstart---local-machine) section in `scancode-results-analyzer` documentation, for installing and getting started.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
Then, in the `NLPModelsPredict` class, the function `predict_basic_lic_class` uses this classifier to
predict the license class of a sentence (license text, notice, tag, or reference).
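Outside of `scancode-results-analyzer`, the underlying call looks roughly like the sketch below (it assumes the published TensorFlow checkpoint carries the fine-tuned 4-class head; the label order shown is illustrative, and the authoritative mapping lives in the `scancode-results-analyzer` code):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "ayansinha/lic-class-scancode-bert-base-cased-L32-1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# Illustrative label order only -- check the analyzer code for the real mapping.
labels = ["license-text", "license-notice", "license-tag", "license-reference"]

inputs = tokenizer(["Licensed under the Apache License, Version 2.0."],
                   padding="max_length", truncation=True, max_length=32, return_tensors="tf")
pred = int(tf.argmax(model(**inputs).logits, axis=-1)[0])
print(labels[pred])
```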
## Limitations and Bias
As this model is a fine-tuned version of the [`bert-base-cased`](https://huggingface.co/bert-base-cased) model,
it inherits the same biases. However, since the fine-tuning task is a very narrow one
(license text/notice/tag/reference) in which those biases have little room to surface,
it is safe to assume they largely do not apply here.
## Training and Fine-Tuning Data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
Then this `bert-base-cased` model was fine-tuned on Scancode Rule texts, specifically
trained in the context of sentence classification, where the four classes are
- License Text
- License Notice
- License Tag
- License Reference
## Training Procedure
For fine-tuning procedure and training, refer `scancode-results-analyzer` code.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
In `NLPModelsTrain` class, function `prepare_input_data_false_positive` prepares the
training data.
In `NLPModelsTrain` class, function `train_basic_false_positive_classifier` fine-tunes
this classifier.
1. Model - [BertBaseCased](https://huggingface.co/bert-base-cased) (Weights 0.5 GB)
2. Sentence Length - 32
3. Labels - 4 (License Text/Notice/Tag/Reference)
4. After 4 Epochs of Fine-Tuning with learning rate 2e-5 (60 secs each on an RTX 2060)
Note: The classes aren't balanced.
## Eval Results
- Accuracy on the training data (90%) : 0.98 (+- 0.01)
- Accuracy on the validation data (10%) : 0.84 (+- 0.01)
## Further Work
1. Applying Splitting/Aggregation Strategies
2. Data Augmentation According to Validation Errors
3. Bigger/Better-Suited Models
|
aychang/bert-base-cased-trec-coarse | 2021-05-19T12:05:27.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:trec",
"transformers",
"license:mit"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| aychang | 71 | transformers | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- trec
metrics:
---
# bert-base-cased trained on TREC 6-class task
## Model description
A simple base BERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/bert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/bert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
Preprocessing, hardware used, hyperparameters...
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
save_steps=3000
)
```
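The `Trainer` wiring itself is not shown in the card; a minimal sketch of how these arguments were likely consumed is below (the `trec` column names and the preprocessing details are assumptions, not taken from the original training script):
```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer

base_model = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=6)

dataset = load_dataset("trec")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

# Rename the coarse label column to "labels" so Trainer picks it up
# (column names are assumptions about the "trec" dataset schema).
dataset = dataset.map(tokenize, batched=True).rename_column("coarse_label", "labels")

trainer = Trainer(
    model=model,
    args=training_args,              # the TrainingArguments defined above
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```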
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.974,
'eval_f1': array([0.98181818, 0.94444444, 1. , 0.99236641, 0.96995708,
0.98159509]),
'eval_loss': 0.138086199760437,
'eval_precision': array([0.98540146, 0.98837209, 1. , 0.98484848, 0.94166667,
0.97560976]),
'eval_recall': array([0.97826087, 0.90425532, 1. , 1. , 1. ,
0.98765432]),
'eval_runtime': 1.6132,
'eval_samples_per_second': 309.943}
```
|
aychang/bert-large-cased-whole-word-masking-finetuned-squad | 2021-01-25T08:04:52.000Z | [
"en",
"dataset:squad",
"question-answering",
"torchscript",
"FastNN",
"license:mit"
]
| question-answering | [
".gitattributes",
"README.md",
"config.pbtxt",
"1/model.pt"
]
| aychang | 0 | ---
language:
- en
thumbnail:
tags:
- question-answering
- torchscript
- FastNN
license: mit
datasets:
- squad
metrics:
---
# TorchScript model of bert-large-cased-whole-word-masking-finetuned-squad
## Model description
A serialized torchscript model of bert-large-cased-whole-word-masking-finetuned-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server. |
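For a quick local check outside Triton, something like the following sketch should work with `torch.jit.load` (the traced module's input order and tuple output are assumptions -- confirm them against the `config.pbtxt` in this repo):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased-whole-word-masking-finetuned-squad")
model = torch.jit.load("1/model.pt").eval()

question = "Who wrote the report?"
context = "The report was written by Jane Doe in 2018."
enc = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    # Assumed signature: (input_ids, attention_mask, token_type_ids) -> (start_logits, end_logits)
    start_logits, end_logits = model(enc["input_ids"], enc["attention_mask"], enc["token_type_ids"])

start, end = int(start_logits.argmax()), int(end_logits.argmax())
print(tokenizer.decode(enc["input_ids"][0][start:end + 1]))
```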
|
aychang/distilbert-base-cased-trec-coarse | 2021-01-24T20:14:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:trec",
"transformers",
"license:mit"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| aychang | 17 | transformers | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- trec
metrics:
---
# TREC 6-class Task: distilbert-base-cased
## Model description
A simple base distilBERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/distilbert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/distilbert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
Preprocessing, hardware used, hyperparameters...
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=500,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.97,
'eval_f1': array([0.98220641, 0.91620112, 1. , 0.97709924, 0.98678414,
0.97560976]),
'eval_loss': 0.14275787770748138,
'eval_precision': array([0.96503497, 0.96470588, 1. , 0.96969697, 0.98245614,
0.96385542]),
'eval_recall': array([1. , 0.87234043, 1. , 0.98461538, 0.99115044,
0.98765432]),
'eval_runtime': 0.9731,
'eval_samples_per_second': 513.798}
```
|
aychang/distilbert-squad | 2021-01-25T08:37:16.000Z | [
"en",
"dataset:squad",
"question-answering",
"torchscript",
"FastNN",
"license:mit"
]
| question-answering | [
".gitattributes",
"README.md",
"config.pbtxt",
"1/model.pt"
]
| aychang | 0 | ---
language:
- en
thumbnail:
tags:
- question-answering
- torchscript
- FastNN
license: mit
datasets:
- squad
metrics:
---
# TorchScript model of distilbert-squad
## Model description
A serialized torchscript model of distilbert-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server. |
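A rough sketch of calling the deployed model through the Triton HTTP client is shown below; the tensor names, datatypes and the tokenizer checkpoint are all assumptions and need to be checked against the `config.pbtxt` in this repo:
```python
import numpy as np
import tritonclient.http as httpclient
from transformers import AutoTokenizer

# Tokenizer checkpoint is assumed; DistilBERT has no token_type_ids, so only two inputs are sent.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
enc = tokenizer("Who wrote it?", "It was written by Jane Doe.", return_tensors="np")

client = httpclient.InferenceServerClient(url="localhost:8000")
inputs = []
for name, arr in (("input__0", enc["input_ids"]), ("input__1", enc["attention_mask"])):
    inp = httpclient.InferInput(name, list(arr.shape), "INT64")  # names/dtypes assumed
    inp.set_data_from_numpy(arr.astype(np.int64))
    inputs.append(inp)

result = client.infer(model_name="distilbert-squad", inputs=inputs)
start_logits = result.as_numpy("output__0")   # output names assumed
end_logits = result.as_numpy("output__1")
```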
|
aychang/fasterrcnn-resnet50-cpu | 2021-01-25T08:29:49.000Z | [
"en",
"dataset:coco",
"object-detection",
"torchscript",
"FastNN",
"license:mit"
]
| object-detection | [
".gitattributes",
"README.md",
"config.pbtxt",
"1/model.pt"
]
| aychang | 0 | ---
language:
- en
thumbnail:
tags:
- object-detection
- torchscript
- FastNN
license: mit
datasets:
- coco
metrics:
---
# TorchScript model of faster-rcnn
## Model description
A serialized torchscript model of [faster-rcnn](https://pytorch.org/vision/stable/models.html#faster-r-cnn) with a config.pbtxt for deployment using NVIDIA Triton Inference Server. |
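For a quick local check outside Triton, a sketch along these lines should work (it assumes the serialized module keeps torchvision's detection interface of a list of CHW float tensors in and a `(losses, detections)` tuple out):
```python
import torch
from PIL import Image
from torchvision import transforms

model = torch.jit.load("1/model.pt").eval()

# Any RGB image; torchvision detection models take a list of 3xHxW float tensors in [0, 1].
img = transforms.ToTensor()(Image.open("example.jpg").convert("RGB"))

with torch.no_grad():
    losses, detections = model([img])  # scripted torchvision detection models return a tuple

print(detections[0]["boxes"].shape, detections[0]["scores"][:5])
```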
|
aychang/roberta-base-imdb | 2021-05-20T14:25:56.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:imdb",
"transformers",
"license:mit"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json",
"vocab.txt"
]
| aychang | 838 | transformers | ---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- imdb
metrics:
---
# IMDB Sentiment Task: roberta-base
## Model description
A simple base roBERTa model trained on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
IMDB https://huggingface.co/datasets/imdb
## Training procedure
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=800,
save_steps=300000
)
```
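The metric function behind the eval results below is not included in the card; a plausible minimal sketch that would produce per-class arrays like those reported (the exact implementation used is not part of this card):
```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average=None)
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,  # per-class arrays, matching the format of the results below
        "recall": recall,
        "f1": f1,
    }

# Passed to Trainer alongside the TrainingArguments above:
# Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=...,
#         compute_metrics=compute_metrics)
```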
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.94668,
'eval_f1': array([0.94603457, 0.94731017]),
'eval_loss': 0.2578844428062439,
'eval_precision': array([0.95762642, 0.93624502]),
'eval_recall': array([0.93472, 0.95864]),
'eval_runtime': 244.7522,
'eval_samples_per_second': 102.144}
```
|
ayushsubedi/v0_drg | 2021-03-16T14:43:17.000Z | []
| [
".gitattributes"
]
| ayushsubedi | 0 | |||
azazbutt7/image_captioning | 2021-04-21T02:25:56.000Z | []
| [
".gitattributes"
]
| azazbutt7 | 0 | |||
azkamovie/hkiytkiyii | 2021-05-20T13:46:11.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/rtujtejueju | 2021-06-02T17:23:40.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/treutute | 2021-06-04T18:31:38.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/tykirykiyi | 2021-06-06T18:05:08.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/tykiyirkir | 2021-06-09T18:15:27.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/tyujrtjuti | 2021-05-29T12:46:13.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/tyujtruu | 2021-05-26T18:41:52.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/yhyhtyty | 2021-05-20T13:49:48.000Z | []
| [
".gitattributes",
"yedyry"
]
| azkamovie | 0 | |||
azkamovie/yoikyutoyro | 2021-05-23T05:42:30.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/ytjrfyiy | 2021-06-14T20:40:31.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/yuiyikryiyi | 2021-06-17T18:08:30.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 |