modelId (string, 4-112 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (21 classes) | files (list) | publishedBy (string, 2-37 chars) | downloads_last_month (int32, 0-9.44M) | library (15 classes) | modelCard (string, 0-100k chars)
---|---|---|---|---|---|---|---|---
allenai/specter | 2021-05-18T23:28:05.000Z | [
"pytorch",
"jax",
"bert",
"en",
"dataset:SciDocs",
"arxiv:2004.07180",
"transformers",
"license:apache-2.0"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| allenai | 4,086 | transformers | ---
language: en
thumbnail: "https://camo.githubusercontent.com/7d080b7a769f7fdf64ac0ebeb47b039cb50be35287e3071f9d633f0fe33e7596/68747470733a2f2f692e6962622e636f2f33544331576d472f737065637465722d6c6f676f2d63726f707065642e706e67"
license: apache-2.0
datasets:
- SciDocs
metrics:
- F1
- accuracy
- map
- ndcg
---
## SPECTER
SPECTER is a pre-trained language model that generates document-level embeddings of documents. It is pre-trained on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning.
Paper: [SPECTER: Document-level Representation Learning using Citation-informed Transformers](https://arxiv.org/pdf/2004.07180.pdf)
Original Repo: [Github](https://github.com/allenai/specter)
Evaluation Benchmark: [SciDocs](https://github.com/allenai/scidocs)
Authors: *Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld*
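A minimal embedding sketch (following the usage shown in the original repo; the example paper is a placeholder):
```python
from transformers import AutoTokenizer, AutoModel

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

papers = [{"title": "BERT", "abstract": "We introduce a new language representation model called BERT."}]
# concatenate title and abstract with the tokenizer's separator token
title_abs = [d["title"] + tokenizer.sep_token + d["abstract"] for d in papers]
inputs = tokenizer(title_abs, padding=True, truncation=True, return_tensors="pt", max_length=512)
result = model(**inputs)
# take the first token ([CLS]) of each sequence as the document embedding
embeddings = result.last_hidden_state[:, 0, :]
```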
|
|
allenai/t5-small-next-word-generator-qoogle | 2021-03-26T17:39:00.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 43 | transformers | Next word generator trained on questions. Receives partial questions and tries to predict the next word.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-next-word-generator-qoogle"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Which")
run_model("Which two")
run_model("Which two counties")
run_model("Which two counties are")
run_model("Which two counties are the")
run_model("Which two counties are the biggest")
run_model("Which two counties are the biggest economic")
run_model("Which two counties are the biggest economic powers")
```
which should result in the following:
```
['one']
['statements']
['are']
['in']
['most']
['in']
['zones']
['of']
```
|
allenai/t5-small-squad11 | 2021-03-26T19:30:13.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 16 | transformers | SQuAD 1.1 question-answering based on T5-small.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-next-word-generator-qoogle"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Who is the winner of 2009 olympics? \n Jack and Jill participated, but James won the games.")```
which should result in the following:
```
['James']
```
|
allenai/t5-small-squad2-next-word-generator-squad | 2021-03-26T16:48:51.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 15 | transformers | Next word generator trained on questions. Receives partial questions and tries to predict the next word.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-squad2-next-word-generator-squad"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("Which")
run_model("Which two")
run_model("Which two counties")
run_model("Which two counties are")
run_model("Which two counties are the")
run_model("Which two counties are the biggest")
run_model("Which two counties are the biggest economic")
run_model("Which two counties are the biggest economic powers")
```
which should result in the following:
```
['one']
['statements']
['are']
['in']
['most']
['in']
['zones']
['of']
```
|
allenai/t5-small-squad2-question-generation | 2021-03-10T21:20:29.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 71 | transformers | A simple question-generation model built on the SQuAD 2.0 dataset.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-squad2-question-generation"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
```
which should result in the following:
```
['What is the name of the man who is a brotherly love?']
['What did He thank all fellow bloggers and organizations that showed support?']
['Where is the Veliefendi Hippodrome located?']
```
|
allenai/unifiedqa-t5-11b | 2020-11-13T12:10:29.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 286 | transformers | |
allenai/unifiedqa-t5-3b | 2020-11-13T11:54:17.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 582 | transformers | |
allenai/unifiedqa-t5-base | 2020-10-29T20:29:37.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 2,188 | transformers | |
allenai/unifiedqa-t5-large | 2020-10-29T20:40:22.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 3,672 | transformers | |
allenai/unifiedqa-t5-small | 2020-10-29T20:21:32.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| allenai | 2,459 | transformers | |
allenai/wmt16-en-de-12-1 | 2020-12-11T21:33:17.000Z | [
"pytorch",
"fsmt",
"seq2seq",
"en",
"de",
"dataset:wmt16",
"arxiv:2006.10369",
"transformers",
"translation",
"wmt16",
"allenai",
"license:apache-2.0",
"text2text-generation"
]
| translation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab-src.json",
"vocab-tgt.json"
]
| allenai | 26 | transformers |
---
language:
- en
- de
thumbnail:
tags:
- translation
- wmt16
- allenai
license: apache-2.0
datasets:
- wmt16
metrics:
- bleu
---
# FSMT
## Model description
This is a ported version of fairseq-based [wmt16 transformer](https://github.com/jungokasai/deep-shallow/) for en-de.
For more details, please, see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
All 3 models are available:
* [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
* [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
* [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt16-en-de-12-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Maschinelles Lernen ist großartig, nicht wahr?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please, see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | fairseq | transformers
-------|---------|----------
wmt16-en-de-12-1 | 26.9 | 25.75
The score is slightly below the one reported in the paper, because the researchers don't use `sacrebleu` and measure the score on tokenized outputs, whereas the `transformers` score was measured using `sacrebleu` on detokenized outputs.
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-12-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt16/)
- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
year={2020},
eprint={2006.10369},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
allenai/wmt16-en-de-dist-12-1 | 2020-12-11T21:33:20.000Z | [
"pytorch",
"fsmt",
"seq2seq",
"en",
"de",
"dataset:wmt16",
"arxiv:2006.10369",
"transformers",
"translation",
"wmt16",
"allenai",
"license:apache-2.0",
"text2text-generation"
]
| translation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab-src.json",
"vocab-tgt.json"
]
| allenai | 34 | transformers |
---
language:
- en
- de
thumbnail:
tags:
- translation
- wmt16
- allenai
license: apache-2.0
datasets:
- wmt16
metrics:
- bleu
---
# FSMT
## Model description
This is a ported version of fairseq-based [wmt16 transformer](https://github.com/jungokasai/deep-shallow/) for en-de.
For more details, please, see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
All 3 models are available:
* [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
* [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
* [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt16-en-de-dist-12-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Maschinelles Lernen ist großartig, nicht wahr?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please, see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | fairseq | transformers
-------|---------|----------
wmt16-en-de-dist-12-1 | 28.3 | 27.52
The score is slightly below the one reported in the paper, because the researchers don't use `sacrebleu` and measure the score on tokenized outputs, whereas the `transformers` score was measured using `sacrebleu` on detokenized outputs.
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-12-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt16/)
- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
year={2020},
eprint={2006.10369},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
allenai/wmt16-en-de-dist-6-1 | 2020-12-11T21:33:24.000Z | [
"pytorch",
"fsmt",
"seq2seq",
"en",
"de",
"dataset:wmt16",
"arxiv:2006.10369",
"transformers",
"translation",
"wmt16",
"allenai",
"license:apache-2.0",
"text2text-generation"
]
| translation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab-src.json",
"vocab-tgt.json"
]
| allenai | 34 | transformers |
---
language:
- en
- de
thumbnail:
tags:
- translation
- wmt16
- allenai
license: apache-2.0
datasets:
- wmt16
metrics:
- bleu
---
# FSMT
## Model description
This is a ported version of fairseq-based [wmt16 transformer](https://github.com/jungokasai/deep-shallow/) for en-de.
For more details, please, see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
All 3 models are available:
* [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
* [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
* [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt16-en-de-dist-6-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Maschinelles Lernen ist großartig, nicht wahr?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please, see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | fairseq | transformers
-------|---------|----------
wmt16-en-de-dist-6-1 | 27.4 | 27.11
The score is slightly below the one reported in the paper, because the researchers don't use `sacrebleu` and measure the score on tokenized outputs, whereas the `transformers` score was measured using `sacrebleu` on detokenized outputs.
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-6-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt16/)
- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
year={2020},
eprint={2006.10369},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
allenai/wmt19-de-en-6-6-base | 2020-12-11T21:33:27.000Z | [
"pytorch",
"fsmt",
"seq2seq",
"de",
"en",
"dataset:wmt19",
"arxiv:2006.10369",
"transformers",
"translation",
"wmt19",
"allenai",
"license:apache-2.0",
"text2text-generation"
]
| translation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab-src.json",
"vocab-tgt.json"
]
| allenai | 223 | transformers |
---
language:
- de
- en
thumbnail:
tags:
- translation
- wmt19
- allenai
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
---
# FSMT
## Model description
This is a ported version of fairseq-based [wmt19 transformer](https://github.com/jungokasai/deep-shallow/) for de-en.
For more details, please, see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
2 models are available:
* [wmt19-de-en-6-6-big](https://huggingface.co/allenai/wmt19-de-en-6-6-big)
* [wmt19-de-en-6-6-base](https://huggingface.co/allenai/wmt19-de-en-6-6-base)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt19-de-en-6-6-base"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Maschinelles Lernen ist großartig, nicht wahr?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Machine learning is great, isn't it?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please, see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | transformers
-------|---------
wmt19-de-en-6-6-base | 38.37
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=de-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt19-de-en-6-6-base $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt19/)
- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
year={2020},
eprint={2006.10369},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
allenai/wmt19-de-en-6-6-big | 2020-12-11T21:33:31.000Z | [
"pytorch",
"fsmt",
"seq2seq",
"de",
"en",
"dataset:wmt19",
"arxiv:2006.10369",
"transformers",
"translation",
"wmt19",
"allenai",
"license:apache-2.0",
"text2text-generation"
]
| translation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab-src.json",
"vocab-tgt.json"
]
| allenai | 119 | transformers |
---
language:
- de
- en
thumbnail:
tags:
- translation
- wmt19
- allenai
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
---
# FSMT
## Model description
This is a ported version of fairseq-based [wmt19 transformer](https://github.com/jungokasai/deep-shallow/) for de-en.
For more details, please, see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
2 models are available:
* [wmt19-de-en-6-6-big](https://huggingface.co/allenai/wmt19-de-en-6-6-big)
* [wmt19-de-en-6-6-base](https://huggingface.co/allenai/wmt19-de-en-6-6-base)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt19-de-en-6-6-big"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Maschinelles Lernen ist großartig, nicht wahr?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Machine learning is great, isn't it?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please, see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | transformers
-------|---------
wmt19-de-en-6-6-big | 39.9
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=de-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt19-de-en-6-6-big $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt19/)
- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
year={2020},
eprint={2006.10369},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
allenyummy/chinese-bert-wwm-ehr-ner-qasl | 2021-05-19T11:42:17.000Z | [
"pytorch",
"bert",
"zh-tw",
"transformers"
]
| [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| allenyummy | 13 | transformers | ---
language: zh-tw
---
# Model name
Chinese-bert-wwm-electronic-health-records-ner-question-answering-sequence-labeling
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-qasl")
model = AutoModelForTokenClassification.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-qasl")
``` |
|
allenyummy/chinese-bert-wwm-ehr-ner-sl | 2021-05-19T11:42:42.000Z | [
"pytorch",
"bert",
"zh-tw",
"transformers"
]
| [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| allenyummy | 83 | transformers | ---
language: zh-tw
---
# Model name
Chinese-bert-wwm-electronic-health-records-ner-sequence-labeling
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-sl")
model = AutoModelForTokenClassification.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-sl")
```
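A minimal tagging sketch (my own illustration; the example sentence is a placeholder and the label set comes from the model config):
```python
import torch

text = "病患有糖尿病病史"  # placeholder EHR sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# pick the highest-scoring label id per token and map it to its label name
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
print([(t, model.config.id2label[i]) for t, i in zip(tokens, pred_ids)])
```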
|
|
alokmatta/wav2vec2-large-xlsr-53-sw | 2021-03-29T00:29:17.000Z | [
"pytorch",
"wav2vec2",
"sw",
"dataset:ALFFA,Gamayun & IWSLT",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json",
".ipynb_checkpoints/README-checkpoint.md"
]
| alokmatta | 430 | transformers | ---
language: {lang_id} #TODO: replace {lang_id} with your language code. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
datasets:
- common_voice #TODO: remove if you did not use the common voice dataset
- TODO: add more datasets if you have used additional datasets. Make sure to use the exact same
dataset name as the one found [here](https://huggingface.co/datasets). If the dataset can not be found in the official datasets, just give it a new name
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: {human_readable_name} #TODO: replace {human_readable_name} with a name of your model as it should appear on the leaderboard. It could be something like `Elgeish XLSR Wav2Vec2 Large 53`
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice {lang_id} #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
type: common_voice
args: {lang_id} #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
metrics:
- name: Test WER
type: wer
value: {wer_result_on_test} #TODO (IMPORTANT): replace {wer_result_on_test} with the WER error rate you achieved on the common_voice test set. It should be in the format XX.XX (don't add the % sign here). **Please** remember to fill out this value after you evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, please remember to edit the model card afterward to fill in your value
---
# Wav2Vec2-Large-XLSR-53-{language} #TODO: replace language with your {language}, *e.g.* French
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on {language} using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... dataset{s}. #TODO: replace {language} with your language, *e.g.* French and eventually add more datasets that were used and eventually remove common voice if model was not trained on common voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the {language} test data of Common Voice. # TODO: replace {language} with your language, *e.g.* French
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "{lang_id}", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: XX.XX % # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags.
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. |
alondraa15/Holo | 2021-03-13T08:26:32.000Z | []
| [
".gitattributes",
"README.md"
]
| alondraa15 | 0 | |||
aloxatel/3JQ | 2021-05-20T13:38:29.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 19 | transformers | |
aloxatel/3RH | 2021-05-20T13:41:07.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 17 | transformers | |
aloxatel/7EG | 2021-05-20T13:44:27.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 18 | transformers | |
aloxatel/9WT | 2021-05-18T23:30:14.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| aloxatel | 20 | transformers | |
aloxatel/AVG | 2021-05-20T13:47:24.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 19 | transformers | |
aloxatel/KS8 | 2021-05-20T13:52:21.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 16 | transformers | |
aloxatel/QHR | 2021-05-20T13:57:08.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 17 | transformers | |
aloxatel/W1G | 2021-05-20T14:00:07.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 21 | transformers | |
aloxatel/W2L | 2021-05-20T14:04:23.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| aloxatel | 23 | transformers | |
aloxatel/bert-base-mnli | 2021-05-18T23:31:06.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| aloxatel | 51 | transformers | |
aloxatel/mbert | 2021-05-19T11:43:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt",
".ipynb_checkpoints/config-checkpoint.json"
]
| aloxatel | 27 | transformers | |
alperbayram/first_model | 2021-05-22T12:29:00.000Z | []
| [
".gitattributes"
]
| alperbayram | 0 | |||
alvaroalon2/biobert_chemical_ner | 2021-05-28T10:50:03.000Z | [
"pytorch",
"bert",
"token-classification",
"English",
"dataset:BC5CDR-chemicals",
"dataset:BC4CHEMD",
"transformers",
"NER",
"Biomedical",
"Chemicals"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| alvaroalon2 | 94 | transformers | ---
language: "English"
tags:
- token-classification
- NER
- Biomedical
- Chemicals
datasets:
- BC5CDR-chemicals
- BC4CHEMD
---
BioBERT model fine-tuned for NER on the BC5CDR-chemicals and BC4CHEMD corpora.
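A minimal tagging sketch (my own illustration; the example sentence is a placeholder):
```python
from transformers import pipeline

# token-classification pipeline; "simple" aggregation merges word pieces into entity spans
ner = pipeline(
    "ner",
    model="alvaroalon2/biobert_chemical_ner",
    aggregation_strategy="simple",
)
print(ner("The patient was treated with aspirin and ibuprofen."))
```
|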
alvaroalon2/biobert_diseases_ner | 2021-05-28T11:05:16.000Z | [
"pytorch",
"bert",
"token-classification",
"English",
"dataset:BC5CDR-diseases",
"dataset:ncbi_disease",
"transformers",
"NER",
"Biomedical",
"Diseases"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"eval_results.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| alvaroalon2 | 119 | transformers | ---
language: "English"
tags:
- token-classification
- NER
- Biomedical
- Diseases
datasets:
- BC5CDR-diseases
- ncbi_disease
---
BioBERT model fine-tuned for NER on the BC5CDR-diseases and NCBI-disease corpora. |
alvaroalon2/biobert_genetic_ner | 2021-05-28T11:26:58.000Z | [
"pytorch",
"bert",
"token-classification",
"English",
"dataset:JNLPBA",
"dataset:BC2GM",
"transformers",
"NER",
"Biomedical",
"Genetics"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| alvaroalon2 | 31 | transformers | ---
language: "English"
tags:
- token-classification
- NER
- Biomedical
- Genetics
datasets:
- JNLPBA
- BC2GM
---
BioBERT model fine-tuned for NER on the JNLPBA and BC2GM corpora, for genetic-class entities. |
alvinhou/model_test | 2021-02-10T04:52:59.000Z | []
| [
".gitattributes",
"README.md"
]
| alvinhou | 0 | Hi! |
||
amaleshv/gender_detection | 2021-04-20T09:48:07.000Z | []
| [
".gitattributes",
"README.md"
]
| amaleshv | 0 | |||
amazon/bort | 2021-05-18T23:32:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"arxiv:2010.10499",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| amazon | 724 | transformers | ⚠️ **Disclaimer** ⚠️
This model is community-contributed, and not supported by Amazon, Inc.
## BORT
[Amazon's BORT](https://www.amazon.science/blog/a-version-of-the-bert-language-model-thats-20-times-as-fast)
BORT is a highly compressed version of [bert-large](https://huggingface.co/bert-large-uncased) that is up to 10 times faster at inference.
The model is an optimal sub-architecture of *bert-large* that was found using neural architecture search.
[Paper](https://arxiv.org/abs/2010.10499)
**Abstract**
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
The original model can be found under:
https://github.com/alexa/bort
**IMPORTANT**
BORT requires a very unique fine-tuning algorithm, called [Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html) which is not open-sourced yet.
Standard fine-tuning has not been shown to work well in initial experiments, so stay tuned for updates!
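For plain masked-token inference, a minimal sketch along these lines should work (my own illustration, not an official example; it assumes the hosted tokenizer config resolves through the standard pipeline):
```python
from transformers import pipeline

# BORT ships a RoBERTa-style vocabulary (vocab.json + merges.txt),
# so the mask token is "<mask>"
unmasker = pipeline("fill-mask", model="amazon/bort")
print(unmasker("Paris is the <mask> of France."))
```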
|
amberoad/bert-multilingual-passage-reranking-msmarco | 2021-05-18T23:33:46.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"multilingual",
"dataset:msmarco",
"arxiv:1901.04085",
"transformers",
"msmarco",
"passage reranking",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| amberoad | 5,279 | transformers | ---
language: multilingual
thumbnail: "https://amberoad.de/images/logo_text.png"
tags:
- msmarco
- multilingual
- passage reranking
license: apache-2.0
datasets:
- msmarco
metrics:
- MRR
widget:
- query: "What is a corporation?"
passage: "A company is incorporated in a specific nation, often within the bounds of a smaller subset of that nation, such as a state or province. The corporation is then governed by the laws of incorporation in that state. A corporation may issue stock, either private or public, or may be classified as a non-stock corporation. If stock is issued, the corporation will usually be governed by its shareholders, either directly or indirectly."
---
# Passage Reranking Multilingual BERT 🔃 🌍
## Model description
**Input:** Supports over 100 Languages. See [List of supported languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for all available.
**Purpose:** This module takes a search query [1] and a passage [2] and calculates if the passage matches the query.
It can be used as an improvement for Elasticsearch Results and boosts the relevancy by up to 100%.
**Architecture:** On top of BERT there is a densely connected NN which takes the 768-dimensional [CLS] token as input and provides the output ([Arxiv](https://arxiv.org/abs/1901.04085)).
**Output:** A single value between -10 and 10. Better-matching query/passage pairs tend to have a higher score.
## Intended uses & limitations
Both query[1] and passage[2] have to fit in 512 Tokens.
As you normally want to rerank the first dozens of search results, keep in mind the inference time of approximately 300 ms/query.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
```
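Scoring a query/passage pair is then a standard sequence-classification call; a minimal sketch (my own illustration, assuming the second of the two logits is the "passage matches" class):
```python
import torch

query = "What is a corporation?"
passage = "A company is incorporated in a specific nation..."

# encode the query/passage pair and score it
inputs = tokenizer(query, passage, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# probability that the passage matches the query
score = torch.softmax(logits, dim=1)[0, 1].item()
print(score)
```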
This Model can be used as a drop-in replacement in the [Nboost Library](https://github.com/koursaros-ai/nboost)
Through this you can directly improve your Elasticsearch Results without any coding.
## Training data
This model is trained using the [**Microsoft MS Marco Dataset**](https://microsoft.github.io/msmarco/ "Microsoft MS Marco"). This training dataset contains approximately 400M tuples of a query, relevant and non-relevant passages. All datasets used for training and evaluating are listed in this [table](https://github.com/microsoft/MSMARCO-Passage-Ranking#data-information-and-formating). The used dataset for training is called *Train Triples Large*, while the evaluation was made on *Top 1000 Dev*. There are 6,900 queries in total in the development dataset, where each query is mapped to top 1,000 passage retrieved using BM25 from MS MARCO corpus.
## Training procedure
The training is performed the same way as stated in this [README](https://github.com/nyu-dl/dl4marco-bert "NYU Github"). See their excellent Paper on [Arxiv](https://arxiv.org/abs/1901.04085).
We changed the BERT Model from an English only to the default BERT Multilingual uncased Model from [Google](https://huggingface.co/bert-base-multilingual-uncased).
Training was done for 400,000 steps, which took 12 hours on a TPU v3-8.
## Eval results
We see nearly similar performance to the English-only model on the English [Bing Queries Dataset](http://www.msmarco.org/). Although the training data is English-only, internal tests on private data showed a far higher accuracy in German than all other available models.
Fine-tuned Models | Dependency | Eval Set | Search Boost | Speed on GPU
------------------|------------|----------|--------------|-------------
**`amberoad/Multilingual-uncased-MSMARCO`** (This Model) | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-blue"/> | <a href='http://www.msmarco.org/'>bing queries</a> | **+61%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query
`nboost/pt-tinybert-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href='http://www.msmarco.org/'>bing queries</a> | **+45%** <sub><sup>(0.26 vs 0.18)</sup></sub> | ~50 ms/query
`nboost/pt-bert-base-uncased-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href='http://www.msmarco.org/'>bing queries</a> | **+62%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query
`nboost/pt-bert-large-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href='http://www.msmarco.org/'>bing queries</a> | **+77%** <sub><sup>(0.32 vs 0.18)</sup></sub> | -
`nboost/pt-biobert-base-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href='https://github.com/naver/biobert-pretrained'>biomed</a> | **+66%** <sub><sup>(0.17 vs 0.10)</sup></sub> | ~300 ms/query
This table is taken from [nboost](https://github.com/koursaros-ai/nboost) and extended by the first line.
## Contact Info

Amberoad is a company focusing on Search and Business Intelligence.
We provide:
* Advanced Internal Company Search Engines through NLP
* External Search Engines: Find Competitors, Customers, Suppliers
**Get in Contact now to benefit from our Expertise:**
The training and evaluation were performed by [**Philipp Reissel**](https://reissel.eu/) and [**Igli Manaj**](https://github.com/iglimanaj)
[LinkedIn](https://de.linkedin.com/company/amberoad) | [Homepage](https://amberoad.de) | Email: [email protected]
|
amine/bert-base-5lang-cased | 2021-05-18T23:35:02.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"en",
"fr",
"es",
"de",
"zh",
"dataset:wikipedia",
"transformers",
"multilingual",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| amine | 496 | transformers | ---
language:
- en
- fr
- es
- de
- zh
tags:
- pytorch
- bert
- multilingual
- en
- fr
- es
- de
- zh
datasets: wikipedia
license: apache-2.0
inference: false
---
# bert-base-5lang-cased
This is a smaller version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handles only 5 languages (en, fr, es, de and zh) instead of 104.
The model is therefore 30% smaller than the original one (124M parameters instead of 178M) but gives exactly the same representations for the above cited languages.
Starting from `bert-base-5lang-cased` will facilitate the deployment of your model on public cloud platforms while keeping similar results.
For instance, Google Cloud Platform requires that the model size on disk should be lower than 500 MB for serverless deployments (Cloud Functions / Cloud ML), which is not the case for the original `bert-base-multilingual-cased`.
For more information about the models size, memory footprint and loading time please refer to the table below:
| Model | Num parameters | Size | Memory | Loading time |
| ---------------------------- | -------------- | -------- | -------- | ------------ |
| bert-base-multilingual-cased | 178 million | 714 MB | 1400 MB | 4.2 sec |
| bert-base-5lang-cased | 124 million | 495 MB | 950 MB | 3.6 sec |
These measurements have been computed on a [Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB)](https://cloud.google.com/compute/docs/machine-types\#n1_machine_type).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("amine/bert-base-5lang-cased")
model = AutoModel.from_pretrained("amine/bert-base-5lang-cased")
```
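For instance, to get contextual representations (a minimal sketch):
```python
import torch

inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# one 768-dimensional vector per token, as in the original multilingual BERT
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```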
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
amitness/nepbert | 2021-05-20T14:06:12.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"ne",
"dataset:cc100",
"transformers",
"nepali-laguage-model",
"license:mit",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
]
| amitness | 40 | transformers | ---
language:
- ne
thumbnail:
tags:
- roberta
- nepali-laguage-model
license: mit
datasets:
- cc100
widget:
- text: "तिमीलाई कस्तो <mask>?"
---
# nepbert
## Model description
Roberta trained from scratch on the Nepali CC-100 dataset with 12 million sentences.
## Intended uses & limitations
#### How to use
```python
from transformers import pipeline
pipe = pipeline(
"fill-mask",
model="amitness/nepbert",
tokenizer="amitness/nepbert"
)
print(pipe(u"तिमीलाई कस्तो <mask>?"))
```
## Training data
The data was taken from the Nepali language subset of the CC-100 dataset.
## Training procedure
The model was trained on Google Colab using `1x Tesla V100`. |
amoghsgopadi/wav2vec2-large-xlsr-kn | 2021-03-28T04:37:41.000Z | [
"pytorch",
"wav2vec2",
"kn",
"dataset:openslr",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| amoghsgopadi | 20 | transformers | ---
language: kn
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Kannada by Amogh Gopadi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR kn
type: openslr
metrics:
- name: Test WER
type: wer
value: 27.08
---
# Wav2Vec2-Large-XLSR-53-Kannada
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kannada using the [OpenSLR SLR79](http://openslr.org/79/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # The original data was with 48,000 sampling rate. You can change it according to your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on 10% of the Kannada data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.08 %
## Training
90% of the OpenSLR Kannada dataset was used for training.
The colab notebook used for training can be found [here](https://colab.research.google.com/github/amoghgopadi/wav2vec2-xlsr-kannada/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Kannada_ASR.ipynb). |
amoux/roberta-cord19-1M7k | 2021-05-20T14:07:22.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"english",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| amoux | 21 | transformers | ---
language: english
thumbnail: https://github.githubassets.com/images/icons/emoji/unicode/2695.png
widget:
- text: "Lung infiltrates cause significant morbidity and mortality in immunocompromised <mask>."
- text: "Tuberculosis appears to be an important <mask> in endemic regions especially in the non-HIV, non-hematologic malignancy group."
- text: "For vector-transmitted diseases this places huge significance on vector mortality rates as vectors usually don't <mask> an infection and instead remain infectious for life."
- text: "The lung lesions were characterized by bronchointerstitial pneumonia with accumulation of neutrophils, macrophages and necrotic debris in <mask> and bronchiolar lumens and peribronchiolar/perivascular infiltration of inflammatory cells."
---
# roberta-cord19-1M7k

> This model is based on ***RoBERTa*** and was pre-trained on 1.7 million sentences.
The training corpus consists of papers taken from *Semantic Scholar*'s CORD-19 historical releases; it comprises `13k` papers and `~60M` tokens. I used the full-text `"body_text"` of the papers in training (details below).
#### Usage
```python
from transformers import pipeline
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k")
model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k")
fillmask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
text = "Lung infiltrates cause significant morbidity and mortality in immunocompromised patients."
masked_text = text.replace("patients", tokenizer.mask_token)
predictions = fillmask(masked_text, top_k=3)
```
- Predicted tokens
```bash
[{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised patients.</s>',
'score': 0.6273621320724487,
'token': 660,
'token_str': 'Ġpatients'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised individuals.</s>',
'score': 0.19800445437431335,
'token': 1868,
'token_str': 'Ġindividuals'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised animals.</s>',
'score': 0.022069649770855904,
'token': 1471,
'token_str': 'Ġanimals'}]
```
## Dataset
- About
- name: *CORD-19: The Covid-19 Open Research Dataset*
- date: *2020-03-18*
- md5 | sha1: `a36fe181 | 8fbea927`
- text-key: `body_text`
- subsets (*total*: `13,202`):
- *biorxiv_medrxiv*: `803`
- *comm_use_subset*: `9000`
- *pmc_custom_license*: `1426`
- *noncomm_use_subset*: `1973`
- Splits (*ratio: 0.9*)
- sentences used for training: `1,687,124`
- sentences used for evaluation: `187,459`
- Total training steps: `210,890`
- Total evaluation steps: `23,433`
## Parameters
- Data
- block_size: `256`
- Training
- per_device_train_batch_size: `8`
- per_device_eval_batch_size: `8`
- gradient_accumulation_steps: `2`
- learning_rate: `5e-5`
- num_train_epochs: `2`
- fp16: `True`
  - fp16_opt_level: `'O1'`
- seed: `42`
- Output
- global_step: `210890`
- training_loss: `3.5964575726682155`
## Evaluation
- Perplexity: `17.469366079957922`
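For reference, a rough single-sample estimate of this masked-LM perplexity can be recomputed as below. This is a minimal sketch (the sample sentence and 15% masking rate are assumptions; the reported number averages the loss over the full evaluation split):
```python
import math
import torch
from transformers import (DataCollatorForLanguageModeling, RobertaForMaskedLM,
                          RobertaTokenizerFast)

tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k")
model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k")
model.eval()

# Standard MLM evaluation: mask 15% of tokens and score only those positions.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
encoding = tokenizer("Tuberculosis appears to be an important complication in endemic regions.",
                     truncation=True, max_length=256)
batch = collator([encoding])  # builds masked input_ids and matching labels

with torch.no_grad():
    loss = model(input_ids=batch["input_ids"],
                 attention_mask=batch["attention_mask"],
                 labels=batch["labels"]).loss
# If no token happened to be masked, the loss is undefined; re-run or average many samples.
print("perplexity:", math.exp(loss.item()))
```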
### Citation
> Allen Institute CORD-19 [Historical Releases](https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases.html)
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
``` |
amoux/scibert_nli_squad | 2021-05-18T23:36:56.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| amoux | 63 | transformers | |
amraniworking/testing | 2021-03-29T13:01:41.000Z | []
| [
".gitattributes"
]
| amraniworking | 0 | |||
amritchhetrib/InstgramAnalytics | 2021-01-29T14:01:12.000Z | []
| [
".gitattributes"
]
| amritchhetrib | 0 | |||
amritchhetrib/TwitterAnalytics | 2021-01-29T13:58:06.000Z | []
| [
".gitattributes"
]
| amritchhetrib | 0 | |||
amritchhetrib/TwitterSentimentalAnalysis | 2021-01-29T12:15:13.000Z | []
| [
".gitattributes"
]
| amritchhetrib | 0 | |||
amritchhetrib/model_name | 2021-01-29T12:12:22.000Z | []
| [
".gitattributes"
]
| amritchhetrib | 0 | |||
anas/wav2vec2-large-xlsr-arabic | 2021-03-28T23:26:26.000Z | [
"pytorch",
"wav2vec2",
"ar",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json",
".ipynb_checkpoints/README-checkpoint.md",
".ipynb_checkpoints/preprocessor_config-checkpoint.json",
".ipynb_checkpoints/special_tokens_map-checkpoint.json",
".ipynb_checkpoints/tokenizer_config-checkpoint.json",
".ipynb_checkpoints/vocab-checkpoint.json"
]
| anas | 161 | transformers | ---
language: ar
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hasni XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 52.18
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice Corpus 4](https://commonvoice.mozilla.org/en/datasets) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model.to("cuda")
chars_to_ignore_regex = '[\,\؟\.\!\-\;\:\'\"\☭\«\»\؛\—\ـ\_\،\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
batch["sentence"] = re.sub('[a-z]','',batch["sentence"])
batch["sentence"] = re.sub("[إأٱآا]", "ا", batch["sentence"])
noise = re.compile(""" ّ | # Tashdid
َ | # Fatha
ً | # Tanwin Fath
ُ | # Damma
ٌ | # Tanwin Damm
ِ | # Kasra
ٍ | # Tanwin Kasr
ْ | # Sukun
ـ # Tatwil/Kashida
""", re.VERBOSE)
batch["sentence"] = re.sub(noise, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.18 %
## Training
The Common Voice Corpus 4 `train` and `validation` datasets were used for training.
The script used for training can be found [here](...)
Twitter: [here](https://twitter.com/hasnii_anas)
Email: [email protected] |
ancs21/xlm-roberta-large-vi-qa | 2021-05-30T14:13:19.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"vi",
"transformers",
"license:mit"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
]
| ancs21 | 200 | transformers | ---
language: "vi"
tags:
- vi
- xlm-roberta
widget:
- text: "Toà nhà nào cao nhất Việt Nam?"
context: "Landmark 81 là một toà nhà chọc trời trong tổ hợp dự án Vinhomes Tân Cảng, một dự án có tổng mức đầu tư 40.000 tỷ đồng, do Công ty Cổ phần Đầu tư xây dựng Tân Liên Phát thuộc Vingroup làm chủ đầu tư. Toà tháp cao 81 tầng, hiện tại là toà nhà cao nhất Việt Nam và là toà nhà cao nhất Đông Nam Á từ tháng 3 năm 2018."
license: "MIT"
metrics:
- f1
- em
---
# XLM-RoBERTa large for QA on Vietnamese (also supports various other languages)
## Overview
- Language model: xlm-roberta-large
- Fine-tuned from: [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2)
- Language: Vietnamese
- Downstream-task: Extractive QA
- Dataset: [mailong25/bert-vietnamese-question-answering](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset)
- Training data: train-v2.0.json (SQuAD 2.0 format)
- Eval data: dev-v2.0.json (SQuAD 2.0 format)
- Infrastructure: 1x Tesla P100 (Google Colab)
## Performance
Evaluated on dev-v2.0.json
```
exact: 136 / 141
f1: 0.9692671394799054
```
Evaluated on Vietnamese XQuAD: [xquad.vi.json](https://github.com/deepmind/xquad/blob/master/xquad.vi.json)
```
exact: 604 / 1190
f1: 0.7224454217571596
```
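## Usage
A minimal usage sketch with the 🤗 `question-answering` pipeline (the question/context pair mirrors the widget example above):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ancs21/xlm-roberta-large-vi-qa")

result = qa(
    question="Toà nhà nào cao nhất Việt Nam?",
    context="Landmark 81 là một toà nhà chọc trời trong tổ hợp dự án Vinhomes Tân Cảng. "
            "Toà tháp cao 81 tầng, hiện tại là toà nhà cao nhất Việt Nam.",
)
print(result["answer"], result["score"])
```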
## Author
An Pham (ancs21.ps [at] gmail.com)
## License
MIT |
andreas800/hgf_models | 2021-01-13T12:13:56.000Z | []
| [
".gitattributes"
]
| andreas800 | 0 | |||
andriopa/blueBERT-base-finetuned | 2021-04-06T22:30:33.000Z | []
| [
".gitattributes"
]
| andriopa | 0 | |||
angelo/test | 2020-11-25T11:34:26.000Z | []
| [
".gitattributes"
]
| angelo | 0 | |||
angggapradiktas/model_1 | 2021-03-29T09:51:55.000Z | []
| [
".gitattributes"
]
| angggapradiktas | 0 | |||
angiquer/twitterko-cha-electra-base-discriminator | 2020-07-07T04:33:22.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| angiquer | 10 | transformers | ||
angiquer/twitterko-cha-electra-base-generator | 2020-07-07T04:41:55.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"config_default.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| angiquer | 11 | transformers | |
angiquer/twitterko-electra-base-discriminator-large | 2020-07-10T01:48:02.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| angiquer | 14 | transformers | ||
angiquer/twitterko-electra-base-discriminator | 2020-07-10T01:39:01.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| angiquer | 15 | transformers | ||
angiquer/twitterko-electra-base-generator-large | 2020-07-10T01:46:07.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| angiquer | 11 | transformers | |
angiquer/twitterko-electra-base-generator | 2020-07-10T01:44:00.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| angiquer | 11 | transformers | |
angustay/helloworld | 2021-04-17T06:54:54.000Z | []
| [
".gitattributes"
]
| angustay | 0 | |||
anhdungitvn/finbert | 2020-11-16T07:13:47.000Z | []
| [
".gitattributes"
]
| anhdungitvn | 0 | |||
anilkumar-kanasani/distilbert-base-test-3-model | 2021-01-15T10:07:03.000Z | []
| [
".gitattributes"
]
| anilkumar-kanasani | 0 | |||
aniltrkkn/wav2vec2-large-xlsr-53-turkish | 2021-03-28T19:57:15.000Z | [
"pytorch",
"wav2vec2",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| aniltrkkn | 15 | transformers | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-Large-XLSR-53-Turkish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 17.46
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from unicode_tr import unicode_tr
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("aniltrkkn/wav2vec2-large-xlsr-53-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = str(unicode_tr(re.sub(chars_to_ignore_regex, "", batch["sentence"])).lower())
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the test set
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.46 %
## Training
The `unicode_tr` package is used for lowercasing sentences, since Python's built-in `lower()` does not handle Turkish casing correctly.
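For example (a minimal illustration of the problem):
```python
from unicode_tr import unicode_tr

# Turkish distinguishes dotted "İ/i" from dotless "I/ı"; ASCII-style lower()
# maps "I" to "i", which is wrong for Turkish.
print(str(unicode_tr("ISPARTA").lower()))  # -> "ısparta" (correct Turkish)
print("ISPARTA".lower())                   # -> "isparta" (incorrect for Turkish)
```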
Since training data for Turkish is very limited, all of it is used with a K-Fold (k=5) training approach, and the best of the 5 resulting models is uploaded. Training arguments:
```bash
--num_train_epochs="30" \
--per_device_train_batch_size="32" \
--evaluation_strategy="steps" \
--activation_dropout="0.055" \
--attention_dropout="0.094" \
--feat_proj_dropout="0.04" \
--hidden_dropout="0.047" \
--layerdrop="0.041" \
--learning_rate="2.34e-4" \
--mask_time_prob="0.082" \
--warmup_steps="250"
```
All trainings took ~20 hours with a GeForce RTX 3090 Graphics Card. |
anisha2102/docvqa | 2021-02-16T09:23:20.000Z | []
| [
".gitattributes"
]
| anisha2102 | 0 | |||
anjay/vvhgvhgch | 2021-03-28T15:19:24.000Z | []
| [
".gitattributes",
"README.md"
]
| anjay | 0 | https://sites.google.com/view/fullwatchgodzillavskong2021wat
https://sites.google.com/view/vvvbbbvvv
https://sites.google.com/view/cccuuuccc
https://sites.google.com/view/fullwatchgodzillavskong2021fre
https://sites.google.com/view/gggwwppp
https://sites.google.com/view/fffrrrfff
https://sites.google.com/view/mahmoedjamil
https://sites.google.com/view/freegodzilla
https://sites.google.com/view/fullwatch123moviesgodzillavsko
https://sites.google.com/view/watchgodzillavskong2021fullfre
https://sites.google.com/view/sdkjfsdgf
https://sites.google.com/view/kseryhfdg
https://sites.google.com/view/sdfdgergdfg
https://sites.google.com/view/watchgodzillavskongfull2021fre
https://sites.google.com/view/gfergtdfg
https://sites.google.com/view/sasdasdwed
https://sites.google.com/view/sdfsdfwe
https://sites.google.com/view/free-download-godzilla-vs-kong
https://sites.google.com/view/dasdertdfg
https://sites.google.com/view/sdfiuysdgfdg
https://sites.google.com/view/yyfdfsdff
https://sites.google.com/view/asdertyddv
https://sites.google.com/view/dffghydcfr
https://sites.google.com/view/sdgfdsgfd
https://sites.google.com/view/fre-watch-godzilla-vs-kong-202
https://sites.google.com/view/freewatchgodzillavskong2021onl
https://sites.google.com/view/ttgjhjs
https://sites.google.com/view/free-godzilla-vs-kong-2021-wat
https://sites.google.com/view/uudfdfggf
https://sites.google.com/view/onlinewatch-godzillavskong2021
https://sites.google.com/view/ftysduig
https://sites.google.com/view/dfwerffy
https://sites.google.com/view/poek
https://sites.google.com/view/eeddun
https://sites.google.com/view/fullgodzillavskong2021watchonl
https://sites.google.com/view/livewatchmotogpqatar2021watchl/
https://sites.google.com/view/watch-motogp-qatar-2021-live/
https://sites.google.com/view/live-watch-motogp-qatar-2021-f/
https://sites.google.com/view/watch-motogpqatar2021live/
https://sites.google.com/view/watchmotogpqatar2021live/
https://sites.google.com/view/acascasc/
https://sites.google.com/view/freewatchmotogpqatar2021live/
https://sites.google.com/view/live-watch-streaming-motogp-qa/
https://sites.google.com/view/watch-motogp-qatar-2021-live-f/
https://sites.google.com/view/livewatchmotogpqatar2021freeon/
https://sites.google.com/view/ascascascascasc/
https://sites.google.com/view/watch-motogp-qatar-2021-online/
https://sites.google.com/view/watch-motogp-qatar-live-2021-f/
https://sites.google.com/view/free-watch-motogp-qatar-2021-l/
https://sites.google.com/view/acadwqdwq/
https://sites.google.com/view/wqqeweqwe/
https://sites.google.com/view/freestreamingmotogpqatar2021wa/
https://sites.google.com/view/watchmotogpqatar2021online/
https://sites.google.com/view/live-watch-motogp-qatar-2021-o/
https://sites.google.com/view/dsdfgfdhg/
https://sites.google.com/view/zxczxcxzczxcsa/
https://sites.google.com/view/fre-watch-motogp-qatar-2021-li/
https://sites.google.com/view/free-watch-motogp-qatar-2021-o/
https://sites.google.com/view/qwdwefrgr/
https://sites.google.com/view/free-motogp-qatar-2021-watch-l/
https://sites.google.com/view/freemotogpqatar2021watchlive/
https://sites.google.com/view/freemotogpqatar2021watchlive/
https://sites.google.com/view/online-watch-motogp-qatar-2021/
https://sites.google.com/view/streamwatchmotogpqatar2021live/
https://sites.google.com/view/ascsacxzqwqe/
https://sites.google.com/view/livewatchmotogpqatar2021freest/
https://sites.google.com/view/watch-motogp-qatar-2021-free-o/
https://sites.google.com/view/live-motogp-qatar-2021-watch-o/
https://sites.google.com/view/watch-365-days-2020-f-u-l-l-f-/
https://sites.google.com/view/watchanotherround2020fullfree/
https://sites.google.com/view/watcharmyofthedead2021fullfree/
https://sites.google.com/view/watchavengersendgame2019fullfr/
https://sites.google.com/view/watchblackwidow2021fullfree/
https://sites.google.com/view/shydturf/
https://sites.google.com/view/watchbosslevel2021fullfree/
https://sites.google.com/view/watchcaptainmarvel2019fullfree/
https://sites.google.com/view/fjkghgk/
https://sites.google.com/view/watch-cherry-2021-f-u-l-l-f-r-/
https://sites.google.com/view/watch-come-true-2020-f-u-l-l-f/
https://sites.google.com/view/watchcoming2america2021fullfre/
https://sites.google.com/view/watch-concrete-cowboy-2020-f-u/
https://sites.google.com/view/watchcosmicsin2021fullfree/
https://sites.google.com/view/watchcrisis2021fullfree/
https://sites.google.com/view/watchcruella2021fullfree/
https://sites.google.com/view/watchdeadlyillusions2021fullfr/
https://sites.google.com/view/watch-dune-2021-f-u-l-l-f-r-e-/
https://sites.google.com/view/watchf92021fullfree/
https://sites.google.com/view/watchgirlinthebasement2021full/
https://sites.google.com/view/hfhkdfkfd/
https://sites.google.com/view/fhfgjfg/
https://sites.google.com/view/hdfhdhs/
https://sites.google.com/view/dhftgfgj/
https://sites.google.com/view/fhfdjhfj/
https://sites.google.com/view/djfjfjdj/
https://sites.google.com/view/dfujfjgjkg/
https://sites.google.com/view/fhfjhfjed/
https://sites.google.com/view/watchgodzillavskong2021fulldow/
https://sites.google.com/view/sdfhyturyi/
https://sites.google.com/view/urutyiutyi/
https://sites.google.com/view/dujryirit/
https://sites.google.com/view/full-watch-123movies-godzilla-/
https://sites.google.com/view/dfjutujry/
https://sites.google.com/view/fgjiyrfikr/
https://sites.google.com/view/djdgjdjd/
https://sites.google.com/view/dfujdtjdrr/
https://sites.google.com/view/jggogogo/
https://sites.google.com/view/hfgufgut/
https://sites.google.com/view/fdjytrtyik/
https://sites.google.com/view/hcfhjfkifi/
https://sites.google.com/view/watch-godzilla-vs-kong-full-20/
https://sites.google.com/view/hfkfuuou/
https://sites.google.com/view/fkfkffkf/
https://sites.google.com/view/watchgodzillavskong2021fullmp4/
https://sites.google.com/view/fhedtjh/
https://sites.google.com/view/freedownloadgodzillavskong2021/
https://sites.google.com/view/dfhdjhdgj/
https://sites.google.com/view/full-watch-godzilla-vs-kong-20/
https://sites.google.com/view/djdgjjkdjd/
https://sites.google.com/view/zcbhszfhbs/
https://sites.google.com/view/fjdsfjsd/
https://sites.google.com/view/gjdfkjkfyk/
https://sites.google.com/view/hsfhdsdjh/
https://sites.google.com/view/fhdsfjdjd/
https://sites.google.com/view/fjsdjsdjsjs/
https://sites.google.com/view/frewatchgodzillavskong2021full/
https://sites.google.com/view/fgyifgi/
https://sites.google.com/view/zhbfsjsjs/
https://sites.google.com/view/freegodzillavskong2021watchful/
https://sites.google.com/view/ggjglgjg/
https://sites.google.com/view/dfshdsjfdgj/
https://sites.google.com/view/online-watch-godzilla-vs-kong-/
https://sites.google.com/view/shfsjsj/
https://sites.google.com/view/dfhfsjjss/
https://sites.google.com/view/wydj/
https://sites.google.com/view/zhsfsjs/
https://sites.google.com/view/full-godzilla-vs-kong-2021-wat/
https://dispenst.medium.com/cases-rising-in-the-us-vaccines-safe-and-effective-for-babies-study-shows-covid-19-updates-d8388bcb368e
https://dispenst.medium.com/40-million-vaccine-doses-thus-far-unused-35-of-american-adults-have-had-a-shot-live-covid-19-f04f701ca5b2
https://dispenst.medium.com/southern-states-brace-for-another-round-of-severe-weather-including-tornadoes-471a7b050226
https://www.posts123.com/post/1450870/cases-rising-again-in-the-us-vaccines-safe-and-effective-for-babies-study-shows-covid-19-updates
https://www.reddit.com/user/anankastic55/comments/mf2mr5/study_shows_covid19_updates/
https://m.mydigoo.com/forums-topicdetail-250476.html
https://paiza.io/projects/ZJmZi54caJXny2NGQnuqYw
https://onlinegdb.com/S1gd3afAEu |
||
anker/Test | 2021-03-04T08:23:21.000Z | []
| [
".gitattributes"
]
| anker | 0 | |||
ankit0208/chatBot | 2021-05-28T17:19:13.000Z | []
| [
".gitattributes"
]
| ankit0208 | 0 | |||
ankitbhatnagar/kmeans | 2021-03-25T22:03:44.000Z | []
| [
".gitattributes"
]
| ankitbhatnagar | 0 | |||
ankur310794/bart-base-keyphrase-generation-kpTimes | 2021-04-09T08:38:24.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ankur310794 | 321 | transformers | |
ankur310794/bart-base-keyphrase-generation-openkp | 2021-04-09T08:43:55.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
]
| text2text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ankur310794 | 86 | transformers | |
ankur310794/bert-large-uncased-nq-small-answer | 2021-05-19T11:44:55.000Z | [
"tf",
"bert",
"question-answering",
"dataset:natural_questions",
"transformers",
"small answer"
]
| question-answering | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| ankur310794 | 41 | transformers | ---
tags:
- small answer
datasets:
- natural_questions
---
# Open Domain Question Answering
A core goal in artificial intelligence is to build systems that can read the web, and then answer complex questions about any topic. These question-answering (QA) systems could have a big impact on the way that we access information. Furthermore, open-domain question answering is a benchmark task in the development of Artificial Intelligence, since understanding text and being able to answer questions about it is something that we generally associate with intelligence.
# The Natural Questions Dataset
To help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.
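# Usage
A minimal usage sketch is shown below. This repo ships TensorFlow weights, so the TF classes are loaded explicitly; the sketch also assumes the checkpoint uses the standard span-extraction QA head (if a custom small-answer head is required, loading would differ):
```python
from transformers import BertTokenizer, TFBertForQuestionAnswering, pipeline

tokenizer = BertTokenizer.from_pretrained("ankur310794/bert-large-uncased-nq-small-answer")
model = TFBertForQuestionAnswering.from_pretrained("ankur310794/bert-large-uncased-nq-small-answer")

qa = pipeline("question-answering", model=model, tokenizer=tokenizer, framework="tf")
print(qa(
    question="What does the Natural Questions corpus contain?",
    context="The Natural Questions corpus contains questions from real users, and it "
            "requires QA systems to read and comprehend an entire Wikipedia article.",
))
```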
|
ankur310794/roberta-base-squad2-nq | 2021-05-20T14:10:46.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"dataset:squad_v2",
"dataset:natural_questions",
"transformers",
"qa"
]
| question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| ankur310794 | 73 | transformers | ---
tags:
- qa
datasets:
- squad_v2
- natural_questions
---
# Roberta-base-Squad2-NQ
## What is SQuAD?
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
## The Natural Questions Dataset
To help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.
## Training
First, we took the base RoBERTa model and trained it on the SQuAD 2.0 dataset for 2 epochs; we then continued training on the NQ small-answer data for 1 epoch.
Total dataset size: 204,416 examples from SQuAD v2 and the NQ small-answer dataset.
## Evaluation
Eval Dataset: Squadv2 dev
```
{'exact': 80.2998399730481,
'f1': 83.4402145786235,
'total': 11873,
'HasAns_exact': 79.08232118758434,
'HasAns_f1': 85.37207619635592,
'HasAns_total': 5928,
'NoAns_exact': 81.5138772077376,
'NoAns_f1': 81.5138772077376,
'NoAns_total': 5945,
'best_exact': 80.2998399730481,
'best_exact_thresh': 0.0,
'best_f1': 83.44021457862335,
'best_f1_thresh': 0.0}
```
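## Usage
A minimal usage sketch with the 🤗 `question-answering` pipeline (the example question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ankur310794/roberta-base-squad2-nq")

context = ("The Natural Questions corpus contains questions from real users, and it "
           "requires QA systems to read and comprehend an entire Wikipedia article.")
print(qa(question="Where do the questions in the Natural Questions corpus come from?",
         context=context))
```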
|
ann101020/le2sbot-hp | 2021-06-04T11:59:14.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
]
| conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| ann101020 | 32 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
annedirkson/ADR_extraction_patient_forum | 2021-06-10T15:37:54.000Z | [
"tf"
]
| [
".gitattributes",
"README.md",
"tf_model.h5",
"tf_model.preproc"
]
| annedirkson | 0 | A ktrain predictor for NER of adverse drug reactions (ADR) in patient forum discussions. Created with transformers > 3.0 and < 4.0 & ktrain 0.19.9. |
||
annedirkson/BERT_embeddings_ADR_normalization | 2021-06-09T10:52:47.000Z | [
"pytorch",
"jax",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"sparse_encoder.pk",
"sparse_weight.pt",
"vocab.txt"
]
| annedirkson | 8 | transformers | ||
annieptba/causallm-annie-hw8 | 2021-04-11T06:13:26.000Z | []
| [
".gitattributes"
]
| annieptba | 0 | |||
annieptba/causallm-annie | 2021-04-11T06:43:25.000Z | []
| [
".gitattributes"
]
| annieptba | 0 | |||
annieptba/electra-maskedlm-annie | 2021-04-11T04:59:56.000Z | []
| [
".gitattributes"
]
| annieptba | 0 | |||
annieptba/maskedlm-annie-hw8 | 2021-04-11T05:33:42.000Z | []
| [
".gitattributes"
]
| annieptba | 0 | |||
annieptba/maskedlm-annie | 2021-04-11T05:11:20.000Z | []
| [
".gitattributes"
]
| annieptba | 0 | |||
anon/apibart | 2021-04-05T22:29:34.000Z | []
| [
".gitattributes"
]
| anon | 0 | |||
anon-submission-mk/bert-base-macedonian-bulgarian-cased | 2021-05-18T23:39:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| anon-submission-mk | 39 | transformers | |
anon-submission-mk/bert-base-macedonian-cased | 2021-05-18T23:40:49.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| anon-submission-mk | 14 | transformers | |
anon-submission-mk/distilbert-base-macedonian-cased | 2021-05-19T11:46:47.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
]
| [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| anon-submission-mk | 18 | transformers | ||
anon-submission-mk/electra-base-macedonian-bulgarian-cased-discriminator | 2020-06-17T21:40:34.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| anon-submission-mk | 13 | transformers | ||
anon-submission-mk/electra-base-macedonian-cased-discriminator | 2020-06-17T21:37:57.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| anon-submission-mk | 12 | transformers | ||
anon-submission-mk/electra-base-macedonian-cased-generator | 2020-09-24T12:01:12.000Z | [
"pytorch",
"electra",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| anon-submission-mk | 12 | transformers | |
anonymous-german-nlp/german-gpt2 | 2021-05-21T13:20:42.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"de",
"transformers",
"license:mit",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| anonymous-german-nlp | 1,336 | transformers | ---
language: de
widget:
- text: "Heute ist sehr schönes Wetter in"
license: mit
---
# German GPT-2 model
**Note**: This model was de-anonymized and now lives at:
https://huggingface.co/dbmdz/german-gpt2
Please use the new model name instead! |
ans/vaccinating-covid-tweets | 2021-06-18T04:12:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweets",
"transformers",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"added_tokens.json",
"bpe.codes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| ans | 162 | transformers | ---
language: en
license: apache-2.0
datasets:
- tweets
widget:
- text: "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
---
# Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet.
# Vaccinating COVID tweets
A fine-tuned model for fact-classification task on English tweets about COVID-19/vaccine.
## Intended uses & limitations
You can classify if the input tweet (or any others statement) about COVID-19/vaccine is true, false or misleading.
Note that since this model was trained with data up to May 2021, the most recent information may not be reflected.
#### How to use
You can use this model directly on this page or using `transformers` in python.
- Load pipeline and implement with input sequence
```python
from transformers import pipeline
pipe = pipeline("sentiment-analysis", model = "ans/vaccinating-covid-tweets")
seq = "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
pipe(seq)
```
- Expected output
```python
[
{
"label": "false",
"score": 0.07972867041826248
},
{
"label": "misleading",
"score": 0.019911376759409904
},
{
"label": "true",
"score": 0.9003599882125854
}
]
```
- True examples
```python
"By the end of 2020, several vaccines had become available for use in different parts of the world."
"Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic."
"RNA vaccines were the first vaccines for SARS-CoV-2 to be produced and represent an entirely new vaccine approach."
```
- False examples
```python
"COVID-19 vaccine caused new strain in UK."
```
#### Limitations and bias
Because the model is trained to classify statements conservatively, its predictions may be biased toward the false/misleading labels.
## Training data & Procedure
#### Pre-trained baseline model
- Pre-trained model: [BERTweet](https://github.com/VinAIResearch/BERTweet)
- trained based on the RoBERTa pre-training procedure
- 850M General English Tweets (Jan 2012 to Aug 2019)
- 23M COVID-19 English Tweets
- Size of the model: >134M parameters
- Further training
- Pre-training with recent COVID-19/vaccine tweets and fine-tuning for fact classification
#### 1) Pre-training language model
- The model was pre-trained on COVID-19/vaccine-related tweets using a masked language modeling (MLM) objective, starting from BERTweet (a minimal sketch follows the dataset list below)
- Following datasets on English tweets were used:
- Tweets with trending #CovidVaccine hashtag, 207,000 tweets uploaded across Aug 2020 to Apr 2021 ([kaggle](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets))
- Tweets about all COVID-19 vaccines, 78,000 tweets uploaded across Dec 2020 to May 2021 ([kaggle](https://www.kaggle.com/gpreda/all-covid19-vaccines-tweets))
- COVID-19 Twitter chatter dataset, 590,000 tweets uploaded across Mar 2021 to May 2021 ([github](https://github.com/thepanacealab/covid19_twitter))
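A minimal sketch of this pre-training step is shown below. The hyperparameters, input file name, and sequence length are illustrative assumptions, not the exact setup used:
```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModelForMaskedLM.from_pretrained("vinai/bertweet-base")

# Hypothetical file with one tweet per line, built from the corpora listed above.
dataset = load_dataset("text", data_files={"train": "covid_vaccine_tweets.txt"})["train"]
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True, max_length=128),
                      batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bertweet-covid-mlm",
                         num_train_epochs=1, per_device_train_batch_size=32)
Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```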
#### 2) Fine-tuning for fact classification
- The pre-trained model from step 1 (based on [BERTweet](https://github.com/VinAIResearch/BERTweet)) was fine-tuned for the fact-classification task on COVID-19/vaccine statements.
- Training data: 14,000 fact-checked statements from Poynter and Snopes (Jan 2020 to May 2021), collected with Selenium
- The original labels were grouped into 3 categories:
- False: false, no evidence, manipulated, fake, not true, unproven, unverified
- Misleading: misleading, exaggerated, out of context, needs context
- True: true, correct
## Evaluation results
| Training loss | Validation loss | Training accuracy | Validation accuracy |
| --- | --- | --- | --- |
| 0.1062 | 0.1006 | 96.3% | 94.5% |
# Contributors
- This model is part of the final team project from the MLDL for DS class at SNU
- Team BIBI - Vaccinating COVID-NineTweets
- Team members: Ahn, Hyunju; An, Jiyong; An, Seungchan; Jeong, Seokho; Kim, Jungmin; Kim, Sangbeom
- Advisor: Prof. Wen-Syan Li
<a href="https://gsds.snu.ac.kr/"><img src="https://gsds.snu.ac.kr/wp-content/uploads/sites/50/2021/04/GSDS_logo2-e1619068952717.png" width="200" height="80"></a> |
anthonymirand/haha_2019_adaptation_task | 2021-05-30T21:16:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
]
| anthonymirand | 44 | transformers | |
anthonymirand/haha_2019_primary_task | 2021-05-18T23:42:53.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin"
]
| anthonymirand | 36 | transformers | |
antoiloui/belgpt2 | 2021-05-21T13:21:55.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"fr",
"transformers",
"license:mit",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| antoiloui | 154,676 | transformers | ---
language:
- fr
license:
- mit
widget:
- text: "Hier, Elon Musk a"
- text: "Pourquoi a-t-il"
- text: "Tout à coup, elle"
---
# Belgian GPT-2 🇧🇪
**A GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb).**
## Usage
You can use BelGPT-2 with [🤗 transformers](https://github.com/huggingface/transformers):
```python
import random
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("antoiloui/belgpt2")
tokenizer = GPT2Tokenizer.from_pretrained("antoiloui/belgpt2")
# Generate a sample of text
model.eval()
output = model.generate(
bos_token_id=random.randint(1,50000),
do_sample=True,
top_k=50,
max_length=100,
top_p=0.95,
num_return_sequences=1
)
# Decode it
decoded_output = []
for sample in output:
decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))
print(decoded_output)
```
## Data
Below is the list of all French corpora used to pre-train the model:
| Dataset | `$corpus_name` | Raw size | Cleaned size |
| :------| :--- | :---: | :---: |
| CommonCrawl | `common_crawl` | 200.2 GB | 40.4 GB |
| NewsCrawl | `news_crawl` | 10.4 GB | 9.8 GB |
| Wikipedia | `wiki` | 19.4 GB | 4.1 GB |
| Wikisource | `wikisource` | 4.6 GB | 2.3 GB |
| Project Gutenberg | `gutenberg` | 1.3 GB | 1.1 GB |
| EuroParl | `europarl` | 289.9 MB | 278.7 MB |
| NewsCommentary | `news_commentary` | 61.4 MB | 58.1 MB |
| **Total** | | **236.3 GB** | **57.9 GB** |
## Documentation
Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/antoiloui/belgpt2/blob/master/docs/index.md).
## Citation
For attribution in academic contexts, please cite this work as:
```
@misc{louis2020belgpt2,
author = {Louis, Antoine},
title = {{BelGPT-2: a GPT-2 model pre-trained on French corpora.}},
year = {2020},
howpublished = {\url{https://github.com/antoiloui/belgpt2}},
}
``` |
antoiloui/netbert | 2021-05-18T23:44:04.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"en",
"transformers",
"license:mit",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
]
| antoiloui | 28 | transformers | ---
language:
- en
license:
- mit
widget:
- text: "The nodes of a computer network may include [MASK]."
---
# NetBERT 📶
**A BERT-base model pre-trained on a huge corpus of computer networking text (~23Gb)**.
## Usage
You can use NetBERT with [🤗 transformers](https://github.com/huggingface/transformers):
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM
# Load pretrained model and tokenizer
model = BertForMaskedLM.from_pretrained("antoiloui/netbert")
tokenizer = BertTokenizer.from_pretrained("antoiloui/netbert")
```
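For masked-token prediction (as in the widget above), a minimal usage sketch with the `fill-mask` pipeline:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="antoiloui/netbert")
for pred in fill_mask("The nodes of a computer network may include [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```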
## Documentation
Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/antoiloui/netbert/blob/master/docs/index.md).
## Citation
For attribution in academic contexts, please cite this work as:
```
@mastersthesis{louis2020netbert,
title={NetBERT: A Pre-trained Language Representation Model for Computer Networking},
author={Louis, Antoine},
year={2020},
school={University of Liege}
}
``` |
anton-l/megatron-11b | 2021-02-20T13:39:44.000Z | [
"pytorch",
"megatron",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| anton-l | 177 | transformers | |
anton-l/wav2vec2-base-960h | 2021-05-12T21:01:28.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| anton-l | 14 | transformers | ||
anton-l/wav2vec2-large-xlsr-53-chuvash | 2021-03-28T23:23:11.000Z | [
"pytorch",
"wav2vec2",
"cv",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| anton-l | 15 | transformers | ---
language: cv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Chuvash XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cv
type: common_voice
args: cv
metrics:
- name: Test WER
type: wer
value: 40.01
---
# Wav2Vec2-Large-XLSR-53-Chuvash
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chuvash using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Chuvash test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/cv.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/cv/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/cv/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 40.01 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
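Both splits can be loaded in one call, e.g. (a minimal sketch):
```python
from datasets import load_dataset

# Combine the train and validation splits, as used for fine-tuning.
train_dataset = load_dataset("common_voice", "cv", split="train+validation")
```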
The script used for training can be found [here](github.com)
|
anton-l/wav2vec2-large-xlsr-53-estonian | 2021-03-28T23:37:45.000Z | [
"pytorch",
"wav2vec2",
"et",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| anton-l | 16 | transformers | ---
language: et
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Estonian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice et
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 30.74
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/et.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-estonian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/et/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/et/clips/"
def clean_sentence(sent):
sent = sent.lower()
# normalize apostrophes
sent = sent.replace("’", "'")
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() or ch == "'" else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 30.74 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](github.com)
|
anton-l/wav2vec2-large-xlsr-53-hungarian | 2021-03-29T06:45:42.000Z | [
"pytorch",
"wav2vec2",
"hu",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| anton-l | 8 | transformers | ---
language: hu
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hungarian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hu
type: common_voice
args: hu
metrics:
- name: Test WER
type: wer
value: 42.26
---
# Wav2Vec2-Large-XLSR-53-Hungarian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hungarian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/hu.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-hungarian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/hu/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/hu/clips/"
def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()
    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 42.26 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
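The preprocessing used to build the tokenizer is not shown in the card; below is a minimal sketch of how a CTC character vocabulary is typically derived from the cleaned Common Voice transcripts. The file paths, the reuse of `clean_sentence` from the evaluation snippet above, and the choice of special tokens are assumptions, not the documented procedure for this checkpoint.
```python
import json
import pandas as pd

# Build a character-level vocabulary from the cleaned transcripts.
# Paths and special tokens are assumptions; clean_sentence() is the
# helper defined in the evaluation snippet above.
train = pd.read_csv("cv-corpus-6.1-2020-12-11/hu/train.tsv", sep="\t")
dev = pd.read_csv("cv-corpus-6.1-2020-12-11/hu/dev.tsv", sep="\t")
sentences = pd.concat([train["sentence"], dev["sentence"]]).map(clean_sentence)

vocab = sorted(set("".join(sentences)))
vocab_dict = {ch: i for i, ch in enumerate(vocab)}
vocab_dict["|"] = vocab_dict.pop(" ")  # CTC convention: "|" marks word boundaries
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)

with open("vocab.json", "w") as f:
    json.dump(vocab_dict, f, ensure_ascii=False)
```
The resulting `vocab.json` is what a `Wav2Vec2CTCTokenizer`, and hence the `Wav2Vec2Processor`, would load as its character vocabulary.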
|
anton-l/wav2vec2-large-xlsr-53-kyrgyz | 2021-03-28T21:32:21.000Z | [
"pytorch",
"wav2vec2",
"ky",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| anton-l | 35 | transformers | ---
language: ky
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Kyrgyz XLSR Wav2Vec2 Large 53 by Anton Lozhkov
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ky
      type: common_voice
      args: ky
    metrics:
    - name: Test WER
      type: wer
      value: 31.88
---
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ky", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ky.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ky/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ky/clips/"
def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()
    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 31.88 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
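As with the other checkpoints, the training code is not included in the card; the sketch below shows the kind of CTC padding collator that the common XLSR fine-tuning recipe relies on (and that the training sketch for the Estonian model above assumes). It is an assumption based on that recipe, not the collator actually used for this model.
```python
from dataclasses import dataclass
from typing import Dict, List, Union

import torch
from transformers import Wav2Vec2Processor


@dataclass
class DataCollatorCTCWithPadding:
    """Pads audio inputs and label sequences to the longest example in
    the batch, masking padded label positions with -100 so the CTC loss
    ignores them."""

    processor: Wav2Vec2Processor

    def __call__(
        self, features: List[Dict[str, Union[List[int], torch.Tensor]]]
    ) -> Dict[str, torch.Tensor]:
        # Audio inputs and text labels have different lengths, so each
        # is padded separately.
        input_features = [{"input_values": f["input_values"]} for f in features]
        label_features = [{"input_ids": f["labels"]} for f in features]

        batch = self.processor.pad(input_features, padding=True, return_tensors="pt")
        with self.processor.as_target_processor():
            labels_batch = self.processor.pad(label_features, padding=True, return_tensors="pt")

        # Replace label padding with -100 so it is ignored by the loss
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        batch["labels"] = labels
        return batch
```
Masking padded labels with -100 matters because the CTC loss in `Wav2Vec2ForCTC` treats -100 as an ignore index; leaving real pad token ids in place would let padding contribute to the loss.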
|