modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
valhalla/distilt5-qa-qg-hl-6-4 | 2020-10-26T18:28:54.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"transformers",
"question-generation",
"distilt5",
"distilt5-qg",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | valhalla | 22 | transformers | ---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: "MIT"
---
## DistilT5 for question-generation
This is a distilled version of the [t5-small-qa-qg-hl](https://huggingface.co/valhalla/t5-small-qa-qg-hl) model, trained for question answering and answer-aware question generation tasks.
The model is distilled using the **No Teacher Distillation** method proposed by Hugging Face for [DistilBART](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-small-qa-qg-hl` and fine-tune further on the same data; a rough sketch of the layer-copying step follows the table below. The following table lists the distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
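As a rough illustration of the layer-copying step, here is a minimal sketch; the exact decoder layers kept for these checkpoints are an assumption, not documented here.
```python3
# Minimal sketch of the "copy alternating layers" step behind No Teacher
# Distillation. The kept indices below are illustrative assumptions, not
# the exact layers used for these checkpoints.
import copy
import torch
from transformers import T5ForConditionalGeneration

teacher = T5ForConditionalGeneration.from_pretrained("valhalla/t5-small-qa-qg-hl")

kept = [0, 2, 4, 5]  # hypothetical choice: 4 of the 6 decoder layers
student = copy.deepcopy(teacher)
student.decoder.block = torch.nn.ModuleList(
    copy.deepcopy(teacher.decoder.block[i]) for i in kept
)
student.config.num_decoder_layers = len(kept)
# The 6-4 student is then fine-tuned on the same QA/QG data as the teacher.
```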
You can play with the model using the inference API. Here's how to use it.
For QG:
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything.`
For QA:
`question: What is 42 context: 42 is the answer to life, the universe and everything.`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg", model="valhalla/distilt5-qa-qg-hl-6-4")
# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life?'}]
# for qa pass a dict with "question" and "context"
nlp({
"question": "What is 42 ?",
"context": "42 is the answer to life, the universe and everything."
})
# => 'the answer to life, the universe and everything'
``` |
valhalla/distilt5-qg-hl-12-6 | 2020-10-26T18:29:34.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"transformers",
"question-generation",
"distilt5",
"distilt5-qg",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | valhalla | 76 | transformers | ---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Although <hl> practicality <hl> beats purity </s>"
license: "MIT"
---
## DistilT5 for question-generation
This is a distilled version of the [t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) model, trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
The model is distilled using the **No Teacher Distillation** method proposed by Hugging Face for [DistilBART](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-base-qg-hl` and fine-tune further on the same data. The following table lists the distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
You can play with the model using the inference API; just highlight the answer spans with `<hl>` tokens. For example:
`<hl> 42 <hl> is the answer to life, the universe and everything.`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/distilt5-qg-hl-12-6")
nlp("42 is the answer to life, universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
``` |
valhalla/distilt5-qg-hl-6-4 | 2020-10-26T18:30:11.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"transformers",
"question-generation",
"distilt5",
"distilt5-qg",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | valhalla | 21 | transformers | ---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Although <hl> practicality <hl> beats purity </s>"
license: "MIT"
---
## DistilT5 for question-generation
This is a distilled version of the [t5-small-qa-qg-hl](https://huggingface.co/valhalla/t5-small-qa-qg-hl) model, trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
The model is distilled using the **No Teacher Distillation** method proposed by Hugging Face for [DistilBART](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-small-qa-qg-hl` and fine-tune further on the same data. The following table lists the distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
You can play with the model using the inference API; just highlight the answer spans with `<hl>` tokens. For example:
`<hl> 42 <hl> is the answer to life, the universe and everything.`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/distilt5-qg-hl-6-4")
nlp("42 is the answer to life, universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life?'}]
``` |
valhalla/electra-base-discriminator-finetuned_squadv1 | 2020-12-11T22:03:34.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | valhalla | 226 | transformers | # ELECTRA-BASE-DISCRIMINATOR finetuned on SQuADv1
This is the electra-base-discriminator model fine-tuned on the SQuAD v1 dataset for the question answering task.
## Model details
As mentioned in the original paper: ELECTRA is a new method for self-supervised language representation learning.
It can be used to pre-train transformer networks using relatively little compute.
ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network,
similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU.
At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
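As a quick illustration of that discriminator objective, here is a sketch using the generic `google/electra-base-discriminator` checkpoint (this fine-tuned model carries a QA head rather than the pre-training head, so we load the base discriminator instead):
```python3
# Sketch of ELECTRA's replaced-token detection objective. Positive logits
# mark tokens the discriminator believes were replaced by the generator.
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")

fake_sentence = "42 is the answer to life the universe and pancakes"
inputs = tokenizer(fake_sentence, return_tensors="pt")
logits = discriminator(**inputs).logits

print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))
print((logits[0] > 0).long().tolist())  # 1 = predicted "replaced"
```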
| Param | #Value |
|---------------------|--------|
| layers | 12 |
| hidden size | 768 |
| num attention heads | 12 |
| on disk size | 436MB |
## Model training
This model was trained on a Google Colab V100 GPU.
You can find the fine-tuning colab here:
[](https://colab.research.google.com/drive/11yo-LaFsgggwmDSy2P8zD3tzf5cCb-DU?usp=sharing).
## Results
The results are slightly better than those reported in the paper, where the authors mention that electra-base achieves 84.5 EM and 90.8 F1.
| Metric | #Value |
|--------|--------|
| EM | 85.0520|
| F1 | 91.6050|
## Model in Action 🚀
```python3
from transformers import pipeline
nlp = pipeline('question-answering', model='valhalla/electra-base-discriminator-finetuned_squadv1')
nlp({
'question': 'What is the answer to everything ?',
'context': '42 is the answer to life the universe and everything'
})
# => {'answer': '42', 'end': 2, 'score': 0.981274963050339, 'start': 0}
```
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
valhalla/gpt-neo-random-tiny | 2021-04-07T16:38:40.000Z | [
"pytorch",
"gpt_neo",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | valhalla | 5,256 | transformers | **This model is uploaded for testing purposes. It is a randomly initialized model, not trained on anything.** |
|
valhalla/gpt-norwegian | 2021-06-14T09:18:59.000Z | [] | [
".gitattributes"
] | valhalla | 0 | |||
valhalla/gpt2-norwegian-test | 2021-06-08T11:20:16.000Z | [
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | valhalla | 9 | transformers | |
valhalla/gpt2-norwegian | 2021-06-14T09:22:28.000Z | [
"jax",
"tensorboard",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"events.out.tfevents.1623422138.t1v-n-605115f1-w-0.3840.3.v2",
"events.out.tfevents.1623422271.t1v-n-605115f1-w-0.5259.3.v2",
"events.out.tfevents.1623422427.t1v-n-605115f1-w-0.6700.3.v2",
"events.out.tfevents.1623425594.t1v-n-605115f1-w-0.8119.3.v2",
"events.out.tfevents.1623440802.t1v-n-605115f1-w-0.10569.3.v2",
"flax_model.msgpack",
"merges.txt",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | valhalla | 0 | transformers | |
valhalla/longformer-base-4096-finetuned-squadv1 | 2021-02-10T16:35:40.000Z | [
"pytorch",
"tf",
"rust",
"longformer",
"question-answering",
"dataset:squad_v1",
"arxiv:2004.05150",
"transformers",
"license:mit"
] | question-answering | [
".gitattributes",
"LICENSE",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"rust_model.ot",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | valhalla | 5,619 | transformers | ---
datasets:
- squad_v1
license: mit
---
# LONGFORMER-BASE-4096 fine-tuned on SQuAD v1
This is the longformer-base-4096 model fine-tuned on the SQuAD v1 dataset for the question answering task.
The [Longformer](https://arxiv.org/abs/2004.05150) model was created by Iz Beltagy, Matthew E. Peters, and Arman Cohan from AllenAI. As the paper explains:
> `Longformer` is a BERT-like model for long documents.
The pre-trained model can handle sequences with up to 4096 tokens.
## Model Training
This model was trained on google colab v100 GPU. You can find the fine-tuning colab here [](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing).
A few things to keep in mind while training Longformer for the QA task:
by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details on this, please refer to the paper. The `LongformerForQuestionAnswering` model automatically does that for you. To allow it to do that:
1. The input sequence must have three sep tokens, i.e. the sequence should be encoded like this:
`<s> question</s></s> context</s>`. If you encode the question and context as an input pair, the tokenizer already takes care of that, so you shouldn't worry about it.
2. `input_ids` should always be a batch of examples.
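As a quick sanity check of point 1, here's a sketch using the standard tokenizer API; encoding a question/context pair yields the expected three sep tokens:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
encoding = tokenizer("What has Huggingface done ?", "Huggingface has democratized NLP.")
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# => ['<s>', ..., '</s>', '</s>', ..., '</s>']  (three sep tokens in total)
```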
## Results
|Metric | # Value |
|-------------|---------|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
## Model in Action 🚀
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_scores, end_scores = outputs.start_logits, outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
The `LongformerForQuestionAnswering` model isn't yet supported in `pipeline`. I'll update this card once support has been added.
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
valhalla/m2m100_tiny_random | 2021-03-05T09:03:18.000Z | [
"pytorch",
"m2m_100",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | valhalla | 25 | transformers | |
valhalla/s2t_covost2_en_de_small | 2021-02-24T07:22:30.000Z | [
"pytorch",
"speech_to_text_transformer",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | valhalla | 6 | transformers | |
valhalla/s2t_librispeech_large | 2021-02-26T14:25:12.000Z | [
"pytorch",
"speech_to_text_transformer",
"seq2seq",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"automatic-speech-recognition",
"license:apache-2.0",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json",
".ipynb_checkpoints/config-checkpoint.json"
] | valhalla | 30 | transformers | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_large").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_large", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.3 | 7.5 | |
valhalla/s2t_librispeech_medium | 2021-02-26T14:24:39.000Z | [
"pytorch",
"speech_to_text_transformer",
"seq2seq",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"automatic-speech-recognition",
"license:apache-2.0",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json",
".ipynb_checkpoints/config-checkpoint.json"
] | valhalla | 8 | transformers | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_medium").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_medium", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.5 | 7.8 | |
valhalla/s2t_librispeech_small | 2021-02-26T14:24:09.000Z | [
"pytorch",
"speech_to_text_transformer",
"seq2seq",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"automatic-speech-recognition",
"license:apache-2.0",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | valhalla | 7 | transformers | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_small").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_small", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 4.3 | 9.0 | |
valhalla/s2t_mustc_en_fr_small | 2021-02-26T14:34:11.000Z | [
"pytorch",
"speech_to_text_transformer",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | valhalla | 7 | transformers | |
valhalla/s2t_mustc_multilinguial_medium | 2021-03-03T05:12:34.000Z | [
"pytorch",
"speech_to_text_transformer",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | valhalla | 9 | transformers | |
valhalla/t5-base-cnn-fp6-test | 2021-01-08T16:02:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"hypothesis.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"val_results.json"
] | valhalla | 8 | transformers | This model is uploaded for testing purposes.
|
valhalla/t5-base-e2e-qg | 2020-12-11T22:03:41.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"question-generation",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | valhalla | 969 | transformers | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Python is a programming language. It is developed by Guido Van Rossum and released in 1991. </s>"
license: mit
---
## T5 for question-generation
This is the [t5-base](https://arxiv.org/abs/1910.10683) model trained for the end-to-end question generation task. Simply input the text and the model will generate multiple questions.
You can play with the model using the inference API; just put in the text and see the results!
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \
and first released in 1991, Python's design philosophy emphasizes code \
readability with its notable use of significant whitespace."
nlp = pipeline("e2e-qg", model="valhalla/t5-base-e2e-qg")
nlp(text)
# => ['Who created Python?',
#     'When was Python first released?',
#     "What is Python's design philosophy?"]
``` |
valhalla/t5-base-qa-qg-hl | 2020-12-11T22:03:44.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"question-generation",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | valhalla | 2,679 | transformers | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: mit
---
## T5 for multi-task QA and QG
This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation.
For question generation, the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and prefixed with 'generate question: '. For QA, the input is processed like this: `question: question_text context: context_text </s>`.
You can play with the model using the inference API. Here's how to use it.
For QG:
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For QA:
`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`
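If you'd rather not clone the repo, here is a minimal sketch using `transformers` directly; it assumes only the standard T5 generate API, while the repo's pipeline adds answer extraction and other pre/post-processing on top of this:
```python3
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("valhalla/t5-base-qa-qg-hl")
model = T5ForConditionalGeneration.from_pretrained("valhalla/t5-base-qa-qg-hl")

def run(input_text):
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# QG: highlight the answer span and use the "generate question: " prefix
print(run("generate question: <hl> 42 <hl> is the answer to life, the universe and everything."))
# QA: use the "question: ... context: ..." format
print(run("question: What is 42 context: 42 is the answer to life, the universe and everything."))
```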
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg", model="valhalla/t5-base-qa-qg-hl")
# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
# for qa pass a dict with "question" and "context"
nlp({
"question": "What is 42 ?",
"context": "42 is the answer to life, the universe and everything."
})
# => 'the answer to life, the universe and everything'
``` |
valhalla/t5-base-qg-hl | 2020-12-11T22:03:48.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"question-generation",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | valhalla | 1,940 | transformers | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Although <hl> practicality <hl> beats purity </s>"
license: mit
---
## T5 for question-generation
This is the [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
You can play with the model using the inference API; just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example:
`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/t5-base-qg-hl")
nlp("42 is the answer to life, universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
``` |
valhalla/t5-base-squad | 2020-12-11T22:03:51.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | valhalla | 127 | transformers | # T5 for question-answering
This is the T5-base model fine-tuned on SQuAD 1.1 for QA using the text-to-text approach.
## Model training
This model was trained on a Colab TPU with 35 GB RAM for 4 epochs.
## Results
| Metric | #Value |
|-------------|---------|
| Exact Match | 81.5610 |
| F1 | 89.9601 |
## Model in Action 🚀
```python3
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-squad")
model = AutoModelWithLMHead.from_pretrained("valhalla/t5-base-squad")
def get_answer(question, context):
input_text = "question: %s context: %s </s>" % (question, context)
features = tokenizer([input_text], return_tensors='pt')
out = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'])
    return tokenizer.decode(out[0], skip_special_tokens=True)
context = "In Norse mythology, Valhalla is a majestic, enormous hall located in Asgard, ruled over by the god Odin."
question = "What is Valhalla ?"
get_answer(question, context)
# output: 'a majestic, enormous hall located in Asgard, ruled over by the god Odin'
```
Play with this model [](https://colab.research.google.com/drive/1a5xpJiUjZybfU9Mi-aDkOp116PZ9-wni?usp=sharing)
> Created by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
valhalla/t5-small-e2e-qg | 2020-12-11T22:03:54.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"question-generation",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | valhalla | 1,842 | transformers | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Python is developed by Guido Van Rossum and released in 1991. </s>"
license: mit
---
## T5 for question-generation
This is the [t5-small](https://arxiv.org/abs/1910.10683) model trained for the end-to-end question generation task. Simply input the text and the model will generate multiple questions.
You can play with the model using the inference API; just put in the text and see the results!
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \
and first released in 1991, Python's design philosophy emphasizes code \
readability with its notable use of significant whitespace."
nlp = pipeline("e2e-qg")
nlp(text)
# => ['Who created Python?',
#     'When was Python first released?',
#     "What is Python's design philosophy?"]
``` |
valhalla/t5-small-qa-qg-hl | 2020-12-11T22:03:58.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"question-generation",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | valhalla | 8,257 | transformers | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: mit
---
## T5 for multi-task QA and QG
This is a multi-task [t5-small](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation.
For question generation, the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and prefixed with 'generate question: '. For QA, the input is processed like this: `question: question_text context: context_text </s>`.
You can play with the model using the inference API. Here's how to use it.
For QG:
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For QA:
`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg")
# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
# for qa pass a dict with "question" and "context"
nlp({
"question": "What is 42 ?",
"context": "42 is the answer to life, the universe and everything."
})
# => 'the answer to life, the universe and everything'
``` |
valhalla/t5-small-qg-hl | 2020-12-11T22:04:01.000Z | [
"pytorch",
"t5",
"seq2seq",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"question-generation",
"license:mit",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_args.bin"
] | valhalla | 3,874 | transformers | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Simple is better than <hl> complex <hl>. </s>"
license: mit
---
## T5 for question-generation
This is the [t5-small](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
You can play with the model using the inference API; just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example:
`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation")
nlp("42 is the answer to life, universe and everything.")
# => [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
``` |
valhalla/t5-small-qg-prepend | 2020-07-06T17:20:20.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | valhalla | 378 | transformers | |
valhalla/wav2vec-test | 2021-03-26T14:32:25.000Z | [] | [
".gitattributes",
"README.md"
] | valhalla | 0 | |||
vanessahahn/bert-fr-de-en-ar-twitter | 2021-06-08T19:17:23.000Z | [
"pytorch"
] | [
".gitattributes",
"pytorch_model.bin"
] | vanessahahn | 0 | |||
vanhao195/fvi-bert | 2021-05-29T06:00:41.000Z | [] | [
".gitattributes"
] | vanhao195 | 0 | |||
varunravi/ds-ua-2020-bert-trained-with-squad | 2020-12-12T09:03:07.000Z | [] | [
".gitattributes"
] | varunravi | 0 | |||
varunravi/ds-ua-2020-bert-with-squad | 2020-12-12T09:38:15.000Z | [] | [
".gitattributes"
] | varunravi | 0 | |||
vasilis/wav2vec2-large-xlsr-53-estonian | 2021-04-15T09:21:31.000Z | [
"pytorch",
"wav2vec2",
"et",
"dataset:common_voice",
"dataset:NST Estonian ASR Database",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vasilis | 15 | transformers | ---
language: et
datasets:
- common_voice
- NST Estonian ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 - Estonian by Vasilis
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice et
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 30.658320
- name: Test CER
type: cer
value: 5.261490
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "et", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-Estonian")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 30.658320 %
## Training
Common Voice `train` and `validation` sets were used for fine-tuning
for 20,000 steps (approx. 116 epochs). Both the `feature extractor` (`Wav2Vec2FeatureExtractor`) and the
`feature projection` (`Wav2Vec2FeatureProjection`) layers were frozen; only the `encoder` (`Wav2Vec2EncoderStableLayerNorm`) was fine-tuned.
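A minimal sketch of that freezing setup, assuming the `model.wav2vec2.feature_extractor` and `model.wav2vec2.feature_projection` attribute names from `transformers` (the exact training script is not part of this card):
```python
# Freeze the feature extractor and feature projection; the encoder
# (and the CTC head) stay trainable.
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53")

for module in (model.wav2vec2.feature_extractor, model.wav2vec2.feature_projection):
    for param in module.parameters():
        param.requires_grad = False
```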
|
vasilis/wav2vec2-large-xlsr-53-finnish | 2021-03-29T02:30:18.000Z | [
"pytorch",
"wav2vec2",
"fi",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vasilis | 13 | transformers | ---
language: fi
datasets:
- common_voice
- CSS10 Finnish: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - finnish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 38.335242
- name: Test CER
type: cer
value: 6.552408
---
# Wav2Vec2-Large-XLSR-53-finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and the [CSS10 Finnish: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"
replacements = {"…": "", "–": ''}
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for key, value in replacements.items():
batch["sentence"] = batch["sentence"].replace(key, value)
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 38.335242 %
## Training
The Common Voice train dataset was used for training, along with all of `CSS10 Finnish` using the normalized transcripts.
After 20,000 steps the model was fine-tuned on the Common Voice train and validation sets for 2,000 more steps.
|
vasilis/wav2vec2-large-xlsr-53-greek | 2021-03-26T23:51:48.000Z | [
"pytorch",
"wav2vec2",
"el",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vasilis | 16 | transformers | ---
language: el
datasets:
- common_voice
- CSS10 Greek: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - greek
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 18.996669
- name: Test CER
type: cer
value: 5.781874
---
# Wav2Vec2-Large-XLSR-53-greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and the [CSS10 Greek: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/greek-single-speaker-speech-dataset).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
normalize_greek_letters = {"ς": "σ"}
# normalize_greek_letters = {"ά": "α", "έ": "ε", "ί": "ι", 'ϊ': "ι", "ύ": "υ", "ς": "σ", "ΐ": "ι", 'ϋ': "υ", "ή": "η", "ώ": "ω", 'ό': "ο"}
remove_chars_greek = {"a": "", "h": "", "n": "", "g": "", "o": "", "v": "", "e": "", "r": "", "t": "", "«": "", "»": "", "m": "", '́': '', "·": "", "’": "", '´': ""}
replacements = {**normalize_greek_letters, **remove_chars_greek}
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for key, value in replacements.items():
batch["sentence"] = batch["sentence"].replace(key, value)
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 18.996669 %
## Training
The Common Voice train dataset was used for training, along with all of `CSS10 Greek` using the normalized transcripts.
During text preprocessing the letter `ς` is normalized to `σ`; the two letters sound the same, with `ς` used only as the final character of a word, so the change can easily be mapped back to proper dictation. I tried removing all accents from letters as well, which improved `WER` significantly: the model was easily reaching `17%` WER without having converged. However, the text post-processing needed to fix the transcriptions would be more complicated. A language model should fix things easily, though. Another thing that could be tried would be to map all of `ι`, `η`, etc. to a single character, since they all sound the same; similarly for `ο` and `ω`. This should help the acoustic model significantly, since all these characters map to the same sound, but further text normalization would be needed.
|
vasilis/wav2vec2-large-xlsr-53-swedish | 2021-04-09T12:23:23.000Z | [
"pytorch",
"wav2vec2",
"sv-SE",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vasilis | 10 | transformers | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - Swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 14.695793
- name: Test CER
type: cer
value: 5.264666
---
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and parts of the [NST Swedish ASR Database](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-16/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-swedish")
model.to("cuda")
chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"  # special characters removed from the transcripts
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluation: run batched inference and decode predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 14.695793 %
## Training
As a first step, the Common Voice train dataset was used together with parts of NST,
as can be found [here](https://github.com/se-asr/nst/tree/master).
Part of NST was filtered out using this mask
```python
import numpy as np

# keep utterances of 6 to 19 words whose average word length exceeds 5 characters
mask = [(5 < len(x.split()) < 20) and np.average([len(entry) for entry in x.split()]) > 5 for x in dataset['transcript'].tolist()]
```
After training like this for 20,000 steps, the model was fine-tuned on all of the NST data using the mask
```python
# keep utterances of 2 to 24 words whose average word length exceeds 3 characters
mask = [(1 < len(x.split()) < 25) and np.average([len(entry) for entry in x.split()]) > 3 for x in dataset['transcript'].tolist()]
```
and on all of Common Voice for approximately 100,000 more steps (roughly 16 epochs).
|
vasudevgupta/abnet-iwslt14-de-en | 2021-02-03T07:18:19.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | vasudevgupta | 12 | transformers | ||
vasudevgupta/bigbird-base-trivia-itc | 2021-04-30T07:35:44.000Z | [
"pytorch",
"big_bird",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | vasudevgupta | 17 | transformers | Moved here: https://huggingface.co/google/bigbird-base-trivia-itc |
vasudevgupta/bigbird-pegasus-large-arxiv | 2021-05-04T11:12:15.000Z | [
"pytorch",
"bigbird_pegasus",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | vasudevgupta | 21 | transformers | Moved here: https://huggingface.co/google/bigbird-pegasus-large-arxiv |
vasudevgupta/bigbird-pegasus-large-bigpatent | 2021-05-04T11:12:37.000Z | [
"pytorch",
"bigbird_pegasus",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | vasudevgupta | 34 | transformers | Moved here: https://huggingface.co/google/bigbird-pegasus-large-bigpatent |
vasudevgupta/bigbird-pegasus-large-pubmed | 2021-05-04T11:12:55.000Z | [
"pytorch",
"bigbird_pegasus",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | vasudevgupta | 47 | transformers | Moved here: https://huggingface.co/google/bigbird-pegasus-large-pubmed |
vasudevgupta/bigbird-roberta-base | 2021-04-30T07:36:20.000Z | [
"pytorch",
"big_bird",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | vasudevgupta | 70 | transformers | Moved here: https://huggingface.co/google/bigbird-roberta-base |
vasudevgupta/bigbird-roberta-large | 2021-04-30T07:36:35.000Z | [
"pytorch",
"big_bird",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | vasudevgupta | 15 | transformers | Moved here: https://huggingface.co/google/bigbird-roberta-large |
vasudevgupta/bigbird-roberta-natural-questions | 2021-05-12T03:20:58.000Z | [
"pytorch",
"big_bird",
"question-answering",
"en",
"dataset:natural_questions",
"transformers",
"license:apache-2.0"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
] | vasudevgupta | 654 | transformers | ---
language: en
license: apache-2.0
datasets: natural_questions
widget:
- text: "Who added BigBird to HuggingFace Transformers?"
context: "BigBird Pegasus just landed! Thanks to Vasudev Gupta, BigBird Pegasus from Google AI is merged into HuggingFace Transformers. Check it out today!!!"
---
This checkpoint is obtained after training `BigBirdForQuestionAnswering` (with an extra pooler head) on the [`natural_questions`](https://huggingface.co/datasets/natural_questions) dataset for ~2 weeks on 2 K80 GPUs. The training script can be found here: https://github.com/vasudevgupta7/bigbird
| Exact Match | 47.44 |
|-------------|-------|
**Use this model just like any other model from 🤗Transformers**
```python
from transformers import BigBirdForQuestionAnswering, BigBirdTokenizer
model_id = "vasudevgupta/bigbird-roberta-natural-questions"
model = BigBirdForQuestionAnswering.from_pretrained(model_id)
tokenizer = BigBirdTokenizer.from_pretrained(model_id)
```
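A minimal extractive-QA inference sketch, reusing `model` and `tokenizer` from above (not from the original card; the greedy span decoding below is a simplification):
```python
import torch

question = "Who added BigBird to HuggingFace Transformers?"
context = (
    "BigBird Pegasus just landed! Thanks to Vasudev Gupta, BigBird Pegasus "
    "from Google AI is merged into HuggingFace Transformers. Check it out today!!!"
)
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedily pick the most likely start/end token positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```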
In case you are interested in predicting category (null, long, short, yes, no) as well, use `BigBirdForNaturalQuestions` (instead of `BigBirdForQuestionAnswering`) from my training script.
|
vasudevgupta/dl-hack-distilgpt2 | 2021-05-23T13:29:37.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vasudevgupta | 19 | transformers | DL research papers **Title -> abstract**
**Using this model**
```python
from transformers import pipeline, GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("vasudevgupta/dl-hack-distilgpt2")
model = GPT2LMHeadModel.from_pretrained("vasudevgupta/dl-hack-distilgpt2")
agent = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(agent("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", max_length=200))
``` |
vasudevgupta/dl-hack-gpt2-large | 2021-05-23T13:34:31.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vasudevgupta | 7 | transformers | DL research papers **Title -> abstract**
**Using this model**
```python
from transformers import pipeline, GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("vasudevgupta/dl-hack-gpt2-large")
model = GPT2LMHeadModel.from_pretrained("vasudevgupta/dl-hack-gpt2-large")
agent = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(agent("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", max_length=200))
``` |
vasudevgupta/dl-hack-pegasus-large | 2021-04-30T07:33:27.000Z | [
"pytorch",
"pegasus",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | vasudevgupta | 9 | transformers | Deep Learning research papers **Title -> abstract** |
vasudevgupta/dummy | 2021-06-01T05:49:08.000Z | [] | [
".gitattributes"
] | vasudevgupta | 0 | |||
vasudevgupta/flax-bigbird-natural-questions | 2021-06-18T07:10:23.000Z | [
"jax",
"joblib",
"big_bird",
"question-answering",
"en",
"dataset:natural_questions",
"transformers",
"license:apache-2.0"
] | question-answering | [
".gitattributes",
"README.md",
"args.joblib",
"config.json",
"data_collator.joblib",
"flax_model.msgpack",
"opt_state.msgpack",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json",
"training_state.json"
] | vasudevgupta | 8 | transformers | |
vasudevgupta/mbart-bhasha-guj-eng | 2021-05-12T03:30:44.000Z | [
"pytorch",
"mbart",
"seq2seq",
"dataset:pib",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | vasudevgupta | 25 | transformers | ---
datasets: pib
widget:
- text: "હેય! હું વાસુદેવ ગુપ્તા છું"
---
mBART (a pre-trained model by Facebook) is pre-trained to de-noise multiple languages simultaneously with the BART objective.
The checkpoint available in this repository is obtained after fine-tuning `facebook/mbart-large-cc25` on all samples (~60K) from the Bhasha (pib_v1.3) Gujarati-English parallel corpus. This checkpoint gives decent results for Gujarati-English translation.
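A minimal translation sketch (not from the original card; the target-language code `en_XX` and generation settings follow the `mbart-large-cc25` conventions and are assumptions here):

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_id = "vasudevgupta/mbart-bhasha-guj-eng"
tokenizer = MBartTokenizer.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

text = "હેય! હું વાસુદેવ ગુપ્તા છું"
inputs = tokenizer(text, return_tensors="pt")
# Force English as the target language, as done for mBART-cc25 checkpoints.
generated = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"], max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```
|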
vasudevgupta/mbart-bhasha-hin-eng | 2021-05-12T03:36:02.000Z | [
"pytorch",
"mbart",
"seq2seq",
"dataset:pib",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | vasudevgupta | 13 | transformers | ---
datasets: pib
widget:
- text: "नमस्ते! मैं वासुदेव गुप्ता हूं"
---
mBART (a pre-trained model by Facebook) is pre-trained to de-noise multiple languages simultaneously with the BART objective.
The checkpoint available in this repository is obtained after fine-tuning `facebook/mbart-large-cc25` on all samples (~260K) from the Bhasha (pib_v1.3) Hindi-English parallel corpus. This checkpoint gives decent results for Hindi-English translation. |
vasudevgupta/mbart-iitb-hin-eng | 2021-05-12T03:35:21.000Z | [
"pytorch",
"mbart",
"seq2seq",
"dataset:pib",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | vasudevgupta | 26 | transformers | ---
datasets: pib
widget:
- text: "नमस्ते! मैं वासुदेव गुप्ता हूं"
---
mBART (a pre-trained model by Facebook) is pre-trained to de-noise multiple languages simultaneously with the BART objective.
The checkpoint available in this repository is obtained after fine-tuning `facebook/mbart-large-cc25` on 0.5M samples from the IIT-B Hindi-English parallel corpus. This checkpoint gives decent results for Hindi-English translation. |
vasudevgupta/mbart-summarizer-interiit | 2021-03-28T17:49:15.000Z | [
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | vasudevgupta | 42 | transformers | This model is trained as a part of **InterIIT'21 competition**, on the dataset provided by Bridgei2i. It is able to do multilingual (Hindi, English, Hinglish) summarization (many -> one) & is capable of generating summaries in English regardless of the input language.
| Rouge-L | Sacrebleu | Headline Similarity (using sentence-transformers) |
|-----------------------|-----------|---------------------------------------------------|
| p=0.46 r=0.49 f1=0.52 | 23.46 | 0.75 |
mBART is initialized from **facebook/mbart-large-cc25** and trained following the strategy described in our [GitHub](https://github.com/vasudevgupta7/Bridgei2i-Winning-Solutions). |
vasudevgupta/offnote-mbart-adapters-bhasha | 2021-04-07T13:53:17.000Z | [] | [
".gitattributes",
".gitignore",
"README.md",
"adapters-guj-eng.pt",
"adapters-hin-eng.pt"
] | vasudevgupta | 18 | **Project GitHub:** https://github.com/vasudevgupta7/transformers-adapters
**Notes**
* The base model can be downloaded from `facebook/mbart-large-cc25`
* `adapters-hin-eng.pt`: adapters for Hindi-English
* `adapters-guj-eng.pt`: adapters for Gujarati-English
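A minimal sketch for fetching and inspecting the adapter weights (an assumption for illustration: it presumes each `.pt` file is a plain PyTorch state dict; the exact keys depend on the project's adapter implementation):

```python
import torch
from huggingface_hub import hf_hub_download

# Download one adapter file from this repository and load it on CPU.
path = hf_hub_download(
    repo_id="vasudevgupta/offnote-mbart-adapters-bhasha",
    filename="adapters-hin-eng.pt",
)
adapter_state = torch.load(path, map_location="cpu")
print(type(adapter_state))  # inspect before wiring the weights into mBART
```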
|
||
vasudevgupta/tf-wav2vec2-base-960h | 2021-06-12T23:18:52.000Z | [
"tf",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"tf_model.h5"
] | vasudevgupta | 7 | transformers | TensorFlow version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h). Obtained using script from https://github.com/vasudevgupta7/gsoc-wav2vec2. |
|
vblagoje/bert-base-searchqa | 2021-05-20T08:49:36.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | vblagoje | 19 | transformers | |
vblagoje/bert-english-uncased-finetuned-chunk | 2021-05-20T08:50:30.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_predictions.txt",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | vblagoje | 30 | transformers | |
vblagoje/bert-english-uncased-finetuned-pos | 2021-05-20T08:51:26.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"test_results.txt",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | vblagoje | 1,064 | transformers | |
vblagoje/tiny_bert_sparse | 2021-05-20T08:52:06.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"log_history.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | vblagoje | 9 | transformers | |
vctc92/defdsfsd | 2021-06-15T15:45:30.000Z | [] | [
".gitattributes"
] | vctc92 | 0 | |||
vctc92/test | 2021-05-23T07:34:57.000Z | [] | [
".gitattributes"
] | vctc92 | 0 | |||
vctc92/test1 | 2021-06-01T11:30:44.000Z | [] | [
".gitattributes"
] | vctc92 | 0 | |||
vctc92/test123 | 2021-06-08T16:40:16.000Z | [] | [
".gitattributes"
] | vctc92 | 0 | |||
vera-pro/bert-mention-de | 2021-05-20T08:53:09.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt",
"checkpoint-6000/optimizer.pt"
] | vera-pro | 16 | transformers | |
vera-pro/bert-mention-en | 2021-05-20T08:54:13.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | vera-pro | 25 | transformers | |
vera-pro/bert-mention-fr | 2021-05-20T08:55:28.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | vera-pro | 23 | transformers | |
verissimomanoel/RobertaTwitterBR | 2021-05-20T22:53:32.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json"
] | verissimomanoel | 11 | transformers | ### Twitter RoBERTa BR
This is a Portuguese Twitter RoBERTa model trained on ~7M tweets.
The results will be posted in the future.
### Example of use
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("verissimomanoel/RobertaTwitterBR")
model = AutoModel.from_pretrained("verissimomanoel/RobertaTwitterBR")
```
|
verloop/Hinglish-Bert-Class | 2021-05-20T08:56:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".DS_Store",
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | verloop | 18 | transformers | |
verloop/Hinglish-Bert | 2021-05-20T08:58:33.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | verloop | 29 | transformers | |
verloop/Hinglish-DistilBert-Class | 2021-05-20T08:59:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".DS_Store",
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | verloop | 11 | transformers | |
verycuriousotter/asdf1 | 2021-05-19T18:46:56.000Z | [] | [
".gitattributes"
] | verycuriousotter | 0 | |||
vespa-engine/col-minilm | 2021-05-20T08:59:29.000Z | [
"pytorch",
"bert",
"arxiv:2004.12832",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | vespa-engine | 21 | transformers | # MS Marco Ranking with ColBERT on Vespa.ai
Model is based on [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT](https://arxiv.org/abs/2004.12832).
This BERT model is based on [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2) and trained using the
original [ColBERT training routine](https://github.com/stanford-futuredata/ColBERT/).
This model has 22.3M trainable parameters and is approximately 2x faster than
[vespa-engine/colbert-medium](https://huggingface.co/vespa-engine/colbert-medium), with better or on-par MRR@10 on dev.
The model weights have been tuned by training using a randomized sample of MS Marco training triplets
[MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking).
To use this model with vespa.ai for MS Marco Passage Ranking, see
[MS Marco Ranking using Vespa.ai sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking).
# MS Marco Passage Ranking
| MS Marco Passage Ranking Query Set | MRR@10 ColBERT on Vespa.ai |
|------------------------------------|----------------|
| Dev | 0.364 |
Recall@k On Dev (6980 queries)
|K | Recall@K |
|------------------------------------|----------------|
| 50 | 0.816 |
| 200 | 0.905 |
| 1000 | 0.939 |
The MRR@10 on dev is achieved by re-ranking the top 1K passages retrieved by a dense retriever based on
[sentence-transformers/msmarco-MiniLM-L-6-v3](https://huggingface.co/sentence-transformers/msmarco-MiniLM-L-6-v3). Re-ranking the original top-1000 dev candidates instead
gives 0.354 MRR@10 (Recall@1K 0.82).
The official baseline BM25 ranking model achieves MRR@10 of 0.16 on the eval and 0.167 on the dev question set.
See [MS Marco Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).
## Export ColBERT query encoder to ONNX
We represent the ColBERT query encoder in the Vespa runtime, to map the textual query representation to the tensor representation. For this
we use Vespa's support for running ONNX models. One can use the following snippet to export the model for serving.
```python
from transformers import BertModel
from transformers import BertPreTrainedModel
from transformers import BertConfig
import torch
import torch.nn as nn
class VespaColBERT(BertPreTrainedModel):
def __init__(self,config):
super().__init__(config)
self.bert = BertModel(config)
self.linear = nn.Linear(config.hidden_size, 32, bias=False)
self.init_weights()
def forward(self, input_ids, attention_mask):
Q = self.bert(input_ids,attention_mask=attention_mask)[0]
Q = self.linear(Q)
return torch.nn.functional.normalize(Q, p=2, dim=2)
colbert_query_encoder = VespaColBERT.from_pretrained("vespa-engine/col-minilm")
#Export model to ONNX for serving in Vespa
input_names = ["input_ids", "attention_mask"]
output_names = ["contextual"]
# input: up to 32 query terms
input_ids = torch.ones(1,32, dtype=torch.int64)
attention_mask = torch.ones(1,32,dtype=torch.int64)
args = (input_ids, attention_mask)
torch.onnx.export(colbert_query_encoder,
args=args,
f="query_encoder_colbert.onnx",
input_names = input_names,
output_names = output_names,
dynamic_axes = {
"input_ids": {0: "batch"},
"attention_mask": {0: "batch"},
"contextual": {0: "batch"},
},
opset_version=11)
```
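As a quick sanity check (not part of the original card), the exported query encoder can be run with `onnxruntime`; loading the tokenizer straight from this repository is an assumption:

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vespa-engine/col-minilm")
encoded = tokenizer("what is colbert", padding="max_length", max_length=32, return_tensors="np")

session = ort.InferenceSession("query_encoder_colbert.onnx", providers=["CPUExecutionProvider"])
(contextual,) = session.run(
    ["contextual"],
    {
        "input_ids": encoded["input_ids"].astype(np.int64),
        "attention_mask": encoded["attention_mask"].astype(np.int64),
    },
)
print(contextual.shape)  # (1, 32, 32): one normalized 32-dim vector per query term
```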
# Representing the model on Vespa.ai
See [Ranking with ONNX models](https://docs.vespa.ai/documentation/onnx.html) and [MS Marco Ranking sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking)
|
|
vespa-engine/colbert-medium | 2021-05-20T08:59:43.000Z | [
"pytorch",
"bert",
"arxiv:2004.12832",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | vespa-engine | 75 | transformers | # MS Marco Ranking with ColBERT on Vespa.ai
Model is based on [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT](https://arxiv.org/abs/2004.12832).
This BERT model is based on [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) and trained using the
original [ColBERT training routine](https://github.com/stanford-futuredata/ColBERT/).
The model weights have been tuned by training using the `triples.train.small.tar.gz` from [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking).
To use this model with vespa.ai for MS Marco Passage Ranking, see
[MS Marco Ranking using Vespa.ai sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking).
# MS Marco Passage Ranking
| MS Marco Passage Ranking Query Set | MRR@10 ColBERT on Vespa.ai |
|------------------------------------|----------------|
| Dev | 0.354 |
| Eval | 0.347 |
The official baseline BM25 ranking model achieves MRR@10 of 0.16 on the eval and 0.167 on the dev question set.
See [MS Marco Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).
## Export ColBERT query encoder to ONNX
We represent the ColBERT query encoder in the Vespa runtime, to map the textual query representation to the tensor representation. For this
we use Vespa's support for running ONNX models. One can use the following snippet to export the model for serving.
```python
from transformers import BertModel
from transformers import BertPreTrainedModel
from transformers import BertConfig
import torch
import torch.nn as nn
class VespaColBERT(BertPreTrainedModel):
def __init__(self,config):
super().__init__(config)
self.bert = BertModel(config)
self.linear = nn.Linear(config.hidden_size, 32, bias=False)
self.init_weights()
def forward(self, input_ids, attention_mask):
Q = self.bert(input_ids,attention_mask=attention_mask)[0]
Q = self.linear(Q)
return torch.nn.functional.normalize(Q, p=2, dim=2)
colbert_query_encoder = VespaColBERT.from_pretrained("vespa-engine/colbert-medium")
#Export model to ONNX for serving in Vespa
input_names = ["input_ids", "attention_mask"]
output_names = ["contextual"]
# input: up to 32 query terms
input_ids = torch.ones(1,32, dtype=torch.int64)
attention_mask = torch.ones(1,32,dtype=torch.int64)
args = (input_ids, attention_mask)
torch.onnx.export(colbert_query_encoder,
args=args,
f="query_encoder_colbert.onnx",
input_names = input_names,
output_names = output_names,
dynamic_axes = {
"input_ids": {0: "batch"},
"attention_mask": {0: "batch"},
"contextual": {0: "batch"},
},
opset_version=11)
```
# Representing the model on Vespa.ai
See [Ranking with ONNX models](https://docs.vespa.ai/documentation/onnx.html) and [MS Marco Ranking sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking)
|
|
vg/myT5 | 2021-04-03T15:48:36.000Z | [] | [
".gitattributes"
] | vg | 0 | |||
vicd/sentiment | 2021-05-20T22:54:45.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vicd | 63 | transformers | |
vicgalle/xlm-roberta-large-xnli-anli | 2021-03-04T17:05:03.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"multilingual",
"dataset:mnli",
"dataset:xnli",
"dataset:anli",
"transformers",
"zero-shot-classification",
"nli",
"license:mit",
"pipeline_tag:zero-shot-classification"
] | zero-shot-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | vicgalle | 6,746 | transformers | ---
language: multilingual
tags:
- zero-shot-classification
- nli
- pytorch
datasets:
- mnli
- xnli
- anli
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "De pugna erat fantastic. Nam Crixo decem quam dilexit et praeciderunt caput aemulus."
candidate_labels: "violent, peaceful"
- text: "La película empezaba bien pero terminó siendo un desastre."
candidate_labels: "positivo, negativo, neutral"
- text: "La película empezó siendo un desastre pero en general fue bien."
candidate_labels: "positivo, negativo, neutral"
- text: "¿A quién vas a votar en 2020?"
candidate_labels: "Europa, elecciones, política, ciencia, deportes"
---
### XLM-RoBERTa-large-XNLI-ANLI
XLM-RoBERTa-large model fine-tuned on several NLI datasets, ready to use for zero-shot classification.
Here are the accuracies for several test datasets:
| | XNLI-es | XNLI-fr | ANLI-R1 | ANLI-R2 | ANLI-R3 |
|-----------------------------|---------|---------|---------|---------|---------|
| xlm-roberta-large-xnli-anli | 93.7% | 93.2% | 68.5% | 53.6% | 49.0% |
The model can be loaded with the zero-shot-classification pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="vicgalle/xlm-roberta-large-xnli-anli")
```
You can then use this pipeline to classify sequences into any of the class names you specify:
```python
sequence_to_classify = "Algún día iré a ver el mundo"
candidate_labels = ['viaje', 'cocina', 'danza']
classifier(sequence_to_classify, candidate_labels)
#{'sequence': 'Algún día iré a ver el mundo',
#'labels': ['viaje', 'danza', 'cocina'],
#'scores': [0.9991760849952698, 0.0004178212257102132, 0.0004059972707182169]}
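# Not in the original card: when the labels are Spanish, a Spanish hypothesis
# template (a standard argument of the zero-shot pipeline) can improve results.
classifier(sequence_to_classify, candidate_labels, hypothesis_template="Este ejemplo es {}.")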
``` |
vietstar87/viet | 2020-12-19T18:43:38.000Z | [] | [
".gitattributes"
] | vietstar87 | 0 | |||
vigneshv7/data_classification | 2021-02-02T17:29:07.000Z | [] | [
".gitattributes",
"README.md"
] | vigneshv7 | 0 | bert-classification-model |
||
vijayethuraj/model_name | 2021-03-07T12:02:02.000Z | [] | [
".gitattributes"
] | vijayethuraj | 0 | |||
vinai/bertweet-base | 2021-05-20T22:56:38.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"arxiv:1911.02116",
"arxiv:2005.10200",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"bpe.codes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | vinai | 74,383 | transformers | # <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets
- BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure, using the same model configuration as [BERT-base](https://github.com/google-research/bert).
- The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.
- BERTweet outperforms its competitors RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116), and improves on previous state-of-the-art models on three downstream Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification.
The general architecture and experimental results of BERTweet can be found in our [paper](https://arxiv.org/abs/2005.10200):
@inproceedings{bertweet,
title = {{BERTweet: A pre-trained language model for English Tweets}},
author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020}
}
**Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
### <a name="install2"></a> Installation
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
- Install `emoji`: `pip3 install emoji`
### <a name="models2"></a> Pre-trained models
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/bertweet-base` | 135M | base | 845M English Tweets (cased)
`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
The two pre-trained models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` are obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
### <a name="usage2"></a> Example usage
```python
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = bertweet(input_ids) # Models outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
```
### <a name="preprocess"></a> Normalize raw input Tweets
Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/url links into special tokens `@USER` and `HTTPURL`, respectively. It is therefore recommended to apply the same pre-processing to raw input Tweets in BERTweet-based downstream applications. BERTweet provides this pre-processing step via the `normalization` argument.
```python
import torch
from transformers import AutoTokenizer
# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
# from transformers import BertweetTokenizer
# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"
input_ids = torch.tensor([tokenizer.encode(line)])
```
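For contexts where the `normalization` argument is not available, a rough equivalent of the described pre-processing might look like this (a sketch based on the description above, not the authors' exact script):

```python
from nltk.tokenize import TweetTokenizer
from emoji import demojize

tweet_tokenizer = TweetTokenizer()

def normalize_tweet(tweet: str) -> str:
    tokens = tweet_tokenizer.tokenize(tweet)
    normalized = []
    for token in tokens:
        if token.startswith("@"):
            normalized.append("@USER")          # user mentions -> special token
        elif token.lower().startswith(("http", "www")):
            normalized.append("HTTPURL")        # web/url links -> special token
        else:
            normalized.append(demojize(token))  # emotion icons -> text strings
    return " ".join(normalized)

print(normalize_tweet("SC has first two presumptive cases of coronavirus 😢 @DHEC https://t.co/x"))
```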
|
vinai/bertweet-covid19-base-cased | 2021-05-20T22:58:01.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"arxiv:1911.02116",
"arxiv:2005.10200",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"bpe.codes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | vinai | 686 | transformers | # <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets
- BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure, using the same model configuration as [BERT-base](https://github.com/google-research/bert).
- The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.
- BERTweet outperforms its competitors RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116), and improves on previous state-of-the-art models on three downstream Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification.
The general architecture and experimental results of BERTweet can be found in our [paper](https://arxiv.org/abs/2005.10200):
@inproceedings{bertweet,
title = {{BERTweet: A pre-trained language model for English Tweets}},
author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020}
}
**Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
### <a name="install2"></a> Installation
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
- Install `emoji`: `pip3 install emoji`
### <a name="models2"></a> Pre-trained models
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/bertweet-base` | 135M | base | 845M English Tweets (cased)
`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
The two pre-trained models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` are obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
### <a name="usage2"></a> Example usage
```python
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-covid19-base-cased")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-cased")
# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = bertweet(input_ids) # Models outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-covid19-base-cased")
```
### <a name="preprocess"></a> Normalize raw input Tweets
Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/url links into special tokens `@USER` and `HTTPURL`, respectively. It is therefore recommended to apply the same pre-processing to raw input Tweets in BERTweet-based downstream applications. BERTweet provides this pre-processing step via the `normalization` argument.
```python
import torch
from transformers import AutoTokenizer
# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-cased", normalization=True)
# from transformers import BertweetTokenizer
# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-covid19-base-cased", normalization=True)
line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"
input_ids = torch.tensor([tokenizer.encode(line)])
```
|
vinai/bertweet-covid19-base-uncased | 2021-05-20T22:59:24.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"arxiv:1911.02116",
"arxiv:2005.10200",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"bpe.codes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | vinai | 1,142 | transformers | # <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets
- BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure, using the same model configuration as [BERT-base](https://github.com/google-research/bert).
- The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.
- BERTweet outperforms its competitors RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116), and improves on previous state-of-the-art models on three downstream Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification.
The general architecture and experimental results of BERTweet can be found in our [paper](https://arxiv.org/abs/2005.10200):
@inproceedings{bertweet,
title = {{BERTweet: A pre-trained language model for English Tweets}},
author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020}
}
**Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
### <a name="install2"></a> Installation
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
- Install `emoji`: `pip3 install emoji`
### <a name="models2"></a> Pre-trained models
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/bertweet-base` | 135M | base | 845M English Tweets (cased)
`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
The two pre-trained models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` are obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
### <a name="usage2"></a> Example usage
```python
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-covid19-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased")
# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = bertweet(input_ids) # Models outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-covid19-base-uncased")
```
### <a name="preprocess"></a> Normalize raw input Tweets
Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/url links into special tokens `@USER` and `HTTPURL`, respectively. It is therefore recommended to apply the same pre-processing to raw input Tweets in BERTweet-based downstream applications. BERTweet provides this pre-processing step via the `normalization` argument.
```python
import torch
from transformers import AutoTokenizer
# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased", normalization=True)
# from transformers import BertweetTokenizer
# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased", normalization=True)
line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"
input_ids = torch.tensor([tokenizer.encode(line)])
```
|
vinai/phobert-base | 2021-05-20T23:00:40.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"arxiv:2003.00744",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"bpe.codes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | vinai | 129,460 | transformers | # <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
- Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
}
**Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
### Installation <a name="install2"></a>
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
### Pre-trained models <a name="models2"></a>
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/phobert-base` | 135M | base | 20GB of texts
`vinai/phobert-large` | 370M | large | 20GB of texts
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = phobert(input_ids) # Models outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```
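Since PhoBERT expects word-segmented input, raw Vietnamese text needs a word segmenter first. A minimal sketch using the `underthesea` toolkit (chosen here for illustration and an assumption on our part; the PhoBERT authors themselves use RDRSegmenter from VnCoreNLP):

```python
# pip install underthesea
from underthesea import word_tokenize

# format="text" joins multi-syllable words with underscores, as PhoBERT expects.
line = word_tokenize("Tôi là sinh viên trường đại học Công nghệ .", format="text")
print(line)  # e.g. "Tôi là sinh_viên trường đại_học Công_nghệ ."
```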
|
vinai/phobert-large | 2021-05-20T23:03:41.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"arxiv:2003.00744",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"bpe.codes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | vinai | 1,508 | transformers | # <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
- Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
}
**Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
### Installation <a name="install2"></a>
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
### Pre-trained models <a name="models2"></a>
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/phobert-base` | 135M | base | 20GB of texts
`vinai/phobert-large` | 370M | large | 20GB of texts
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-large")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-large")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = phobert(input_ids) # Models outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# phobert = TFAutoModel.from_pretrained("vinai/phobert-large")
```
|
vincentlu073/legal-zh-multi-span-bio-182-50 | 2020-11-11T21:25:10.000Z | [] | [
".gitattributes"
] | vincentlu073 | 0 | |||
vincentlu073/legal-zh-multi-span-bio | 2021-05-20T09:00:04.000Z | [
"pytorch",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | vincentlu073 | 8 | transformers | ||
vinita/finetuned_squad | 2021-03-23T15:51:35.000Z | [] | [
".gitattributes"
] | vinita | 0 | |||
vinita/model_name | 2021-03-23T15:53:48.000Z | [] | [
".gitattributes"
] | vinita | 0 | |||
vinita/my-squad-model | 2021-03-23T15:54:44.000Z | [] | [
".gitattributes"
] | vinita | 0 | |||
visualjoyce/chengyubert_2stage_stage1_wwm_ext | 2021-05-20T09:00:46.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"bert_config.json",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | visualjoyce | 201 | transformers | ||
vmicheli/lm-butlers-gpt | 2021-05-23T13:37:59.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"arxiv:2104.07972",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vmicheli | 9 | transformers | GPT model developed in [Language Models are Few-Shot Butlers](https://arxiv.org/abs/2104.07972). |
vnlongbk/vietium | 2021-04-29T10:23:00.000Z | [] | [
".gitattributes"
] | vnlongbk | 0 | |||
voidful/albert_chinese_base | 2020-12-11T22:04:22.000Z | [
"pytorch",
"albert",
"zh",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | voidful | 2,597 | transformers | ---
language: zh
---
# albert_chinese_base
This is an albert_chinese_base model from [Google's github](https://github.com/google-research/ALBERT)
converted by huggingface's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py)
## Attention (注意)
Since sentencepiece is not used in the albert_chinese_base model, AlbertTokenizer will fail to load the vocabulary, so you have to call BertTokenizer instead!!!
We can run a MaskedLM prediction to verify that this approach is correct.
## Justify (驗證有效性)
[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
```python
from transformers import AlbertForMaskedLM, BertTokenizer
import torch
from torch.nn.functional import softmax

pretrained = 'voidful/albert_chinese_base'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)

inputtext = "今天[MASK]情很好"
maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)  # 103 = [MASK] token id

input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=input_ids)  # `masked_lm_labels` was renamed to `labels` in newer transformers
loss, prediction_scores = outputs[:2]

# Probability distribution over the vocabulary at the masked position
logit_prob = softmax(prediction_scores[0, maskpos], dim=-1).data.tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `感 0.36333346366882324`
|
|
voidful/albert_chinese_large | 2020-12-11T22:04:25.000Z | [
"pytorch",
"albert",
"zh",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | voidful | 530 | transformers | ---
language: zh
---
# albert_chinese_large
This is an albert_chinese_large model from [Google's github](https://github.com/google-research/ALBERT)
converted by huggingface's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py)
## Attention (注意)
Since sentencepiece is not used in the albert_chinese_large model, AlbertTokenizer will fail to load the vocabulary, so you have to call BertTokenizer instead!!!
We can run a MaskedLM prediction to verify that this approach is correct.
## Justify (驗證有效性)
[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
```python
from transformers import AlbertForMaskedLM, BertTokenizer
import torch
from torch.nn.functional import softmax

pretrained = 'voidful/albert_chinese_large'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)

inputtext = "今天[MASK]情很好"
maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)  # 103 = [MASK] token id

input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=input_ids)  # `masked_lm_labels` was renamed to `labels` in newer transformers
loss, prediction_scores = outputs[:2]

# Probability distribution over the vocabulary at the masked position
logit_prob = softmax(prediction_scores[0, maskpos], dim=-1).data.tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `心 0.9422469735145569`
|
|
voidful/albert_chinese_small | 2020-12-11T22:04:28.000Z | [
"pytorch",
"albert",
"zh",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | voidful | 463 | transformers | ---
language: zh
---
# albert_chinese_small
This is an albert_chinese_small model from the [brightmart/albert_zh project](https://github.com/brightmart/albert_zh) (the albert_small_google_zh checkpoint),
converted by huggingface's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py)
## Attention (注意)
Since sentencepiece is not used in the albert_chinese_small model, AlbertTokenizer will fail to load the vocabulary, so you have to call BertTokenizer instead!!!
We can run a MaskedLM prediction to verify that this approach is correct.
## Justify (驗證有效性)
[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
```python
from transformers import AlbertForMaskedLM, BertTokenizer
import torch
from torch.nn.functional import softmax

pretrained = 'voidful/albert_chinese_small'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)

inputtext = "今天[MASK]情很好"
maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)  # 103 = [MASK] token id

input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=input_ids)  # `masked_lm_labels` was renamed to `labels` in newer transformers
loss, prediction_scores = outputs[:2]

# Probability distribution over the vocabulary at the masked position
logit_prob = softmax(prediction_scores[0, maskpos], dim=-1).data.tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `感 0.6390823125839233`
|
|
voidful/albert_chinese_tiny | 2020-12-11T22:04:32.000Z | [
"pytorch",
"albert",
"zh",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | voidful | 9,038 | transformers | ---
language: zh
---
# albert_chinese_tiny
This is an albert_chinese_tiny model (the albert_tiny_google_zh checkpoint from the [brightmart/albert_zh project](https://github.com/brightmart/albert_zh)),
converted with Hugging Face's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).
## Attention (注意)
Since sentencepiece is not used in the albert_chinese_tiny model, AlbertTokenizer cannot load the vocabulary; you have to use BertTokenizer instead!
We can verify this with a MaskedLM example below.
## Justify (驗證有效性)
[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
```python
import torch
from torch.nn.functional import softmax
from transformers import BertTokenizer, AlbertForMaskedLM

pretrained = 'voidful/albert_chinese_tiny'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)

inputtext = "今天[MASK]情很好"
# Locate the [MASK] position via the tokenizer rather than a hard-coded id
tokens = tokenizer.encode(inputtext, add_special_tokens=True)
maskpos = tokens.index(tokenizer.mask_token_id)
input_ids = torch.tensor(tokens).unsqueeze(0)  # batch size 1

# `masked_lm_labels` was renamed to `labels` in newer transformers releases
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
logit_prob = softmax(prediction_scores[0, maskpos], dim=-1).tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `感 0.40312355756759644`
|
|
voidful/albert_chinese_xlarge | 2020-12-11T22:04:35.000Z | [
"pytorch",
"albert",
"zh",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | voidful | 153 | transformers | ---
language: zh
---
# albert_chinese_xlarge
This is an albert_chinese_xlarge model from [Google's github](https://github.com/google-research/ALBERT),
converted with Hugging Face's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).
## Attention (注意)
Since sentencepiece is not used in the albert_chinese_xlarge model, AlbertTokenizer cannot load the vocabulary; you have to use BertTokenizer instead!
We can verify this with a MaskedLM example below.
## Justify (驗證有效性)
[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
```python
import torch
from torch.nn.functional import softmax
from transformers import BertTokenizer, AlbertForMaskedLM

pretrained = 'voidful/albert_chinese_xlarge'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)

inputtext = "今天[MASK]情很好"
# Locate the [MASK] position via the tokenizer rather than a hard-coded id
tokens = tokenizer.encode(inputtext, add_special_tokens=True)
maskpos = tokens.index(tokenizer.mask_token_id)
input_ids = torch.tensor(tokens).unsqueeze(0)  # batch size 1

# `masked_lm_labels` was renamed to `labels` in newer transformers releases
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
logit_prob = softmax(prediction_scores[0, maskpos], dim=-1).tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `心 0.9942440390586853`
|
|
voidful/albert_chinese_xxlarge | 2020-12-11T22:04:38.000Z | [
"pytorch",
"albert",
"zh",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | voidful | 2,793 | transformers | ---
language: zh
---
# albert_chinese_xxlarge
This is an albert_chinese_xxlarge model from [Google's github](https://github.com/google-research/ALBERT),
converted with Hugging Face's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).
## Attention (注意)
Since sentencepiece is not used in the albert_chinese_xxlarge model, AlbertTokenizer cannot load the vocabulary; you have to use BertTokenizer instead!
We can verify this with a MaskedLM example below.
## Justify (驗證有效性)
[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
```python
import torch
from torch.nn.functional import softmax
from transformers import BertTokenizer, AlbertForMaskedLM

pretrained = 'voidful/albert_chinese_xxlarge'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)

inputtext = "今天[MASK]情很好"
# Locate the [MASK] position via the tokenizer rather than a hard-coded id
tokens = tokenizer.encode(inputtext, add_special_tokens=True)
maskpos = tokens.index(tokenizer.mask_token_id)
input_ids = torch.tensor(tokens).unsqueeze(0)  # batch size 1

# `masked_lm_labels` was renamed to `labels` in newer transformers releases
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
logit_prob = softmax(prediction_scores[0, maskpos], dim=-1).tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token, logit_prob[predicted_index])
```
Result: `心 0.995713472366333`
|
|
voidful/bart-distractor-generation-both | 2021-04-04T16:20:20.000Z | [
"pytorch",
"bart",
"seq2seq",
"en",
"dataset:race",
"transformers",
"distractor",
"generation",
"text2text-generation",
"pipeline_tag:text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | voidful | 248 | transformers | ---
language: en
tags:
- bart
- distractor
- generation
- seq2seq
datasets:
- race
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed . </s> The passage mainly tells us </s> treehouses in france."
---
# bart-distractor-generation-both
## Model description
This model is a sequence-to-sequence distractor generator: it takes an answer, a question and a context as input and generates a distractor as output. It is based on a pretrained `bart-base` model.
This model was trained with Parallel MLM & Answer Negative Regularization; see the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.393/).
For details, please see https://github.com/voidful/BDG.
## Intended uses & limitations
The model is trained to generate examination-style multiple-choice distractors. It performs best with full-sentence answers.
#### How to use
The model takes a concatenated context, question and answer as its input sequence, and generates a full distractor sentence as its output sequence. The max sequence length is 1024 tokens. Inputs should be organised into the following format:
```
context </s> question </s> answer
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
For details, please see https://github.com/voidful/BDG.
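A minimal sketch of this flow, assuming the stock `BartTokenizer` and `BartForConditionalGeneration` classes; the context/question/answer strings and decoding parameters below are illustrative, not part of the original recipe:
```python
from transformers import BartTokenizer, BartForConditionalGeneration

pretrained = "voidful/bart-distractor-generation-both"
tokenizer = BartTokenizer.from_pretrained(pretrained)
model = BartForConditionalGeneration.from_pretrained(pretrained)

# Assemble the input as: context </s> question </s> answer
text = (
    "42 is the answer to life, the universe and everything. </s> "
    "What is 42? </s> the answer to life, the universe and everything"
)
input_ids = tokenizer.encode(text, return_tensors="pt", truncation=True, max_length=1024)

# Decoding settings here are illustrative; tune them for your use case
output_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```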
#### Limitations and bias
The model is limited to generating distractors in the same style as those found in [RACE](https://www.aclweb.org/anthology/D17-1082/). The generated distractors can be leading or can reflect biases present in the context. If the context is too short or completely absent, or if the context, question and answer do not match, the generated distractor is likely to be incoherent. |
voidful/bart-distractor-generation-pm | 2021-04-04T16:20:25.000Z | [
"pytorch",
"bart",
"seq2seq",
"en",
"dataset:race",
"transformers",
"distractor",
"generation",
"text2text-generation",
"pipeline_tag:text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | voidful | 459 | transformers | ---
language: en
tags:
- bart
- distractor
- generation
- seq2seq
datasets:
- race
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed . </s> The passage mainly tells us </s> treehouses in france."
---
# bart-distractor-generation-pm
## Model description
This model is a sequence-to-sequence distractor generator: it takes an answer, a question and a context as input and generates a distractor as output. It is based on a pretrained `bart-base` model.
This model was trained with Parallel MLM; see the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.393/).
For details, please see https://github.com/voidful/BDG.
## Intended uses & limitations
The model is trained to generate examination-style multiple-choice distractors. It performs best with full-sentence answers.
#### How to use
The model takes a concatenated context, question and answer as its input sequence, and generates a full distractor sentence as its output sequence. The max sequence length is 1024 tokens. Inputs should be organised into the following format:
```
context </s> question </s> answer
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
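The snippet below is a minimal usage sketch, assuming the stock `BartTokenizer`/`BartForConditionalGeneration` classes; the example strings and decoding parameters are illustrative:
```python
from transformers import BartTokenizer, BartForConditionalGeneration

pretrained = "voidful/bart-distractor-generation-pm"
tokenizer = BartTokenizer.from_pretrained(pretrained)
model = BartForConditionalGeneration.from_pretrained(pretrained)

# Assemble the input as: context </s> question </s> answer
text = (
    "42 is the answer to life, the universe and everything. </s> "
    "What is 42? </s> the answer to life, the universe and everything"
)
input_ids = tokenizer.encode(text, return_tensors="pt", truncation=True, max_length=1024)

# Decoding settings here are illustrative; tune them for your use case
output_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```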
#### Limitations and bias
The model is limited to generating distractors in the same style as those found in [RACE](https://www.aclweb.org/anthology/D17-1082/). The generated distractors can be leading or can reflect biases present in the context. If the context is too short or completely absent, or if the context, question and answer do not match, the generated distractor is likely to be incoherent. |
voidful/bart-distractor-generation | 2021-04-04T16:18:19.000Z | [
"pytorch",
"bart",
"seq2seq",
"en",
"dataset:race",
"transformers",
"distractor",
"generation",
"text2text-generation",
"pipeline_tag:text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | voidful | 6,405 | transformers | ---
language: en
tags:
- bart
- distractor
- generation
- seq2seq
datasets:
- race
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed . </s> The passage mainly tells us </s> treehouses in france."
---
# bart-distractor-generation
## Model description
This model is a sequence-to-sequence distractor generator: it takes an answer, a question and a context as input and generates a distractor as output. It is based on a pretrained `bart-base` model.
For details, please see https://github.com/voidful/BDG.
## Intended uses & limitations
The model is trained to generate examination-style multiple-choice distractors. It performs best with full-sentence answers.
#### How to use
The model takes a concatenated context, question and answer as its input sequence, and generates a full distractor sentence as its output sequence. The max sequence length is 1024 tokens. Inputs should be organised into the following format:
```
context </s> question </s> answer
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
For details, please see https://github.com/voidful/BDG.
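A minimal sketch of this flow, assuming the stock `BartTokenizer` and `BartForConditionalGeneration` classes; the example strings and decoding parameters below are illustrative:
```python
from transformers import BartTokenizer, BartForConditionalGeneration

pretrained = "voidful/bart-distractor-generation"
tokenizer = BartTokenizer.from_pretrained(pretrained)
model = BartForConditionalGeneration.from_pretrained(pretrained)

# Assemble the input as: context </s> question </s> answer
text = (
    "42 is the answer to life, the universe and everything. </s> "
    "What is 42? </s> the answer to life, the universe and everything"
)
input_ids = tokenizer.encode(text, return_tensors="pt", truncation=True, max_length=1024)

# Decoding settings here are illustrative; tune them for your use case
output_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```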
#### Limitations and bias
The model is limited to generating distractors in the same style as those found in [RACE](https://www.aclweb.org/anthology/D17-1082/). The generated distractors can be leading or can reflect biases present in the context. If the context is too short or completely absent, or if the context, question and answer do not match, the generated distractor is likely to be incoherent. |
voidful/bart-eqg-question-generator | 2021-04-07T10:02:16.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | voidful | 120 | transformers |