| Column | Type | Range / values |
| -------------------- | ------------ | -------------- |
| modelId | string | length 4-112 |
| lastModified | string | length 24 |
| tags | list | |
| pipeline_tag | string | 21 classes |
| files | list | |
| publishedBy | string | length 2-37 |
| downloads_last_month | int32 | 0-9.44M |
| library | string | 15 classes |
| modelCard | large string | length 0-100k |
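The records below follow this schema, one field value per line, with each `modelCard` value holding a flattened markdown README. As a minimal sketch of how such a dump can be queried (assuming it has been exported to a JSON Lines file — `models.jsonl` is a hypothetical name — with one object per line and the field names above), the snippet filters records by `pipeline_tag` and ranks them by `downloads_last_month`:

```python
import json

# Minimal sketch, not part of the dump itself: it assumes the records have been
# exported to a JSON Lines file ("models.jsonl" is a hypothetical name), one
# JSON object per line, using the field names from the schema table above.
def load_records(path="models.jsonl"):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)

def top_models(records, pipeline_tag="question-answering", limit=5):
    # Keep only records with the requested pipeline tag, then rank them by the
    # downloads_last_month column (an int32 in the schema).
    matching = [r for r in records if r.get("pipeline_tag") == pipeline_tag]
    matching.sort(key=lambda r: r.get("downloads_last_month") or 0, reverse=True)
    return [(r["modelId"], r.get("downloads_last_month")) for r in matching[:limit]]

if __name__ == "__main__":
    for model_id, downloads in top_models(load_records()):
        print(f"{model_id}: {downloads} downloads last month")
```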
mrm8488/scibert_scivocab-finetuned-CORD19
2021-05-20T00:48:35.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
39
transformers
mrm8488/spanbert-base-finetuned-squadv1
2021-05-20T00:49:33.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin" ]
mrm8488
13
transformers
--- language: en thumbnail: --- # SpanBERT base fine-tuned on SQuAD v1 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)). ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_squad.py \ --do_train \ --do_eval \ --model spanbert-base-cased \ --train_file train-v1.1.json \ --dev_file dev-v1.1.json \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 4 \ --max_seq_length 512 \ --doc_stride 128 \ --eval_metric f1 \ --output_dir squad_output \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5 | 76.5 | 73.1 | 67.7 | | SpanBERT (base) | **92.4** (this one) | [83.6](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-base-finetuned-squadv1", tokenizer="SpanBERT/spanbert-base-cased" ) qa_pipeline({ 'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero lately?" }) # Output: {'answer': 'very hard in the repository hugginface/transformers', 'end': 82, 'score': 0.327230326857725, 'start': 31} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-base-finetuned-squadv2
2021-05-20T00:51:05.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin" ]
mrm8488
29
transformers
--- language: en thumbnail: --- # SpanBERT base fine-tuned on SQuAD v2 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)). ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_squad.py \ --do_train \ --do_eval \ --model spanbert-base-cased \ --train_file train-v2.0.json \ --dev_file dev-v2.0.json \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 4 \ --max_seq_length 512 \ --doc_stride 128 \ --eval_metric best_f1 \ --output_dir squad2_output \ --version_2_with_negative \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5 | 76.5 | 73.1 | 67.7 | | SpanBERT (base) | [92.4](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | **83.6** (this one) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-base-finetuned-squadv2", tokenizer="SpanBERT/spanbert-base-cased" ) qa_pipeline({ 'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero lately?" }) # Output: {'answer': 'very hard', 'end': 40, 'score': 0.9052708846768347, 'start': 31} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-base-finetuned-tacred
2021-05-20T00:53:07.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
89
transformers
--- language: en thumbnail: --- # SpanBERT base fine-tuned on TACRED [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [TACRED](https://nlp.stanford.edu/projects/tacred/) dataset by [them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution) ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Dataset 📚 [TACRED](https://nlp.stanford.edu/projects/tacred/) A large-scale relation extraction dataset with 106k+ examples over 42 TAC KBP relation types. ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_tacred.py \ --do_train \ --do_eval \ --data_dir <TACRED_DATA_DIR> \ --model spanbert-base-cased \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 10 \ --max_seq_length 128 \ --output_dir tacred_dir \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | **68.2** (this one) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-finetuned-squadv1
2021-05-20T00:55:17.000Z
[ "pytorch", "jax", "bert", "question-answering", "en", "arxiv:1907.10529", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
301
transformers
--- language: en thumbnail: --- # SpanBERT (spanbert-base-cased) fine-tuned on SQuAD v1.1 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task. ## Details of SpanBERT A pre-training method that is designed to better represent and predict spans of text. [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/) contains 100,000+ question-answer pairs on 500+ articles. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 87.7k | | SQuAD1.1 | eval | 10.6k | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **85.49** | | **F1** | **91.98** | ### Raw metrics: ```json { "exact": 85.49668874172185, "f1": 91.9845699540379, "total": 10570, "HasAns_exact": 85.49668874172185, "HasAns_f1": 91.9845699540379, "HasAns_total": 10570, "best_exact": 85.49668874172185, "best_exact_thresh": 0.0, "best_f1": 91.9845699540379, "best_f1_thresh": 0.0 } ``` ## Comparison: | Model | EM | F1 score | | ----------------------------------------------------------------------------------------- | --------- | --------- | | [SpanBert official repo](https://github.com/facebookresearch/SpanBERT#pre-trained-models) | - | 92.4\* | | [spanbert-finetuned-squadv1](https://huggingface.co/mrm8488/spanbert-finetuned-squadv1) | **85.49** | **91.98** | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-finetuned-squadv1", tokenizer="mrm8488/spanbert-finetuned-squadv1" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-finetuned-squadv2
2021-05-20T00:56:45.000Z
[ "pytorch", "jax", "tfsavedmodel", "bert", "question-answering", "en", "arxiv:1907.10529", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "saved_model.tar.gz", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
1,446
transformers
--- language: en thumbnail: --- # SpanBERT (spanbert-base-cased) fine-tuned on SQuAD v2 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task. ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **78.80** | | **F1** | **82.22** | ### Raw metrics: ```json { "exact": 78.80064010780762, "f1": 82.22801347271162, "total": 11873, "HasAns_exact": 78.74493927125506, "HasAns_f1": 85.60951483831069, "HasAns_total": 5928, "NoAns_exact": 78.85618166526493, "NoAns_f1": 78.85618166526493, "NoAns_total": 5945, "best_exact": 78.80064010780762, "best_exact_thresh": 0.0, "best_f1": 82.2280134727116, "best_f1_thresh": 0.0 } ``` ## Comparison: | Model | EM | F1 score | | ----------------------------------------------------------------------------------------- | --------- | --------- | | [SpanBert official repo](https://github.com/facebookresearch/SpanBERT#pre-trained-models) | - | 83.6\* | | [spanbert-finetuned-squadv2](https://huggingface.co/mrm8488/spanbert-finetuned-squadv2) | **78.80** | **82.22** | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-finetuned-squadv2", tokenizer="mrm8488/spanbert-finetuned-squadv2" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) # Output: {'answer': 'Manuel Romero','end': 13,'score': 6.836378586818937e-09, 'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-large-finetuned-squadv1
2021-05-20T00:58:31.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin" ]
mrm8488
17
transformers
--- language: en thumbnail: --- # SpanBERT large fine-tuned on SQuAD v1 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)). ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_squad.py \ --do_train \ --do_eval \ --model spanbert-large-cased \ --train_file train-v1.1.json \ --dev_file dev-v1.1.json \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 4 \ --max_seq_length 512 \ --doc_stride 128 \ --eval_metric f1 \ --output_dir squad_output \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | **94.6** (this) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-large-finetuned-squadv1", tokenizer="SpanBERT/spanbert-large-cased" ) qa_pipeline({ 'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero lately?" }) # Output: {'answer': 'very hard in the repository hugginface/transformers', 'end': 82, 'score': 0.327230326857725, 'start': 31} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-large-finetuned-squadv2
2021-05-20T00:59:58.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin" ]
mrm8488
233
transformers
--- language: en thumbnail: --- # SpanBERT large fine-tuned on SQuAD v2 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)). ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_squad.py \ --do_train \ --do_eval \ --model spanbert-large-cased \ --train_file train-v2.0.json \ --dev_file dev-v2.0.json \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 4 \ --max_seq_length 512 \ --doc_stride 128 \ --eval_metric best_f1 \ --output_dir squad2_output \ --version_2_with_negative \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | **88.7** (this) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-large-finetuned-squadv2", tokenizer="SpanBERT/spanbert-large-cased" ) qa_pipeline({ 'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero lately?" }) # Output: {'answer': 'very hard', 'end': 40, 'score': 0.9052708846768347, 'start': 31} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-large-finetuned-tacred
2021-05-20T01:01:51.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
148
transformers
--- language: en thumbnail: --- # SpanBERT large fine-tuned on TACRED [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [TACRED](https://nlp.stanford.edu/projects/tacred/) dataset by [them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution) ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Dataset 📚 [TACRED](https://nlp.stanford.edu/projects/tacred/) A large-scale relation extraction dataset with 106k+ examples over 42 TAC KBP relation types. ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_tacred.py \ --do_train \ --do_eval \ --data_dir <TACRED_DATA_DIR> \ --model spanbert-large-cased \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 10 \ --max_seq_length 128 \ --output_dir tacred_dir \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | **70.8** (this one) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/squeezebert-finetuned-squadv1
2020-12-11T21:55:22.000Z
[ "pytorch", "squeezebert", "question-answering", "en", "dataset:squad", "arxiv:2006.11316", "arxiv:2004.02984", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
21
transformers
--- language: en datasets: - squad --- # SqueezeBERT + SQuAD (v1.1) [squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) fine-tuned on [SQUAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task. ## Details of SqueezeBERT This model, `squeezebert-uncased`, is a pretrained model for the English language using a masked language modeling (MLM) and Sentence Order Prediction (SOP) objective. SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/). The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone. More about the model [here](https://arxiv.org/abs/2004.02984) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python /content/transformers/examples/question-answering/run_squad.py \ --model_type bert \ --model_name_or_path squeezebert/squeezebert-uncased \ --do_eval \ --do_train \ --do_lower_case \ --train_file /content/dataset/train-v1.1.json \ --predict_file /content/dataset/dev-v1.1.json \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 15 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/output_dir \ --overwrite_output_dir \ --save_steps 2000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **76.66** | | **F1** | **85.83** | Model Size: **195 MB** ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/squeezebert-finetuned-squadv1') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'Who did identified it ?' }) # Output: {'answer': 'scientists.', 'end': 106, 'score': 0.6988425850868225, 'start': 96} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/squeezebert-finetuned-squadv2
2020-12-11T21:55:26.000Z
[ "pytorch", "squeezebert", "question-answering", "en", "dataset:squad_v2", "arxiv:2006.11316", "arxiv:2004.02984", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "nbest_predictions_.json", "null_odds_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
446
transformers
--- language: en datasets: - squad_v2 --- # SqueezeBERT + SQuAD v2 [squeezebert-uncased](https://huggingface.co/squeezebert/squeezebert-uncased) fine-tuned on [SQUAD v2](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task. ## Details of SqueezeBERT This model, `squeezebert-uncased`, is a pretrained model for the English language using a masked language modeling (MLM) and Sentence Order Prediction (SOP) objective. SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/). The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone. More about the model [here](https://arxiv.org/abs/2004.02984) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ **SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python /content/transformers/examples/question-answering/run_squad.py \ --model_type bert \ --model_name_or_path squeezebert/squeezebert-uncased \ --do_train \ --do_eval \ --do_lower_case \ --train_file /content/dataset/train-v2.0.json \ --predict_file /content/dataset/dev-v2.0.json \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 15 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/output_dir \ --overwrite_output_dir \ --version_2_with_negative \ --save_steps 2000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **69.98** | | **F1** | **74.14** | Model Size: **195 MB** ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/squeezebert-finetuned-squadv2') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'Who did identified it ?' }) # Output: {'answer': 'scientists.', 'end': 106, 'score': 0.9768241047859192, 'start': 96} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-AESLC-summarization
2020-07-21T21:13:47.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
24
transformers
mrm8488/t5-base-finetuned-Reddit-TIFU-TLDR
2020-08-03T14:57:58.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
117
transformers
mrm8488/t5-base-finetuned-boolq
2020-08-13T18:07:07.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
212
transformers
mrm8488/t5-base-finetuned-break_data-question-retrieval
2020-12-11T21:55:29.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:break_data", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
28
transformers
--- language: en datasets: - break_data --- # T5-base fine-tuned on break_data / QDMR-high-level 📋➡️❓ [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on the [break_data](https://huggingface.co/nlp/viewer/?dataset=break_data&config=QDMR-high-level) dataset for **Question Retrieval from its decomposition**. It is the inverse process of [this model](https://huggingface.co/mrm8488/t5-base-finetuned-break_data). ## Details of T5 📜 ➡️ 📜 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Question Retrieval from its decomposition) - Dataset 📚 Break is a human-annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format. | Dataset | Split | # samples | | -------- | ----- | --------- | | break_data | train | 17503 | | break_data | valid | 3130 | Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/). ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The main change is in how the ```inputs``` and ```targets``` fed to the model are preprocessed: we treat it as a *paraphrasing task*. ## Model in Action 🚀 ```python # Tip: For now, install transformers from source from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-break_data-question-retrieval") model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-break_data-question-retrieval") def get_natural_question(decomposition): input_text = 'translate QDMRs to Natural Language %s </s>' % decomposition features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=64) return tokenizer.decode(output[0]) decomposition = "return the city that was the birthplace of Bernard Berrian ;return the city that was the home of Pablo Picasso ;return the city of both #1 and #2" # Ground Truth: What city was the birthplace of Bernard Berrian and the home of Pablo Picasso? get_natural_question(decomposition) # output: 'What city was the birthplace of Bernard Berrian and the home of Pablo Picasso?' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-break_data
2020-12-11T21:55:33.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:break_data", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
25
transformers
--- language: en datasets: - break_data --- # T5-base fine-tuned on break_data / QDMR-high-level ❓➡️📋 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on the [break_data](https://huggingface.co/nlp/viewer/?dataset=break_data&config=QDMR-high-level) dataset for **QDMRs**. ## Details of T5 📜 ➡️ 📜 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (QDMRs) - Dataset 📚 Break is a human-annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format. | Dataset | Split | # samples | | -------- | ----- | --------- | | break_data | train | 17503 | | break_data | valid | 3130 | Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/). ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The main change is in how the ```inputs``` and ```targets``` fed to the model are preprocessed: we treat it as a *paraphrasing task*. ## Model in Action 🚀 ```python # Tip: For now, install transformers from source from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-break_data") model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-break_data") def get_decomposition(question): input_text = "paraphrase: %s </s>" % question features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=32) return tokenizer.decode(output[0]) question = "The composer of Sands Theme plays what type of guitar?" get_decomposition(question) # output: 'return Sands Theme ;return composer of #1 ;return guitar that #2 plays' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-common_gen
2021-05-04T14:16:20.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:common_gen", "arxiv:1910.10683", "arxiv:1911.03705", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "log_history.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
727
transformers
--- language: en datasets: - common_gen widget: - text: "tree plant ground hole dig" --- # T5-base fine-tuned on CommonGen [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [CommonGen](https://inklab.usc.edu/CommonGen/index.html) for *Generative Commonsense Reasoning*. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the dataset 📚 CommonGen is a constrained text generation task, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts. CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total. | Dataset | Split | # samples | | -------- | ----- | --------- | | common_gen | train | 67389 | | common_gen | valid | 4018 | | common_gen | test | 1497 | ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). ## Metrics 📋 | Metric | Score | |--------|-------| |ROUGE-2 | 17.10 | |ROUGE-L | 39.47 | |BLEU | WIP | The metrics above slightly improve the results shown in the [paper](https://arxiv.org/abs/1911.03705) for the same model and metrics. ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-common_gen") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-common_gen") def gen_sentence(words, max_length=32): input_text = words features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) words = "tree plant ground hole dig" gen_sentence(words) # output: digging a hole in the ground to plant trees ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/T5_base_finetuned_common_gen.ipynb) > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-disaster-tweets
2020-07-06T20:09:18.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
17
transformers
mrm8488/t5-base-finetuned-e2m-intent
2020-12-11T21:55:39.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:event2Mind", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
244
transformers
--- language: en datasets: - event2Mind --- # T5-base fine-tuned on event2Mind for **Intent Prediction** 🤔 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [event2Mind](https://huggingface.co/nlp/viewer/?dataset=event2Mind) dataset for **Intent Prediction**. ## Details of T5 📜 ➡️ 📜 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Intent Prediction) - Dataset 📚 Dataset ID: ```event2Mind``` from [Huggingface/NLP](https://github.com/huggingface/nlp) | Dataset | Split | # samples | | -------- | ----- | --------- | | event2Mind | train | 46472 | | event2Mind | valid | 1960 | Events without **intent** were not used! Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). ## Model in Action 🚀 ```python # Tip: By now, install transformers from source from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-e2m-intent") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-e2m-intent") def get_intent(event, max_length=16): input_text = "%s </s>" % event features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) event = "PersonX takes PersonY home" get_intent(event) # output: 'to be helpful' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-emotion
2021-05-18T23:18:42.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:emotion", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
3,885
transformers
--- language: en datasets: - emotion widget: - text: "I wish you were here but it is impossible" --- # T5-base fine-tuned for Emotion Recognition 😂😢😡😃😯 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on the [emotion recognition](https://github.com/dair-ai/emotion_dataset) dataset for the **Emotion Recognition** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Emotion Recognition) - Dataset 📚 [Elvis Saravia](https://twitter.com/omarsar0) has gathered a great [dataset](https://github.com/dair-ai/emotion_dataset) for emotion recognition. It allows classifying the text into one of the following **6** emotions: - sadness 😢 - joy 😃 - love 🥰 - anger 😡 - fear 😱 - surprise 😯 ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credit goes to him! ## Test set metrics 🧾 | |precision | recall | f1-score |support| |----------|----------|---------|----------|-------| |anger | 0.93| 0.92| 0.93| 275| |fear | 0.91| 0.87| 0.89| 224| |joy | 0.97| 0.94| 0.95| 695| |love | 0.80| 0.91| 0.85| 159| |sadness | 0.97| 0.97| 0.97| 521| |surprise | 0.73| 0.89| 0.80| 66| |accuracy | | | 0.93| 2000| |macro avg| 0.89| 0.92| 0.90| 2000| |weighted avg| 0.94| 0.93| 0.93| 2000| ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-emotion") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-emotion") def get_emotion(text): input_ids = tokenizer.encode(text + '</s>', return_tensors='pt') output = model.generate(input_ids=input_ids, max_length=2) dec = [tokenizer.decode(ids) for ids in output] label = dec[0] return label get_emotion("i feel as if i havent blogged in ages are at least truly blogged i am doing an update cute") # Output: 'joy' get_emotion("i have a feeling i kinda lost my best friend") # Output: 'sadness' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-imdb-sentiment
2020-12-11T21:55:46.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:imdb", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
355
transformers
--- language: en datasets: - imdb --- # T5-base fine-tuned for Sentiment Analysis 🎞️👍👎 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on the [IMDB](https://huggingface.co/datasets/imdb) dataset for the **Sentiment Analysis** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67) ## Details of the downstream task (Sentiment analysis) - Dataset 📚 [IMDB](https://huggingface.co/datasets/imdb) This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of **25,000** highly polar movie reviews for training, and **25,000** for testing. ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credit goes to him! ## Test set metrics 🧾 | |precision | recall | f1-score |support| |----------|----------|---------|----------|-------| |negative | 0.95 | 0.95| 0.95| 12500| |positive | 0.95 | 0.95| 0.95| 12500| |accuracy| | | 0.95| 25000| |macro avg| 0.95| 0.95| 0.95| 25000| |weighted avg| 0.95| 0.95| 0.95 | 25000| ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-imdb-sentiment") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-imdb-sentiment") def get_sentiment(text): input_ids = tokenizer.encode(text + '</s>', return_tensors='pt') output = model.generate(input_ids=input_ids, max_length=2) dec = [tokenizer.decode(ids) for ids in output] label = dec[0] return label get_sentiment("I dislike a lot that film") # Output: 'negative' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-math-calculus-differentiate
2020-08-24T20:58:10.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
23
transformers
mrm8488/t5-base-finetuned-math-linear-algebra-1d
2020-08-18T17:40:51.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
28
transformers
mrm8488/t5-base-finetuned-math-linear-algebra-2d
2020-08-19T16:39:10.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
18
transformers
mrm8488/t5-base-finetuned-math-list-prime-factors
2020-08-28T12:58:25.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
11
transformers
mrm8488/t5-base-finetuned-math-qa-test
2020-06-01T09:17:23.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
32
transformers
mrm8488/t5-base-finetuned-math-seq-next-term
2020-08-20T14:51:11.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
15
transformers
mrm8488/t5-base-finetuned-multinews-512
2020-08-31T14:09:11.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
13
transformers
mrm8488/t5-base-finetuned-news-titles-classification
2020-06-22T17:33:24.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
17
transformers
mrm8488/t5-base-finetuned-qasc-sc
2020-11-01T10:04:34.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
14
transformers
mrm8488/t5-base-finetuned-qasc
2020-12-11T21:55:50.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:qasc", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
23
transformers
--- language: en datasets: - qasc --- # T5-base fine-tuned on QASC [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [QASC](https://allenai.org/data/qasc) for **QA** (via *sentence composition*) downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the dataset 📚 **Question Answering via Sentence Composition** (QASC) is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences. ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The **context** passed to the *encoder* is the combination of the 2 *facts* (`fact1` and `fact2`). The **question** is just the `formatted_question` field. The **answer** passed to the *decoder* is the`text` right answer instead of the `label` (A, B, C... See `choices` field). More details about the dataset format/fields [here](https://huggingface.co/nlp/viewer/?dataset=qasc) ## Metrics on validation set 📋 | Metric | Score | |--------|-------| |Accuracy (EM) | **97.73**| ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-qasc") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-qasc") def get_response(question, context, max_length=64): input_text = 'question: %s context: %s' % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) fact_1 = 'a watch is used for measuring time' fact_2 = 'Times are measured in seconds.' context = fact_1 + ' ' + fact_2 question = 'What can be used to measure seconds? 
(A) Watch (B) seconds (C) fluid (D) Ruler (E) goggles (F) glasses (G) Drill (H) Scale' get_response(question, context) # output: 'Watch' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-quarel
2020-12-11T21:55:53.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:quarel", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
13
transformers
--- language: en datasets: - quarel --- # T5-base fine-tuned on QuaRel [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [QuaRel](https://allenai.org/data/quarel) for **QA** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the dataset 📚 **QuaRel**: *[A Dataset and Models for Answering Questions about Qualitative Relationships](https://www.semanticscholar.org/paper/QuaRel%3A-A-Dataset-and-Models-for-Answering-about-Tafjord-Clark/51004bc6461a572e1189a0e3b32b441155d760ce)* Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, "Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. Which carpet has more friction?" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. 
This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The **context** passed to the *encoder* is the `logical_form_pretty` field (example: `qrel(speed, higher, ice) -> qrel(smoothness, higher, snow) ; qrel(smoothness, higher, ice`) . The **question** is just the `question` field. The **answer** passed to the *decoder* is obtained from `question`using the `answer_index` field. More details about the dataset format/fields [here](https://huggingface.co/nlp/viewer/?dataset=quarel) ## Metrics on validation set 📋 | Metric | Score | |--------|-------| |Accuracy (EM) | **67.98**| ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-quarel") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-quarel") def get_response(question, context, max_length=32): input_text = 'question: %s context: %s' % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) question = 'As the train left the station it crossed the bridge and being farther away it looked (A) larger (B) smaller' context = 'qrel(distance, higher, Train on a bridge) -> qrel(apparentSize, higher, Train on a bridge) ; qrel(apparentSize, lower, Train on a bridge)' get_response(question, context) # output: 'smaller' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-quartz
2020-12-11T21:55:56.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:quartz", "arxiv:1910.10683", "transformers", "question-answering", "pipeline_tag:question-answering", "text2text-generation" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
89
transformers
--- language: en datasets: - quartz pipeline_tag: question-answering --- # T5-base fine-tuned on QuaRTz [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [QuaRTz](https://allenai.org/data/quartz) for **QA** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the dataset 📚 **QuaRTz** is a crowdsourced dataset of 3864 multiple-choice questions about open domain qualitative relationships. Each question is paired with one of 405 different background sentences (sometimes short paragraphs). The dataset is split into: |Set | Samples| |-----|--------| |Train | 2696 | |Valid | 384 | |Test | 784 | ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The *question*, *context* (`para` field) and *options* (`choices` field) are concatenated and passed to the **encoder**. The **decoder** receives the right *answer* (obtained via the `answerKey` field). More details about the dataset fields/format [here](https://huggingface.co/nlp/viewer/?dataset=quartz) ## Results 📋 |Set | Metric | Score | |-----|--------|-------| |Validation | Accuracy (EM) | **83.59**| |Test | Accuracy (EM) | **81.50**| ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-quartz") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-quartz") def get_response(question, fact, opts, max_length=16): input_text = 'question: %s context: %s options: %s' % (question, fact, opts) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) fact = 'The sooner cancer is detected the easier it is to treat.'
question = 'John was a doctor in a cancer ward and knew that early detection was key. The cancer being detected quickly makes the cancer treatment' opts = 'Easier, Harder' get_response(question, fact, opts) # output: 'Easier' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-question-generation-ap
2020-12-26T12:41:16.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:squad", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
2,859
transformers
--- language: en datasets: - squad --- # T5-base fine-tuned on SQuAD for **Question Generation** [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) for **Question Generation** by just prepending the *answer* to the *context*. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ Dataset ID: ```squad``` from [Huggingface/NLP](https://github.com/huggingface/nlp) | Dataset | Split | # samples | | -------- | ----- | --------- | | squad | train | 87599 | | squad | valid | 10570 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('squad', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('squad', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). He has also done great research on [**Question Generation**](https://github.com/patil-suraj/question_generation) ## Model in Action 🚀 ```python # Tip: for now, install transformers from source from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap") def get_question(answer, context, max_length=64): input_text = "answer: %s context: %s </s>" % (answer, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) context = "Manuel has created RuPERTa-base with the support of HF-Transformers and Google" answer = "Manuel" get_question(answer, context) # output: question: Who created the RuPERTa-base?
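# Another illustrative pair (hypothetical context; the exact generated question may vary):
context = "Hugging Face hosts thousands of community models for many NLP tasks"
answer = "thousands of community models"
get_question(answer, context)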
``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-quoref
2020-11-04T19:59:31.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
9
transformers
mrm8488/t5-base-finetuned-race
2020-11-07T02:18:58.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
57
transformers
mrm8488/t5-base-finetuned-sarcasm-twitter
2020-12-11T21:56:03.000Z
[ "pytorch", "t5", "seq2seq", "en", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
1,461
transformers
--- language: en --- # T5-base fine-tuned for Sarcasm Detection 🙄 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on [ Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm) for **Sequence classification (as text generation)** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Sequence Classification as Text generation) - Dataset 📚 [ Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm) For Twitter, training and testing datasets are provided for the sarcasm detection task in jsonlines format. Each line contains a JSON object with the following fields : - ***label*** : `SARCASM` or `NOT_SARCASM` - **NOT** in test data - ***id***: String identifier for sample. This id will be required when making submissions. - **ONLY** in test data - ***response*** : the response Tweet (the potentially sarcastic reply) - ***context*** : the conversation context of the ***response*** - Note, the context is an ordered list of dialogue, i.e., if the context contains three elements, `c1`, `c2`, `c3`, in that order, then `c2` is a reply to `c1` and `c3` is a reply to `c2`. Further, if the sarcastic response is `r`, then `r` is a reply to `c3`. For instance, for the following training example : `"label": "SARCASM", "response": "Did Kelly just call someone else messy? Baaaahaaahahahaha", "context": ["X is looking a First Lady should . #classact", "didn't think it was tailored enough it looked messy"]` The response tweet, "Did Kelly..." is a reply to its immediate context "didn't think it was tailored..." which is a reply to "X is looking...". Your goal is to predict the label of the "response" while also using the context (i.e., the immediate or the full context). ***Dataset size statistics*** : | | Train | Val | Test | |---------|-------|------|------| | Twitter | 4050 | 450 | 500 | The dataset was preprocessed to convert it to a **text-to-text** format (classification as a generation task); a minimal sketch of this conversion is shown below.
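The snippet below is only an illustration of that conversion, not the exact preprocessing script used for training; the file path is a placeholder, and the field names follow the jsonlines format described above. The target words (`derison` / `normal`) match the labels the fine-tuned model generates at inference time.

```python
import json

def to_text2text(example):
    # Join the ordered context turns with the response so the model sees the
    # whole conversation, and map the label to the word the decoder must generate.
    source = " ".join(example["context"] + [example["response"]])
    target = "derison" if example["label"] == "SARCASM" else "normal"
    return {"input_text": source + " </s>", "target_text": target + " </s>"}

# Placeholder path: point it to your local copy of the Twitter training file.
with open("twitter_training.jsonl") as f:
    pairs = [to_text2text(json.loads(line)) for line in f]

print(pairs[0])
```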
## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! ## Test set metrics 🧾 | | precision| recall | f1-score |support| |----------|----------|---------|----------|-------| | derison | 0.84 | 0.80 | 0.82 | 246 | | normal | 0.82 | 0.85 | 0.83 | 254 | |accuracy | | | 0.83| 500| |macro avg| 0.83| 0.83| 0.83| 500| |weighted avg| 0.83| 0.83| 0.83| 500| ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter") def eval_conversation(text): input_ids = tokenizer.encode(text + '</s>', return_tensors='pt') output = model.generate(input_ids=input_ids, max_length=3) dec = [tokenizer.decode(ids) for ids in output] label = dec[0] return label # To match the training data, user mentions in the tweets should be replaced with the @USER token and URLs with the URL token. twit1 = "Trump just suspended the visa program that allowed me to move to the US to start @USER!" + " Unfortunately, I won’t be able to vote in a few months but if you can, please vote him out, " + "he's destroying what made America great in so many different ways!" twit2 = "@USER @USER @USER We have far more cases than any other country, " + "so leaving remote workers in would be disastrous. Makes Trump sense." twit3 = "My worry is that i wouldn’t be surprised if half the country actually agrees with this move..." me = "Trump doing so??? It must be a mistake... XDDD" conversation = twit1 + twit2 eval_conversation(conversation) #Output: 'derison' conversation = twit1 + twit3 eval_conversation(conversation) #Output: 'normal' conversation = twit1 + me eval_conversation(conversation) #Output: 'derison' # We will get 'normal' when sarcasm is not detected and 'derison' when detected ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-spa-squadv1
2020-05-26T13:23:11.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
76
transformers
mrm8488/t5-base-finetuned-span-sentiment-extraction
2020-12-11T21:56:06.000Z
[ "pytorch", "t5", "seq2seq", "en", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
1,083
transformers
--- language: en thumbnail: --- # T5-base fine-tuned for Sentiment Span Extraction All credits to [Lorenzo Ampil](https://twitter.com/AND__SO) [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on [Tweet Sentiment Extraction Dataset](https://www.kaggle.com/c/tweet-sentiment-extraction) for **Span Sentiment Extraction** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ## Details of the downstream task (Span Sentiment Extraction) - Dataset 📚 [Tweet Sentiment Extraction Dataset](https://www.kaggle.com/c/tweet-sentiment-extraction) "My ridiculous dog is amazing." [sentiment: positive] With all of the tweets circulating every second it is hard to tell whether the sentiment behind a specific tweet will impact a company, or a person's, brand for being viral (positive), or devastate profit because it strikes a negative tone. Capturing sentiment in language is important in these times where decisions and reactions are created and updated in seconds. But, which words actually lead to the sentiment description? In this competition you will need to pick out the part of the tweet (word or phrase) that reflects the sentiment. Help build your skills in this important area with this broad dataset of tweets. Work on your technique to grab a top spot in this competition. What words in tweets support a positive, negative, or neutral sentiment? How can you help make that determination using machine learning tools? In this competition we've extracted support phrases from Figure Eight's Data for Everyone platform. The dataset is titled Sentiment Analysis: Emotion in Text tweets with existing sentiment labels, used here under creative commons attribution 4.0. international licence. Your objective in this competition is to construct a model that can do the same - look at the labeled sentiment for a given tweet and figure out what word or phrase best supports it. Disclaimer: The dataset for this competition contains text that may be considered profane, vulgar, or offensive. 
| Dataset | Split | # samples | | -------- | ----- | --------- | | TSE | train | 23907 | | TSE | eval | 3573 | ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) created by [Lorenzo Ampil](https://github.com/enzoampil), so all credits to him! ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-span-sentiment-extraction") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-span-sentiment-extraction") def get_sentiment_span(text): input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True) # Batch size 1 generated_ids = model.generate(input_ids=input_ids, num_beams=1, max_length=80).squeeze() predicted_span = tokenizer.decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) return predicted_span get_sentiment_span("question: negative context: My bike was put on hold...should have known that.... argh total bummer") # output: 'argh total bummer' get_sentiment_span("question: positive context: On the monday, so i wont be able to be with you! i love you") # output: 'i love you' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-squadv2
2020-12-11T21:56:10.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:squad_v2", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
93
transformers
--- language: en datasets: - squad_v2 --- # T5-base fine-tuned on SQuAD v2 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ Dataset ID: ```squad_v2``` from [Huggingface/NLP](https://github.com/huggingface/nlp) | Dataset | Split | # samples | | -------- | ----- | --------- | | squad_v2 | train | 130319 | | squad_v2 | valid | 11873 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) ## Results 📝 | Metric | # Value | | ------ | --------- | | **EM** | **77.64** | | **F1** | **81.32** | ## Model in Action 🚀 ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2") model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-squadv2") def get_answer(question, context): input_text = "question: %s context: %s" % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) context = "Manuel have created RuPERTa-base with the support of HF-Transformers and Google" question = "Who has supported Manuel?" get_answer(question, context) # output: 'HF-Transformers and Google' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-summarize-news
2020-12-11T21:56:13.000Z
[ "pytorch", "t5", "seq2seq", "en", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
1,780
transformers
--- language: en thumbnail: --- # T5-base fine-tuned for News Summarization 📖✏️🧾 All credits to [Abhishek Kumar Mishra](https://github.com/abhimishra91) [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on [News Summary](https://www.kaggle.com/sunnysai12345/news-summary) dataset for **summarization** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Summarization) - Dataset 📚 [News Summary](https://www.kaggle.com/sunnysai12345/news-summary) The dataset consists of **4515 examples** and contains Author_name, Headlines, Url of Article, Short text and Complete Article. The summaries were gathered from Inshorts, and the full articles were scraped from The Hindu, The Indian Times and The Guardian. The time period ranges from February to August 2017. ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) created by [Abhishek Kumar Mishra](https://github.com/abhimishra91), so all credits to him! I also trained the model for more epochs (6). ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-summarize-news") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-summarize-news") def summarize(text, max_length=150): input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True) generated_ids = model.generate(input_ids=input_ids, num_beams=2, max_length=max_length, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids] return preds[0] ``` Given the following article from **NYT** (2020/06/09) with title *George Floyd’s death energized a movement.
He will be buried in Houston today*: After the sound and the fury, weeks of demonstrations and anguished calls for racial justice, the man whose death gave rise to an international movement, and whose last words — “I can’t breathe” — have been a rallying cry, will be laid to rest on Tuesday at a private funeral in Houston.George Floyd, who was 46, will then be buried in a grave next to his mother’s.The service, scheduled to begin at 11 a.m. at the Fountain of Praise church, comes after five days of public memorials in Minneapolis, North Carolina and Houston and two weeks after a Minneapolis police officer was caught on video pressing his knee into Mr. Floyd’s neck for nearly nine minutes before Mr. Floyd died. That officer, Derek Chauvin, has been charged with second-degree murder and second-degree manslaughter. His bail was set at $1.25 million in a court appearance on Monday. The outpouring of anger and outrage after Mr. Floyd’s death — and the speed at which protests spread from tense, chaotic demonstrations in the city where he died to an international movement from Rome to Rio de Janeiro — has reflected the depth of frustration borne of years of watching black people die at the hands of the police or vigilantes while calls for change went unmet. ``` summarize('After the sound and the fury, weeks of demonstrations and anguished calls for racial justice, the man whose death gave rise to an international movement, and whose last words — “I can’t breathe” — have been a rallying cry, will be laid to rest on Tuesday at a private funeral in Houston.George Floyd, who was 46, will then be buried in a grave next to his mother’s.The service, scheduled to begin at 11 a.m. at the Fountain of Praise church, comes after five days of public memorials in Minneapolis, North Carolina and Houston and two weeks after a Minneapolis police officer was caught on video pressing his knee into Mr. Floyd’s neck for nearly nine minutes before Mr. Floyd died. That officer, Derek Chauvin, has been charged with second-degree murder and second-degree manslaughter. His bail was set at $1.25 million in a court appearance on Monday. The outpouring of anger and outrage after Mr. Floyd’s death — and the speed at which protests spread from tense, chaotic demonstrations in the city where he died to an international movement from Rome to Rio de Janeiro — has reflected the depth of frustration borne of years of watching black people die at the hands of the police or vigilantes while calls for change went unmet.', 80) ``` We would obtain: At a private funeral in Houston. Floyd, who was 46 years old when his death occurred, will be buried next to the grave of his mother. A Minnesota police officer was caught on video pressing his knee into Mr's neck for nearly nine minutes before his death. The officer has been charged with second-degree manslaughter and $1.2 million bail is set at > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-swag
2020-06-18T01:12:47.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
14
transformers
mrm8488/t5-base-finetuned-tab_fact
2021-01-28T20:41:17.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
7
transformers
mrm8488/t5-base-finetuned-wikiSQL-sql-to-en
2020-12-11T21:56:17.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:wikisql", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
44
transformers
--- language: en datasets: - wikisql --- # T5-base fine-tuned on WikiSQL for SQL to English translation [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [WikiSQL](https://github.com/salesforce/WikiSQL) for **SQL** to **English** **translation** task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the Dataset 📚 Dataset ID: ```wikisql``` from [Huggingface/NLP](https://huggingface.co/nlp/viewer/?dataset=wikisql) | Dataset | Split | # samples | | -------- | ----- | --------- | | wikisql | train | 56355 | | wikisql | valid | 14436 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('wikisql', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('wikisql', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en") def get_explanation(query): input_text = "translate Sql to English: %s </s>" % query features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) query = "SELECT COUNT Params form model where location=HF-Hub" get_explanation(query) # output: 'How many parameters form model for HF-hub?' 
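# Another illustrative query (hypothetical table and columns; the generated wording may vary):
query = "SELECT name FROM models WHERE downloads > 1000"
get_explanation(query)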
``` Play with it in a Colab: <img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-wikiSQL
2020-12-11T21:56:20.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:wikisql", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
3,213
transformers
--- language: en datasets: - wikisql --- # T5-base fine-tuned on WikiSQL [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [WikiSQL](https://github.com/salesforce/WikiSQL) for **English** to **SQL** **translation**. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the Dataset 📚 Dataset ID: ```wikisql``` from [Huggingface/NLP](https://huggingface.co/nlp/viewer/?dataset=wikisql) | Dataset | Split | # samples | | -------- | ----- | --------- | | wikisql | train | 56355 | | wikisql | valid | 14436 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('wikisql', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('wikisql', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL") def get_sql(query): input_text = "translate English to SQL: %s </s>" % query features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) query = "How many models were finetuned using BERT as base model?" get_sql(query) # output: 'SELECT COUNT Model fine tuned FROM table WHERE Base model = BERT' ``` Other examples from validation dataset: ![validation examples](https://pbs.twimg.com/media/Ec5vaG5XsAINty_?format=png&name=900x900) > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-AESLC-summarization
2020-07-22T08:42:23.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
14
transformers
mrm8488/t5-small-finetuned-boolq
2020-08-13T18:47:32.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
61
transformers
mrm8488/t5-small-finetuned-common_gen
2020-10-19T11:14:16.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
14
transformers
mrm8488/t5-small-finetuned-emotion
2020-12-11T21:56:24.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:emotion", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
441
transformers
--- language: en datasets: - emotion --- # T5-small fine-tuned for Emotion Recognition 😂😢😡😃😯 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on [emotion recognition](https://github.com/dair-ai/emotion_dataset) dataset for **Emotion Recognition** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Emotion Recognition) - Dataset 📚 [Elvis Saravia](https://twitter.com/omarsar0) has gathered a great [dataset](https://github.com/dair-ai/emotion_dataset) for emotion recognition. It allows you to classify text into one of the following **6** emotions: - sadness 😢 - joy 😃 - love 🥰 - anger 😡 - fear 😱 - surprise 😯 ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!
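For reference, here is a minimal sketch of how each example can be cast to the text-to-text format T5 expects. It is an illustration only, not the author's exact preprocessing, and the integer-to-name mapping is assumed to follow the label order listed above.

```python
# Emotion names in the (assumed) order of the dataset's integer labels.
label_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def to_text2text(example):
    # T5 is text-to-text: the raw tweet is the encoder input and the emotion
    # word is the short target sequence the decoder learns to generate.
    return {
        "input_text": example["text"] + " </s>",
        "target_text": label_names[example["label"]] + " </s>",
    }

print(to_text2text({"text": "i have a feeling i kinda lost my best friend", "label": 0}))
# With this mapping, label 0 yields the target 'sadness </s>'.
```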
## Test set metrics 🧾 | |precision | recall | f1-score |support| |----------|----------|---------|----------|-------| |anger | 0.92| 0.93| 0.92| 275| |fear | 0.90| 0.90| 0.90| 224| |joy | 0.97| 0.91| 0.94| 695| |love | 0.75| 0.89| 0.82| 159| |sadness | 0.96| 0.97| 0.96| 581| |surprise | 0.73| 0.80| 0.76| 66| |accuracy | | | 0.92| 2000| |macro avg| 0.87| 0.90| 0.88| 2000| |weighted avg| 0.93| 0.92| 0.92| 2000| Confusion Matrix ![CM](https://i.imgur.com/JBtAwPx.png) ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-emotion") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-emotion") def get_emotion(text): input_ids = tokenizer.encode(text + '</s>', return_tensors='pt') output = model.generate(input_ids=input_ids, max_length=2) dec = [tokenizer.decode(ids) for ids in output] label = dec[0] return label get_emotion("i feel as if i havent blogged in ages are at least truly blogged i am doing an update cute") # Output: 'joy' get_emotion("i have a feeling i kinda lost my best friend") # Output: 'sadness' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-imdb-sentiment
2020-12-11T21:56:27.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:imdb", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mrm8488
44
transformers
---
language: en
datasets:
- imdb
---

# T5-small fine-tuned for Sentiment Analysis 🎞️👍👎

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on the [IMDB](https://huggingface.co/datasets/imdb) dataset for the **Sentiment Analysis** downstream task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)

## Details of the downstream task (Sentiment analysis) - Dataset 📚

[IMDB](https://huggingface.co/datasets/imdb) is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. It provides a set of **25,000** highly polar movie reviews for training, and **25,000** for testing.

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!
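For reference, the two IMDB splits can be loaded with the Huggingface `nlp` library, as in the other cards of this collection; a minimal sketch (the fine-tuning preprocessing itself lives in the Colab notebook linked above):

```python
import nlp

train_dataset = nlp.load_dataset('imdb', split=nlp.Split.TRAIN)  # 25,000 reviews
test_dataset = nlp.load_dataset('imdb', split=nlp.Split.TEST)    # 25,000 reviews

print(train_dataset[0])
# Each example has a "text" field with the review and a binary "label" field
# (0 = negative, 1 = positive).
```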
## Test set metrics 🧾

| |precision | recall | f1-score |support|
|----------|----------|---------|----------|-------|
|negative | 0.92 | 0.93| 0.92| 12500|
|positive | 0.93 | 0.92| 0.92| 12500|
|accuracy| | | 0.92| 25000|
|macro avg| 0.92| 0.92| 0.92| 25000|
|weighted avg| 0.92| 0.92| 0.92| 25000|

## Model in Action 🚀

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-imdb-sentiment")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-imdb-sentiment")

def get_sentiment(text):
  input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')
  output = model.generate(input_ids=input_ids, max_length=2)
  dec = [tokenizer.decode(ids) for ids in output]
  label = dec[0]
  return label

get_sentiment("I dislike a lot that film")  # Output: 'negative'
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-quora-for-paraphrasing
2020-12-11T21:56:30.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:quora", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
1,598
transformers
---
language: en
datasets:
- quora
---

# T5-small fine-tuned on Quora question pair dataset for Question Paraphrasing ❓↔️❓

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on the [Quora question pair](https://huggingface.co/nlp/viewer/?dataset=quora) dataset for the **Question Paraphrasing** task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the downstream task (Question Paraphrasing) - Dataset 📚❓↔️❓

Dataset ID: ```quora``` from [Huggingface/NLP](https://github.com/huggingface/nlp)

| Dataset | Split | # samples |
| -------- | ----- | --------- |
| quora | train | 404290 |
| quora after filter repeated questions | train | 149263 |

Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/). A sketch of one possible way to perform such a filtering step is shown at the end of this card.

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)

## Model in Action 🚀

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-quora-for-paraphrasing")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-quora-for-paraphrasing")

def paraphrase(text, max_length=128):
  input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True)

  generated_ids = model.generate(input_ids=input_ids, num_return_sequences=5, num_beams=5, max_length=max_length, no_repeat_ngram_size=2, repetition_penalty=3.5, length_penalty=1.0, early_stopping=True)

  preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]

  return preds

preds = paraphrase("paraphrase: What is the best framework for dealing with a huge text dataset?")

for pred in preds:
  print(pred)

# Output:
'''
What is the best framework for dealing with a huge text dataset?
What is the best framework for dealing with a large text dataset?
What is the best framework to deal with a huge text dataset?
What are the best frameworks for dealing with a huge text dataset?
What is the best framework for dealing with huge text datasets?
'''
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
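As noted in the dataset table above, repeated questions were filtered out of the Quora training split before fine-tuning. The snippet below is a purely illustrative sketch of such a filtering step, not the author's exact preprocessing; it assumes the standard `quora` schema exposed by Huggingface/NLP, where each example looks like `{"questions": {"id": [...], "text": [q1, q2]}, "is_duplicate": ...}`.

```python
import nlp

train_dataset = nlp.load_dataset('quora', split=nlp.Split.TRAIN)

seen = set()

def keep_first_occurrence(example):
    # Keep a pair only if its first question has not been seen before
    q1 = example['questions']['text'][0]
    if q1 in seen:
        return False
    seen.add(q1)
    return True

filtered = train_dataset.filter(keep_first_occurrence)
print(len(train_dataset), '->', len(filtered))
```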
mrm8488/t5-small-finetuned-squadv1
2020-12-11T21:56:34.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:squad", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
42
transformers
---
language: en
datasets:
- squad
---

# T5-small fine-tuned on SQuAD

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [(small)](https://huggingface.co/t5-small) fine-tuned on [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

Dataset ID: ```squad``` from [Huggingface/NLP](https://github.com/huggingface/nlp)

| Dataset | Split | # samples |
| -------- | ----- | --------- |
| squad | train | 87599 |
| squad | valid | 10570 |

How to load it from [nlp](https://github.com/huggingface/nlp)

```python
train_dataset = nlp.load_dataset('squad', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('squad', split=nlp.Split.VALIDATION)
```

Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28)

## Results 📝

| Metric | # Value |
| ------ | --------- |
| **EM** | **76.95** |
| **F1** | **85.71** |

## Model in Action 🚀

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-squadv1")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-squadv1")

def get_answer(question, context):
  input_text = "question: %s context: %s </s>" % (question, context)
  features = tokenizer([input_text], return_tensors='pt')

  output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'])

  return tokenizer.decode(output[0])

context = "Manuel has created RuPERTa-base (a Spanish RoBERTa) with the support of HF-Transformers and Google"
question = "Who has supported Manuel?"

get_answer(question, context)

# output: 'HF-Transformers and Google'
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-squadv2
2021-05-06T16:25:28.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:squad_v2", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
31
transformers
--- language: en datasets: - squad_v2 --- # T5-small fine-tuned on SQuAD v2 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [(small)](https://huggingface.co/t5-small) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ Dataset ID: ```squad_v2``` from [Huggingface/NLP](https://github.com/huggingface/nlp) | Dataset | Split | # samples | | -------- | ----- | --------- | | squad_v2 | train | 130319 | | squad_v2 | valid | 11873 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28) ## Results 📝 | Metric | # Value | | ------ | --------- | | **EM** | **69.46** | | **F1** | **73.01** | ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-squadv2") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-squadv2") def get_answer(question, context): input_text = "question: %s context: %s </s>" % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) context = "Manuel has created RuPERTa-base (a Spanish RoBERTa) with the support of HF-Transformers and Google" question = "Who has supported Manuel?" 
get_answer(question, context) # output: 'HF-Transformers and Google' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-small-finetuned-translation-es-to-pt
2020-08-04T16:39:37.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
21
transformers
mrm8488/t5-small-finetuned-wikiSQL
2020-12-11T21:56:40.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:wikisql", "arxiv:1910.10683", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
375
transformers
--- language: en datasets: - wikisql --- # T5-small fine-tuned on WikiSQL [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on [WikiSQL](https://github.com/salesforce/WikiSQL) for **English** to **SQL** **translation**. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the Dataset 📚 Dataset ID: ```wikisql``` from [Huggingface/NLP](https://huggingface.co/nlp/viewer/?dataset=wikisql) | Dataset | Split | # samples | | -------- | ----- | --------- | | wikisql | train | 56355 | | wikisql | valid | 14436 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('wikisql', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('wikisql', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-wikiSQL") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-wikiSQL") def get_sql(query): input_text = "translate English to SQL: %s </s>" % query features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) query = "How many millions of params there are in HF-hub?" get_sql(query) # output: 'SELECT COUNT Params FROM table WHERE Location = HF-hub' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it
2020-12-11T21:56:44.000Z
[ "pytorch", "camembert", "question-answering", "it", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin" ]
mrm8488
53
transformers
---
language: it
---

# UmBERTo Wikipedia Uncased + italian SQuAD v1 📚 🧐 ❓

[UmBERTo-Wikipedia-Uncased](https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1) fine-tuned on the [Italian SQUAD v1 dataset](https://github.com/crux82/squad-it) for the **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

[UmBERTo](https://github.com/musixmatchresearch/umberto) is a Roberta-based Language Model trained on large Italian Corpora and uses two innovative approaches: SentencePiece and Whole Word Masking. UmBERTo-Wikipedia-Uncased is trained on a relatively small corpus (~7GB) extracted from Wikipedia-ITA.

## Details of the downstream task (Q&A) - Dataset 📚

[SQuAD](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) [Rajpurkar et al. 2016] is a large scale dataset for training of question answering systems on factoid questions. It contains more than 100,000 question-answer pairs about passages from 536 articles chosen from various domains of Wikipedia.

**SQuAD-it** is derived from the SQuAD dataset and it is obtained through semi-automatic translation of the SQuAD dataset into Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian. The dataset contains more than 60,000 question/answer pairs derived from the original English dataset.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type bert \
  --model_name_or_path 'Musixmatch/umberto-wikipedia-uncased-v1' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file '/content/dataset/SQuAD_it-train.json' \
  --predict_file '/content/dataset/SQuAD_it-test.json' \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 10 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content/drive/My\ Drive/umberto-uncased-finetuned-squadv1-it \
  --overwrite_output_dir \
  --save_steps 1000
```

With 10 epochs the model overfits the train dataset, so I evaluated the different checkpoints created during training (every 1000 steps) and chose the best one (in this case the one created at 17000 steps).

## Test set Results 🧾

| Metric | # Value |
| ------ | --------- |
| **EM** | **60.50** |
| **F1** | **72.41** |

```json
{
  'exact': 60.50729399395453,
  'f1': 72.4141113348361,
  'total': 7609,
  'HasAns_exact': 60.50729399395453,
  'HasAns_f1': 72.4141113348361,
  'HasAns_total': 7609,
  'best_exact': 60.50729399395453,
  'best_exact_thresh': 0.0,
  'best_f1': 72.4141113348361,
  'best_f1_thresh': 0.0
}
```

## Comparison ⚖️

| Model | EM | F1 score |
| ----- | -- | -------- |
| [DrQA-it trained on SQuAD-it](https://github.com/crux82/squad-it/blob/master/README.md#evaluating-a-neural-model-over-squad-it) | 56.1 | 65.9 |
| This one | 60.50 | 72.41 |
| [bert-italian-finedtuned-squadv1-it-alfa](https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa) | **62.51** | **74.16** |

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it')

QnA_pipeline({
    'context': 'Marco Aurelio era un imperatore romano che praticava lo stoicismo come filosofia di vita .',
    'question': 'Quale filosofia seguì Marco Aurelio ?'
})

# Output: {'answer': 'stoicismo', 'end': 65, 'score': 0.9477770241566028, 'start': 56}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/wav2vec2-large-xlsr-53-breton
2021-03-26T16:52:03.000Z
[ "pytorch", "wav2vec2", "br", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
mrm8488
9
transformers
--- language: br datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Breton Manuel Romero results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice br type: common_voice args: br metrics: - name: Test WER type: wer value: 46.49 --- # Wav2Vec2-Large-XLSR-53-breton Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Breton using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "br", split="test[:2%]"). processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Breton test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "br", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 46.49 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found ???
mrm8488/wav2vec2-large-xlsr-53-esperanto
2021-03-31T16:14:54.000Z
[ "pytorch", "wav2vec2", "eo", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
6
transformers
---
language: eo
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Esperanto Manuel Romero
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice eo
      type: common_voice
      args: eo
    metrics:
    - name: Test WER
      type: wer
      value: 15.86
---

# Wav2Vec2-Large-XLSR-53-esperanto

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "eo", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Esperanto test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "eo", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-esperanto")
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 15.86 %

## Training

The Common Voice `train` and `validation` datasets were used for training.

The script used for training can be found ???
mrm8488/wav2vec2-large-xlsr-53-euskera
2021-03-26T16:52:34.000Z
[ "pytorch", "wav2vec2", "eu", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
mrm8488
7
transformers
--- language: eu datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Euskera Manuel Romero results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice eu type: common_voice args: eu metrics: - name: Test WER type: wer value: 24.03 --- # Wav2Vec2-Large-XLSR-53-euskera Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Euskera using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "eu", split="test[:2%]"). processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Euskera test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "eu", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 24.03 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found ???
mrm8488/wav2vec2-large-xlsr-53-spanish
2021-03-26T17:02:31.000Z
[ "pytorch", "wav2vec2", "es", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
mrm8488
182
transformers
---
language: es
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Spanish Manuel Romero
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice es
      type: common_voice
      args: es
    metrics:
    - name: Test WER
      type: wer
      value: ???
---

# Wav2Vec2-Large-XLSR-53-Spanish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "es", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Spanish test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: %

## Training

The Common Voice `train` and `validation` datasets were used for training.

The script used for training can be found ???
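If you want to transcribe your own recordings rather than Common Voice clips, a minimal sketch could look like the one below. `audio.wav` is a placeholder for a local file; the waveform is resampled to the 16 kHz the model expects, and only the first channel is used in case the file is stereo.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-spanish")

# "audio.wav" is a placeholder for your own recording
speech_array, sampling_rate = torchaudio.load("audio.wav")

# Resample from the file's native rate to the 16 kHz the model expects
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
speech = resampler(speech_array)[0].numpy()  # first channel only

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```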
mrm8488/wav2vec2-large-xlsr-53-ukrainian
2021-03-26T16:50:49.000Z
[ "pytorch", "wav2vec2", "uk", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
mrm8488
11
transformers
--- language: uk datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Ukrainian Manuel Romero results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice uk type: common_voice args: uk metrics: - name: Test WER type: wer value: 41.82 --- # Wav2Vec2-Large-XLSR-53-ukrainian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "uk", split="test[:2%]"). processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Ukrainian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "uk", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 41.82 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found ???
mrm8488/wav2vec2-large-xlsr-53-ukranian
2021-03-25T06:37:35.000Z
[ "xlsr-fine-tuning-week" ]
[ ".gitattributes", "README.md" ]
mrm8488
0
--- tags: - xlsr-fine-tuning-week --- # Wav2Vec2
mrm8488/xlm-multi-finetuned-xquadv1
2020-12-11T21:56:48.000Z
[ "pytorch", "xlm", "question-answering", "multilingual", "arxiv:1901.07291", "arxiv:1910.11856", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "merges.txt", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
34
transformers
--- language: multilingual thumbnail: --- # [XLM](https://github.com/facebookresearch/XLM/) (multilingual version) fine-tuned for multilingual Q&A Released from `Facebook` together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau and fine-tuned on [XQuAD](https://github.com/deepmind/xquad) for multilingual (`11 different languages`) **Q&A** downstream task. ## Details of the language model('xlm-mlm-100-1280') [Language model](https://github.com/facebookresearch/XLM/#ii-cross-lingual-language-model-pretraining-xlm) | Languages | --------- | | 100 | It includes the following languages: <details> en-es-fr-de-zh-ru-pt-it-ar-ja-id-tr-nl-pl-simple-fa-vi-sv-ko-he-ro-no-hi-uk-cs-fi-hu-th-da-ca-el-bg-sr-ms-bn-hr-sl-zh_yue-az-sk-eo-ta-sh-lt-et-ml-la-bs-sq-arz-af-ka-mr-eu-tl-ang-gl-nn-ur-kk-be-hy-te-lv-mk-zh_classical-als-is-wuu-my-sco-mn-ceb-ast-cy-kn-br-an-gu-bar-uz-lb-ne-si-war-jv-ga-zh_min_nan-oc-ku-sw-nds-ckb-ia-yi-fy-scn-gan-tt-am </details> ## Details of the downstream task (multilingual Q&A) - Dataset Deepmind [XQuAD](https://github.com/deepmind/xquad) Languages covered: - Arabic: `ar` - German: `de` - Greek: `el` - English: `en` - Spanish: `es` - Hindi: `hi` - Russian: `ru` - Thai: `th` - Turkish: `tr` - Vietnamese: `vi` - Chinese: `zh` As the dataset is based on SQuAD v1.1, there are no unanswerable questions in the data. We chose this setting so that models can focus on cross-lingual transfer. We show the average number of tokens per paragraph, question, and answer for each language in the table below. The statistics were obtained using [Jieba](https://github.com/fxsjy/jieba) for Chinese and the [Moses tokenizer](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl) for the other languages. | | en | es | de | el | ru | tr | ar | vi | th | zh | hi | | --------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Paragraph | 142.4 | 160.7 | 139.5 | 149.6 | 133.9 | 126.5 | 128.2 | 191.2 | 158.7 | 147.6 | 232.4 | | Question | 11.5 | 13.4 | 11.0 | 11.7 | 10.0 | 9.8 | 10.7 | 14.8 | 11.5 | 10.5 | 18.7 | | Answer | 3.1 | 3.6 | 3.0 | 3.3 | 3.1 | 3.1 | 3.1 | 4.5 | 4.1 | 3.5 | 5.6 | Citation: <details> ```bibtex @article{Artetxe:etal:2019, author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama}, title = {On the cross-lingual transferability of monolingual representations}, journal = {CoRR}, volume = {abs/1910.11856}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.11856} } ``` </details> As XQuAD is just an evaluation dataset, I used Data augmentation techniques (scraping, neural machine translation, etc) to obtain more samples and split the dataset in order to have a train and test set. The test set was created in a way that contains the same number of samples for each language. Finally, I got: | Dataset | # samples | | ----------- | --------- | | XQUAD train | 50 K | | XQUAD test | 8 K | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. 
The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/distillation/run_squad_w_distillation.py) ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/xlm-multi-finetuned-xquadv1", tokenizer="mrm8488/xlm-multi-finetuned-xquadv1" ) # English qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) #Output: {'answer': 'Manuel', 'end': 6, 'score': 8.531880747878265e-05, 'start': 0} # Russian qa_pipeline({ 'context': "Мануэль Ромеро в последнее время почти не работал в репозитории hugginface / transformers", 'question': "Кто в последнее время усердно работал над обнимашками / трансформерами?" }) #Output: {'answer': 'работал в репозитории hugginface /','end': 76, 'score': 0.00012340750456964894, 'start': 42} ``` Try it on a Colab (*Do not forget to change the model and tokenizer path in the Colab if necessary*): <a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Try_mrm8488_xquad_finetuned_uncased_model.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mromero/prueba
2021-05-15T14:13:40.000Z
[]
[ ".gitattributes", "README.md", "vocab.json" ]
mromero
0
mrshu/wav2vec2-large-xlsr-slovene
2021-03-23T18:26:29.000Z
[ "pytorch", "wav2vec2", "sl", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "preprocessor_config.json", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
mrshu
9
transformers
--- language: sl datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Slovene results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice sl type: common_voice args: sl metrics: - name: Test WER type: wer value: 36.97 --- # Wav2Vec2-Large-XLSR-53-Slovene Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Slovene using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "sl", split="test[:2%]"). processor = Wav2Vec2Processor.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") model = Wav2Vec2ForCTC.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Slovene test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "sl", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") model = Wav2Vec2ForCTC.from_pretrained("mrshu/wav2vec2-large-xlsr-slovene") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\«\»\)\(\„\'\–\’\—]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 36.97 % ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/14uahdilysnFsiYniHxY9fyKjFGuYQe7p)
mudes/en-base
2021-05-20T01:03:44.000Z
[ "pytorch", "jax", "bert", "token-classification", "en", "arxiv:2102.09665", "arxiv:2104.04630", "transformers", "mudes", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "flax_model.msgpack", "model_args.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mudes
301
transformers
--- language: en tags: - mudes license: apache-2.0 --- # MUDES - {Mu}ltilingual {De}tection of Offensive {S}pans We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630). ## Usage You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed: ```bash pip install mudes ``` Then you can use the model like this: ```python from mudes.app.mudes_app import MUDESApp app = MUDESApp("en-base", use_cuda=False) print(app.predict_toxic_spans("You motherfucking cunt", spans=True)) ``` ## System Demonstration An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out in [here](http://rgcl.wlv.ac.uk/mudes/). ## Citing & Authors If you find this model helpful, feel free to cite our publications ```bibtex @inproceedings{ranasinghemudes, title={{MUDES: Multilingual Detection of Offensive Spans}}, author={Tharindu Ranasinghe and Marcos Zampieri}, booktitle={Proceedings of NAACL}, year={2021} } ``` ```bibtex @inproceedings{ranasinghe2021semeval, title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}}, author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex}, booktitle={Proceedings of SemEval}, year={2021} } ```
mudes/en-large
2021-05-20T18:36:06.000Z
[ "pytorch", "jax", "roberta", "token-classification", "en", "arxiv:2102.09665", "arxiv:2104.04630", "transformers", "mudes", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "flax_model.msgpack", "merges.txt", "model_args.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mudes
56
transformers
---
language: en
tags:
- mudes
license: apache-2.0
---

# MUDES - Multilingual Detection of Offensive Spans

We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on the Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630).

## Usage

You can use this model once you have [MUDES](https://github.com/TharinduDR/MUDES) installed:

```bash
pip install mudes
```

Then you can use the model like this:

```python
from mudes.app.mudes_app import MUDESApp

app = MUDESApp("en-large", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```

## System Demonstration

An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out [here](http://rgcl.wlv.ac.uk/mudes/).

## Citing & Authors

If you find this model helpful, feel free to cite our publications:

```bibtex
@inproceedings{ranasinghemudes,
 title={{MUDES: Multilingual Detection of Offensive Spans}},
 author={Tharindu Ranasinghe and Marcos Zampieri},
 booktitle={Proceedings of NAACL},
 year={2021}
}
```

```bibtex
@inproceedings{ranasinghe2021semeval,
 title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
 author={Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
 booktitle={Proceedings of SemEval},
 year={2021}
}
```
mudes/multilingual-base
2021-05-07T16:27:58.000Z
[ "pytorch", "xlm-roberta", "token-classification", "multilingual", "arxiv:2102.09665", "arxiv:2104.04630", "transformers", "mudes", "license:apache-2.0" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "model_args.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin" ]
mudes
85
transformers
---
language: multilingual
tags:
- mudes
license: apache-2.0
---

# MUDES - Multilingual Detection of Offensive Spans

We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on the Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630).

## Usage

You can use this model once you have [MUDES](https://github.com/TharinduDR/MUDES) installed:

```bash
pip install mudes
```

Then you can use the model like this:

```python
from mudes.app.mudes_app import MUDESApp

app = MUDESApp("multilingual-base", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```

## System Demonstration

An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out [here](http://rgcl.wlv.ac.uk/mudes/).

## Citing & Authors

If you find this model helpful, feel free to cite our publications:

```bibtex
@inproceedings{ranasinghemudes,
 title={{MUDES: Multilingual Detection of Offensive Spans}},
 author={Tharindu Ranasinghe and Marcos Zampieri},
 booktitle={Proceedings of NAACL},
 year={2021}
}
```

```bibtex
@inproceedings{ranasinghe2021semeval,
 title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
 author={Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
 booktitle={Proceedings of SemEval},
 year={2021}
}
```
mudes/multilingual-large
2021-04-15T22:36:53.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "model_args.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin" ]
mudes
22
transformers
# MUDES - Multilingual Detection of Offensive Spans

We provide state-of-the-art models to detect toxic spans in text. We have evaluated our models on the Toxic Spans task at SemEval 2021 (Task 5).

## Usage

You can use this model once you have [MUDES](https://github.com/TharinduDR/MUDES) installed:

```bash
pip install mudes
```

Then you can use the model like this:

```python
from mudes.app.mudes_app import MUDESApp

app = MUDESApp("multilingual-large", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```

## System Demonstration

An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out [here](http://rgcl.wlv.ac.uk/mudes/).

## Citing & Authors

If you find this model helpful, feel free to cite our publications:

```bibtex
@inproceedings{ranasinghemudes,
 title={{MUDES: Multilingual Detection of Offensive Spans}},
 author={Tharindu Ranasinghe and Marcos Zampieri},
 booktitle={Proceedings of NAACL},
 year={2021}
}
```

```bibtex
@inproceedings{ranasinghe2021semeval,
 title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
 author={Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
 booktitle={Proceedings of SemEval},
 year={2021}
}
```
mukherjeearnab/opsolBERT
2021-05-20T18:38:11.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json" ]
mukherjeearnab
6
transformers
hello
mukund/privbert
2021-06-15T19:36:42.000Z
[ "pytorch", "tf", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.json" ]
mukund
13
transformers
# PrivBERT

PrivBERT is a privacy policy language model. We pre-trained PrivBERT on ~1 million privacy policies starting from the pretrained RoBERTa model. The data is available at [https://privaseer.ist.psu.edu/data](https://privaseer.ist.psu.edu/data).

## Usage

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("mukund/privbert")
model = AutoModel.from_pretrained("mukund/privbert")
```

## License

If you use this model or the underlying dataset in research, you must cite the paper below.

```
Mukund Srinath, Shomir Wilson and C. Lee Giles. Privacy at Scale: Introducing the PrivaSeer Corpus of Web Privacy Policies. In Proc. ACL 2021.
```

For research, teaching, and scholarship purposes, the model is available under a CC BY-NC-SA license. Please contact us for any requests regarding commercial use.
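## Masked-token prediction (sketch)

The checkpoint is tagged as a masked language model, so a fill-mask pipeline should work for quick qualitative checks. The snippet below is a sketch, not an official example: it assumes the uploaded weights include the RoBERTa LM head and uses RoBERTa's `<mask>` token; the prompt sentence is made up.

```python
from transformers import pipeline

# Fill-mask over PrivBERT; the prompt is an illustrative privacy-policy style sentence.
fill = pipeline("fill-mask", model="mukund/privbert")

for prediction in fill("We may share your personal <mask> with third parties."):
    # Each prediction carries the candidate token and its score.
    print(prediction["token_str"], round(prediction["score"], 3))
```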
munggok/mt5-large-id-qgen-qa
2021-01-27T12:55:12.000Z
[ "pytorch", "t5", "seq2seq", "id", "dataset:Squad", "dataset:XQuad", "dataset:Tydiqa", "transformers", "license:mit", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
munggok
35
transformers
---
language: "id"
license: "mit"
datasets:
- Squad
- XQuad
- Tydiqa
widget:
- text: "I love you"
---

## Prefix use

For question answering, prepend the prefix `question: {question} context: {context}` to the input, e.g.

`question: siapa nama saya ? context: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa`

For question generation, use the prefix `generate questions:` followed by the context, e.g.

`generate questions: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa`

## Training data

- SQuAD
- XQuAD
- TyDi QA
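## Generation example (sketch)

The sketch below shows how the prefixes above might be fed to the standard `transformers` generation API. It is an illustration rather than an official example; the decoding settings (`max_length`, beam count) are arbitrary assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "munggok/mt5-large-id-qgen-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Question answering: prefix "question: ... context: ..."
qa_input = "question: siapa nama saya ? context: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa"
inputs = tokenizer(qa_input, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Question generation: prefix "generate questions: ..."
qg_input = "generate questions: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa"
inputs = tokenizer(qg_input, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```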
munggok/mt5-translate-en-id
2021-01-25T12:40:58.000Z
[ "pytorch", "t5", "seq2seq", "id", "dataset:OPUS", "dataset:CC-aligned", "transformers", "translation", "license:mit", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
munggok
50
transformers
---
tags:
- translation
language: "id"
license: "mit"
datasets:
- OPUS
- CC-aligned
widget:
- text: "I love you"
---

## MT5-Large-Translate-en-id

## Prefix use

Prepend the prefix `translate:` to the input to generate the translation, e.g. `translate: i love you`.

## Training data

- OPUS (OpenSubtitles and WikiMatrix)
- CCAligned (en-id sentence pairs)
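## Example (sketch)

A minimal sketch of running the model with the prefix described above; the decoding settings are assumptions rather than recommended values.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "munggok/mt5-translate-en-id"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# English -> Indonesian, using the "translate:" prefix described above.
inputs = tokenizer("translate: i love you", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```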
munggok/xlsr_indonesia
2021-03-18T09:53:35.000Z
[ "pytorch", "wav2vec2", "id", "dataset:common_voice", "transformers", "speech", "audio", "automatic-speech-recognition", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "preprocessor_config.json", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
munggok
9
transformers
---
language: id
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
- xlsr-fine-tuning-week
license: apache-2.0
---

## Evaluation on Common Voice ID Test

```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import torch
import re
import sys

model_name = "munggok/xlsr_indonesia"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'  # noqa: W605

model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)

ds = load_dataset("common_voice", "id", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch

ds = ds.map(map_to_array)

def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch

result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))

wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```

**Result**: 25.7 %
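## Usage (sketch)

The card only shows the evaluation loop; for quick transcription of a single clip, something along these lines should work. This is a sketch, not an official example: the file name is a placeholder, and the model expects 16 kHz audio, so the clip is resampled first.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "munggok/xlsr_indonesia"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Load one clip (placeholder path) and resample it to the 16 kHz the model expects.
speech, sampling_rate = torchaudio.load("example.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```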
murali1996/bert-base-cased-spell-correction
2021-05-20T01:04:57.000Z
[ "pytorch", "jax", "bert", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
murali1996
274
transformers
`bert-base-cased` trained for spelling correction. See [neuspell](https://github.com/neuspell/neuspell) repository for more details about training and evaluating the model.
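For quick experimentation, the checker can be used through the neuspell toolkit. The snippet below is a sketch following the conventions of the neuspell README rather than a verified recipe; the class name, download step and example sentence are assumptions, so check the linked repository for the exact API.

```python
# pip install neuspell
from neuspell import BertChecker

checker = BertChecker()       # BERT-based spell checker backed by this kind of model
checker.from_pretrained()     # downloads the pretrained checker weights
print(checker.correct("I luk foward to receving your reply"))
```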
mushroomccc/test
2021-04-10T21:38:09.000Z
[]
[ ".gitattributes" ]
mushroomccc
0
mustafabaris/tr_kg_pos_conllu_bert
2021-05-20T01:07:02.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers" ]
token-classification
[ ".DS_Store", ".gitattributes", "config.json", "eval_results.txt", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "test_results.txt", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mustafabaris
7
transformers
mvc/pytorch_model
2021-04-07T12:23:20.000Z
[]
[ ".gitattributes" ]
mvc
0
mymusise/AIXX
2021-05-23T10:29:08.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "zh", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "spiece.model", "tf_model.h5" ]
mymusise
22
transformers
---
language: zh
widget:
- text: "今天是下雨天"
- text: "走向森林"
---

# EasternFantasyNoval

# Overview

- **Language model**: GPT2-Medium
- **Model size**: 1.2GiB
- **Language**: Chinese
mymusise/CPM-GPT2-FP16
2021-05-23T10:32:23.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "zh", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "spiece.model", "tf_model.h5" ]
mymusise
375
transformers
---
language: zh
widget:
- text: "今天是下雨天"
- text: "走向森林"
---

<h1 align="center">
CPM
</h1>

CPM (Chinese Pre-Trained Language Models), which has 2.6B parameters, was made by the research team of the Beijing Zhiyuan Institute of Artificial Intelligence and Tsinghua University @TsinghuaAI.
[repo: CPM-Generate](https://github.com/TsinghuaAI/CPM-Generate)

Note that this model was not uploaded by the official team; the conversion script is [here](https://github.com/mymusise/CPM-TF2Transformer/blob/main/transfor_CMP.ipynb).

# Overview

- **Language model**: CPM
- **Model size**: 2.6B parameters
- **Language**: Chinese

# How to use

How to use this model directly from the 🤗/transformers library:

```python
from transformers import XLNetTokenizer, TFGPT2LMHeadModel
import jieba

# Add special preprocessing: segment with jieba and map spaces/newlines
# to the placeholder tokens the CPM tokenizer expects.
class XLNetTokenizer(XLNetTokenizer):
    translator = str.maketrans(" \n", "\u2582\u2583")

    def _tokenize(self, text, *args, **kwargs):
        text = [x.translate(self.translator) for x in jieba.cut(text, cut_all=False)]
        text = " ".join(text)
        return super()._tokenize(text, *args, **kwargs)

    def _decode(self, *args, **kwargs):
        text = super()._decode(*args, **kwargs)
        text = text.replace(' ', '').replace('\u2582', ' ').replace('\u2583', '\n')
        return text

tokenizer = XLNetTokenizer.from_pretrained('mymusise/CPM-GPT2-FP16')
model = TFGPT2LMHeadModel.from_pretrained("mymusise/CPM-GPT2-FP16")
```

How to generate text:

```python
from transformers import TextGenerationPipeline

text_generater = TextGenerationPipeline(model, tokenizer)

texts = [
    '今天天气不错',
    '天下武功, 唯快不',
    """
我们在火星上发现了大量的神奇物种。有神奇的海星兽,身上是粉色的,有5条腿;有胆小的猫猫兽,橘色,有4条腿;有令人恐惧的蜈蚣兽,全身漆黑,36条腿;有纯洁的天使兽,全身洁白无瑕,有3条腿;有贪吃的汪汪兽,银色的毛发,有5条腿;有蛋蛋兽,紫色,8条腿。
请根据上文,列出一个表格,包含物种名、颜色、腿数量。
|物种名|颜色|腿数量|
|亚古兽|金黄|2|
|海星兽|粉色|5|
|猫猫兽|橘色|4|
|蜈蚣兽|漆黑|36|
"""
]

for text in texts:
    token_len = len(tokenizer._tokenize(text))
    print(text_generater(text, max_length=token_len + 15, top_k=1, use_cache=True, prefix='')[0]['generated_text'])
    print(text_generater(text, max_length=token_len + 15, do_sample=True, top_k=5)[0]['generated_text'])
```

![avatar](https://github.com/mymusise/CPM-TF2Transformer/raw/main/example-cpm.png)

You can try it on [colab](https://colab.research.google.com/github/mymusise/CPM-TF2Transformer/blob/main/demo-fp16.ipynb)

<a href="https://colab.research.google.com/github/mymusise/CPM-TF2Transformer/blob/main/demo-fp16.ipynb">
    <img alt="Build" src="https://colab.research.google.com/assets/colab-badge.svg">
</a>
mymusise/CPM-GPT2
2021-05-23T10:39:52.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "zh", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "spiece.model", "tf_model.h5" ]
mymusise
18
transformers
---
language: zh
widget:
- text: "今天是下雨天"
- text: "走向森林"
---

<h1 align="center">
CPM
</h1>

CPM (Chinese Pre-Trained Language Models), which has 2.6B parameters, was made by the research team of the Beijing Zhiyuan Institute of Artificial Intelligence and Tsinghua University @TsinghuaAI.
[repo: CPM-Generate](https://github.com/TsinghuaAI/CPM-Generate)

Note that this model was not uploaded by the official team; the conversion script is [here](https://github.com/mymusise/CPM-TF2Transformer/blob/main/transfor_CMP.ipynb).

# Overview

- **Language model**: CPM
- **Model size**: 2.6B parameters
- **Language**: Chinese

# How to use

How to use this model directly from the 🤗/transformers library:

```python
from transformers import XLNetTokenizer, TFGPT2LMHeadModel
import jieba

# Add special preprocessing: segment with jieba and map spaces/newlines
# to the placeholder tokens the CPM tokenizer expects.
class XLNetTokenizer(XLNetTokenizer):
    translator = str.maketrans(" \n", "\u2582\u2583")

    def _tokenize(self, text, *args, **kwargs):
        text = [x.translate(self.translator) for x in jieba.cut(text, cut_all=False)]
        text = " ".join(text)
        return super()._tokenize(text, *args, **kwargs)

    def _decode(self, *args, **kwargs):
        text = super()._decode(*args, **kwargs)
        text = text.replace(' ', '').replace('\u2582', ' ').replace('\u2583', '\n')
        return text

tokenizer = XLNetTokenizer.from_pretrained('mymusise/CPM-GPT2')
model = TFGPT2LMHeadModel.from_pretrained("mymusise/CPM-GPT2")
```

How to generate text:

```python
from transformers import TextGenerationPipeline

text_generater = TextGenerationPipeline(model, tokenizer)

texts = [
    '今天天气不错',
    '天下武功, 唯快不',
    """
我们在火星上发现了大量的神奇物种。有神奇的海星兽,身上是粉色的,有5条腿;有胆小的猫猫兽,橘色,有4条腿;有令人恐惧的蜈蚣兽,全身漆黑,36条腿;有纯洁的天使兽,全身洁白无瑕,有3条腿;有贪吃的汪汪兽,银色的毛发,有5条腿;有蛋蛋兽,紫色,8条腿。
请根据上文,列出一个表格,包含物种名、颜色、腿数量。
|物种名|颜色|腿数量|
|亚古兽|金黄|2|
|海星兽|粉色|5|
|猫猫兽|橘色|4|
|蜈蚣兽|漆黑|36|
"""
]

for text in texts:
    token_len = len(tokenizer._tokenize(text))
    print(text_generater(text, max_length=token_len + 15, top_k=1, use_cache=True, prefix='')[0]['generated_text'])
    print(text_generater(text, max_length=token_len + 15, do_sample=True, top_k=5)[0]['generated_text'])
```

![avatar](https://github.com/mymusise/CPM-TF2Transformer/raw/main/example-cpm.png)
mymusise/CPM-Generate-distill
2021-05-23T10:40:31.000Z
[ "pytorch", "tf", "jax", "gpt2", "lm-head", "causal-lm", "zh", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "spiece.model", "tf_model.h5" ]
mymusise
1,805
transformers
---
language: zh
widget:
- text: "天下熙熙,"
- text: "天气不错,"
---

<h1 align="center">
CPM-Generate-distill
</h1>

CPM (Chinese Pre-Trained Language Models), which has 2.6B parameters, was made by the research team of the Beijing Zhiyuan Institute of Artificial Intelligence and Tsinghua University @TsinghuaAI.
[repo: CPM-Generate](https://github.com/TsinghuaAI/CPM-Generate)

Note that this model was not uploaded by the official team; the conversion script is [here](https://github.com/mymusise/CPM-TF2Transformer/blob/main/transfor_CMP.ipynb).

`CPM-Generate-distill` is the distilled version of `CPM`.

# How to use

How to use this model directly from the 🤗/transformers library:

```python
from transformers import XLNetTokenizer, TFGPT2LMHeadModel
from transformers import TextGenerationPipeline
import jieba

# Add special preprocessing: segment with jieba and map spaces/newlines
# to the placeholder tokens the CPM tokenizer expects.
class XLNetTokenizer(XLNetTokenizer):
    translator = str.maketrans(" \n", "\u2582\u2583")

    def _tokenize(self, text, *args, **kwargs):
        text = [x.translate(self.translator) for x in jieba.cut(text, cut_all=False)]
        text = " ".join(text)
        return super()._tokenize(text, *args, **kwargs)

    def _decode(self, *args, **kwargs):
        text = super()._decode(*args, **kwargs)
        text = text.replace(' ', '').replace('\u2582', ' ').replace('\u2583', '\n')
        return text

tokenizer = XLNetTokenizer.from_pretrained('mymusise/CPM-Generate-distill')
model = TFGPT2LMHeadModel.from_pretrained("mymusise/CPM-Generate-distill")

text_generater = TextGenerationPipeline(model, tokenizer)

print(text_generater("天下熙熙,", max_length=15, top_k=1, use_cache=True, prefix=''))
```

![avatar](https://github.com/mymusise/CPM-TF2Transformer/raw/main/example-cpm-distill.jpeg)
mymusise/EasternFantasyNoval-small
2021-05-23T10:41:05.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "zh", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "spiece.model", "tf_model.h5", "vocab.txt" ]
mymusise
152
transformers
---
language: zh
widget:
- text: "今天是下雨天"
- text: "走向森林"
---

# EasternFantasyNoval

# Overview

- **Language model**: GPT2-Medium
- **Model size**: 1.2GiB
- **Language**: Chinese
mymusise/EasternFantasyNoval
2021-05-23T10:42:00.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "zh", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "tf_model.h5", "vocab.txt" ]
mymusise
115
transformers
---
language: zh
widget:
- text: "今天是下雨天"
- text: "走向森林"
---

# EasternFantasyNoval

# Overview

- **Language model**: GPT2-Medium
- **Model size**: 1.2GiB
- **Language**: Chinese
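# Example (sketch)

The card does not include a usage snippet; elsewhere in this collection the model is loaded with `BertTokenizer` and `TFGPT2LMHeadModel`, so the following sketch mirrors that pattern. The prompt and generation settings are illustrative assumptions.

```python
from transformers import BertTokenizer, TFGPT2LMHeadModel, TextGenerationPipeline

tokenizer = BertTokenizer.from_pretrained("mymusise/EasternFantasyNoval")
model = TFGPT2LMHeadModel.from_pretrained("mymusise/EasternFantasyNoval")

text_generator = TextGenerationPipeline(model, tokenizer)
# Generate a short continuation for one of the widget prompts.
print(text_generator("走向森林", max_length=64, do_sample=True, top_k=10))
```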
mymusise/PanGu-Alpha
2021-04-30T08:47:50.000Z
[]
[ ".gitattributes" ]
mymusise
0
mymusise/gpt2-medium-chinese
2021-05-23T10:42:56.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "zh", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "tf_model.h5", "vocab.txt" ]
mymusise
638
transformers
---
language: zh
widget:
- text: "今天是下雨天"
- text: "走向森林"
---

# gpt2-medium-chinese

# Overview

- **Language model**: GPT2-Medium
- **Model size**: 1.2GiB
- **Language**: Chinese
- **Training data**: [wiki2019zh_corpus](https://github.com/brightmart/nlp_chinese_corpus)
- **Source code**: [gpt2-quickly](https://github.com/mymusise/gpt2-quickly)

# Example

```python
from transformers import BertTokenizer, TFGPT2LMHeadModel
from transformers import TextGenerationPipeline

tokenizer = BertTokenizer.from_pretrained("mymusise/gpt2-medium-chinese")
model = TFGPT2LMHeadModel.from_pretrained("mymusise/gpt2-medium-chinese")

text_generator = TextGenerationPipeline(model, tokenizer)
print(text_generator("今日", max_length=64, repetition_penalty=1.3, do_sample=True, top_k=10))
print(text_generator("跨越山丘", max_length=64, repetition_penalty=1.3, do_sample=True, top_k=10))
```

Output:

```text
[{'generated_text': '今日 , 他 的 作 品 也 在 各 种 报 刊 发 表 。 201 1 年 , 他 开 设 了 他 的 网 页 版 《 the dear 》 。 此 外 , 他 还 在 各 种 电 视 节 目 中 出 现 过 。 2017 年 1 月 , 他 被 任'}]
[{'generated_text': '跨越山丘 , 其 中 有 三 分 之 二 的 地 区 被 划 入 山 区 。 最 高 峰 是 位 于 山 脚 上 的 大 岩 ( ) 。 其 中 的 山 脚 下 有 一 处 有 名 为 的 河 谷 , 因 其 高 度 在 其 中 , 而 得 名 。'}]
```

[Try it on colab](https://colab.research.google.com/github/mymusise/gpt2-quickly/blob/main/examples/gpt2_medium_chinese.ipynb)
mymusise/gpt2-small-chinese
2021-05-23T10:43:20.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "tf_model.h5" ]
mymusise
42
transformers
myquyennguyen/ELECTRA_VN
2021-03-15T07:01:50.000Z
[ "tf", "electra", "transformers" ]
[ ".gitattributes", "config.json", "tf_model.h5" ]
myquyennguyen
8
transformers
myquyennguyen/vn-electra
2021-03-15T04:31:47.000Z
[]
[ ".gitattributes", "config.json" ]
myquyennguyen
6
mys/electra-base-turkish-cased-ner
2020-12-11T21:56:51.000Z
[ "pytorch", "tf", "electra", "token-classification", "tr", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
mys
31
transformers
---
language: tr
---

## What is this

A NER model for Turkish with 48 categories, trained on the [Shrinked TWNERTC Turkish NER Data](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar) by Behçet Şentürk, which is itself a filtered and cleaned version of the following automatically labeled dataset:

> Sahin, H. Bahadir; Eren, Mustafa Tolga; Tirkaz, Caglar; Sonmez, Ozan; Yildiz, Eray (2017), “English/Turkish Wikipedia Named-Entity Recognition and Text Categorization Dataset”, Mendeley Data, v1 http://dx.doi.org/10.17632/cdcztymf4k.1

## Backbone model

The backbone model is [electra-base-turkish-cased-discriminator](https://huggingface.co/dbmdz/electra-base-turkish-cased-discriminator), which I finetuned for token classification. I'm still investigating whether accuracy can be improved further on this dataset, but the model is already usable for non-critical applications. You can reach out to me on [Twitter](https://twitter.com/myusufsarigoz) for discussions and issues. I will also release a notebook for finetuning NER models with Shrinked TWNERTC, as well as sample inference code to demonstrate what's possible with this model.
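## Usage (sketch)

Until the sample inference code is released, a standard token-classification pipeline should work for quick tests. This is a sketch rather than the author's example: the sentence is made up and the entity label names come from the checkpoint's own config.

```python
from transformers import pipeline

# NER pipeline over the Turkish ELECTRA checkpoint.
ner = pipeline("ner", model="mys/electra-base-turkish-cased-ner")

for entity in ner("Mustafa Kemal Atatürk 1881 yılında Selanik'te doğdu."):
    # Each item carries the token, its predicted category and a confidence score.
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```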
mys/mt5-small-turkish-question-paraphrasing
2021-05-12T04:43:54.000Z
[ "pytorch", "mt5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "model_args.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
mys
24
transformers
## Overview

This model is a finetuned version of [mt5-small](https://huggingface.co/google/mt5-small) for the question paraphrasing task in Turkish. As a generator model, its capabilities are still being investigated, and there is an ongoing effort to improve it further. You can raise an issue [in this GitHub repo](https://github.com/monatis/tqp) for any comments, suggestions or interesting findings when using this model.

## Usage

You can generate 5 paraphrases for the input question with the simple code below.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "mys/mt5-small-turkish-question-paraphrasing"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

tokens = tokenizer.encode_plus("Yarın toplantı kaçta başlıyor?", return_tensors='pt')
paraphrases = model.generate(tokens['input_ids'], max_length=128, num_return_sequences=5, num_beams=5)
tokenizer.batch_decode(paraphrases, skip_special_tokens=True)
```

And the output will be something like:

```shell
['Yarın toplantı ne zaman başlıyor?', 'Yarın toplantı saat kaçta başlıyor?', 'Yarın toplantı saat kaçta başlar?', 'Yarın toplantı ne zaman başlayacak?', 'Yarın toplantı ne zaman başlar?']
```

## Dataset

I used [TQP dataset V0.1](https://github.com/monatis/tqp), which I published just recently. This model should be taken as a baseline for the TQP dataset. Further cleaning and improvements to the dataset, together with more elaborate hyperparameter tuning, may boost performance.

## Citation

If you find the dataset or model useful for your research, [consider citation](https://zenodo.org/record/4719801#.YIbI45AzZPZ).
myutman/ml4se-models
2020-12-27T10:17:13.000Z
[]
[ ".gitattributes" ]
myutman
0
naiyalee/DialoGPT-small-neku
2021-06-11T08:42:08.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
naiyalee
24
transformers
nandinib1999/quote-generator
2021-05-23T10:44:12.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "en", "dataset:quotes-500K", "transformers", "text generation", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "eval_results_lm.txt", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
nandinib1999
112
transformers
---
language:
- en
thumbnail:
tags:
- text generation
license:
datasets:
- quotes-500K
metrics:
- perplexity
---

# Quotes Generator

## Model description

This is a GPT2 model fine-tuned on the Quotes-500K dataset.

## Intended uses & limitations

Given a user prompt, it can generate motivational quotes starting with that prompt.

#### How to use

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
model = AutoModelWithLMHead.from_pretrained("nandinib1999/quote-generator")
```

## Training data

This is the distribution of the total dataset into training, validation and test sets for the fine-tuning task.

<table style="width:30%">
 <tr>
    <th>train</th>
    <td>349796</td>
 </tr>
 <tr>
    <th>validation</th>
    <td>99942</td>
 </tr>
 <tr>
    <th>test</th>
    <td>49971</td>
 </tr>
</table>

## Training procedure

The model was fine-tuned on a Google Colab GPU for one epoch. The weights of the pre-trained GPT2 model were used as a base.

## Eval results

<table style="width:30%">
 <tr>
    <th>Epoch</th>
    <th>Perplexity</th>
 </tr>
 <tr>
    <td>1</td>
    <td>15.180</td>
 </tr>
</table>
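#### Generating a quote (sketch)

The card only shows how to load the model; a generation call might look like the sketch below. The prompt and decoding settings are illustrative assumptions, not recommended values.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("nandinib1999/quote-generator")
model = AutoModelWithLMHead.from_pretrained("nandinib1999/quote-generator")

# Encode a short prompt and let the fine-tuned GPT2 continue it into a quote.
inputs = tokenizer("Life is", return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_length=40,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```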
napsternxg/scibert_scivocab_cased_SDU21_AI
2021-05-20T01:08:08.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
napsternxg
25
transformers
scibert_scivocab_cased submission for SDU21 Task 1 AI
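The card gives no usage snippet; a minimal loading sketch with plain transformers is shown below. It is not an official example: the tag names (e.g. BIO tags over acronyms and their long forms for SDU21 Task 1, acronym identification) are whatever this checkpoint's config defines, so treat the label interpretation as an assumption.

```python
from transformers import pipeline

# Token-classification pipeline over the SDU21 Task 1 (acronym identification) checkpoint.
tagger = pipeline("token-classification", model="napsternxg/scibert_scivocab_cased_SDU21_AI")

sentence = "Convolutional neural networks (CNNs) are widely used in computer vision."
for token in tagger(sentence):
    print(token["word"], token["entity"])
```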
napsternxg/scibert_scivocab_uncased_SDU21_AI
2021-05-20T01:09:06.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
napsternxg
15
transformers
scibert_scivocab_uncased submission for SDU21 Task 1 AI
napsternxg/scibert_scivocab_uncased_ft_SDU21_AI
2021-05-20T01:09:59.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
napsternxg
9
transformers
scibert_scivocab_uncased_ft MLM pretrained on SDU21 Task 1 + 2
napsternxg/scibert_scivocab_uncased_ft_mlm_SDU21_AI
2021-05-20T01:10:55.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
napsternxg
24
transformers
scibert_scivocab_uncased_ft_mlm MLM pretrained on SDU21 Task 1 + 2
napsternxg/scibert_scivocab_uncased_ft_tv_SDU21_AI
2021-05-20T01:11:49.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
napsternxg
9
transformers
scibert_scivocab_uncased_ft_tv MLM pretrained on SDU21 Task 1 + 2
napsternxg/scibert_scivocab_uncased_tv_SDU21_AI
2021-05-20T01:12:46.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
napsternxg
9
transformers
scibert_scivocab_uncased_tv submission for SDU21 Task 1 AI