modelId: string (length 4-112)
lastModified: string (length 24)
tags: list
pipeline_tag: string (21 classes)
files: list
publishedBy: string (length 2-37)
downloads_last_month: int32 (0-9.44M)
library: string (15 classes)
modelCard: large string (length 0-100k)
mrm8488/bert-tiny-4-finetuned-squadv2
2021-05-20T00:39:36.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "flax_model.msgpack", "nbest_predictions_.json", "null_odds_.json", "optimizer.pt", "predictions_.json", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
12
transformers
mrm8488/bert-tiny-5-finetuned-squadv2
2021-05-20T00:39:55.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "flax_model.msgpack", "nbest_predictions_.json", "null_odds_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
16,676
transformers
mrm8488/bert-tiny-finetuned-sms-spam-detection
2021-05-20T00:40:14.000Z
[ "pytorch", "jax", "bert", "text-classification", "en", "dataset:sms_spam", "transformers", "sms", "spam", "detection" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
133
transformers
--- language: en tags: - sms - spam - detection datasets: - sms_spam widget: - text: "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days." --- # BERT-Tiny fine-tuned on the sms_spam dataset for spam detection Validation accuracy: **0.98**
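The card does not include a usage snippet; below is a minimal sketch (not part of the original card), assuming the standard 🤗 Transformers `text-classification` pipeline works with this checkpoint and that the label names come from its `config.json`:

```python
from transformers import pipeline

# Minimal sketch (assumption: the checkpoint works with the default
# text-classification pipeline; label names depend on the model config).
spam_detector = pipeline(
    "text-classification",
    model="mrm8488/bert-tiny-finetuned-sms-spam-detection",
    tokenizer="mrm8488/bert-tiny-finetuned-sms-spam-detection"
)

print(spam_detector(
    "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline."
))
# Expected output shape: [{'label': ..., 'score': ...}]
```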
mrm8488/bert-tiny-finetuned-squadv2
2021-05-20T00:40:32.000Z
[ "pytorch", "jax", "bert", "question-answering", "en", "arxiv:1908.08962", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
137
transformers
--- language: en thumbnail: --- # BERT-Tiny fine-tuned on SQuAD v2 [BERT-Tiny](https://github.com/google-research/bert/) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task. **Model size** (after training): **16.74 MB** ## Details of BERT-Tiny and its 'family' (from their documentation) Released on March 11th, 2020 This model is one of 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. ## Details of the downstream task (Q&A) - Dataset [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **48.60** | | **F1** | **49.73** | | Model | EM | F1 score | SIZE (MB) | | ----------------------------------------------------------------------------------------- | --------- | --------- | --------- | | [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** | | [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | **57.12** | **60.86** | 24.34 | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/bert-tiny-finetuned-squadv2", tokenizer="mrm8488/bert-tiny-finetuned-squadv2" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) # Output: ``` ```json { "answer": "Manuel Romero", "end": 13, "score": 0.05684709993458714, "start": 0 } ``` ### Yes! That was easy 🎉 Let's try with another example ```python qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "For which company has worked Manuel Romero?" }) # Output: ``` ```json { "answer": "hugginface/transformers", "end": 79, "score": 0.11613431826808274, "start": 56 } ``` ### It works!! 🎉 🎉 🎉 > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/bert-tiny-finetuned-yahoo_answers_topics
2021-05-20T00:40:50.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
198
transformers
mrm8488/bert-tiny-wrslb-finetuned-squadv1
2021-05-20T00:41:08.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "flax_model.msgpack", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
19
transformers
mrm8488/bert-tiny2bert-tiny_shared-finetuned-wikisql
2020-11-12T20:30:55.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
15
transformers
mrm8488/bert-uncased-finetuned-qnli
2021-05-20T00:42:00.000Z
[ "pytorch", "jax", "bert", "text-classification", "en", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "flax_model.msgpack", "pytorch_model.bin", "training_args.bin" ]
mrm8488
20
transformers
--- language: en thumbnail: --- # [BERT](https://huggingface.co/deepset/bert-base-cased-squad2) fine-tuned on [QNLI](https://github.com/rhythmcao/QNLI) + compression ([BERT-of-Theseus](https://github.com/JetRunner/BERT-of-Theseus)) I used a [BERT model fine-tuned on **SQuAD v2**](https://huggingface.co/deepset/bert-base-cased-squad2) and then fine-tuned it on **QNLI** using **compression** (with a constant replacing rate) as proposed in **BERT-of-Theseus**. ## Details of the downstream task (QNLI): ### Getting the dataset ```bash wget https://raw.githubusercontent.com/rhythmcao/QNLI/master/data/QNLI/train.tsv wget https://raw.githubusercontent.com/rhythmcao/QNLI/master/data/QNLI/test.tsv wget https://raw.githubusercontent.com/rhythmcao/QNLI/master/data/QNLI/dev.tsv mkdir QNLI_dataset mv *.tsv QNLI_dataset ``` ### Model training The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash !python /content/BERT-of-Theseus/run_glue.py \ --model_name_or_path deepset/bert-base-cased-squad2 \ --task_name qnli \ --do_train \ --do_eval \ --do_lower_case \ --data_dir /content/QNLI_dataset \ --max_seq_length 128 \ --per_gpu_train_batch_size 32 \ --per_gpu_eval_batch_size 32 \ --learning_rate 2e-5 \ --save_steps 2000 \ --num_train_epochs 50 \ --output_dir /content/ouput_dir \ --evaluate_during_training \ --replacing_rate 0.7 \ --steps_for_replacing 2500 ``` ## Metrics: | Model | Accuracy | |-----------------|------| | BERT-base | 91.2 | | BERT-of-Theseus | 88.8 | | [bert-uncased-finetuned-qnli](https://huggingface.co/mrm8488/bert-uncased-finetuned-qnli) | 87.2 | | DistilBERT | 85.3 | > [See all my models](https://huggingface.co/models?search=mrm8488) > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
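The card stops at the training command, so here is a hedged inference sketch (not from the original card). Note from the file list above that this repository does not ship tokenizer files, so the sketch loads the tokenizer from the base checkpoint `deepset/bert-base-cased-squad2` as an assumption; the label names depend on the fine-tuned `config.json`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: tokenizer taken from the base checkpoint because the
# fine-tuned repo lists no vocab/tokenizer files; labels come from the config.
tokenizer = AutoTokenizer.from_pretrained("deepset/bert-base-cased-squad2")
model = AutoModelForSequenceClassification.from_pretrained("mrm8488/bert-uncased-finetuned-qnli")

# QNLI is a question/sentence pair task, so both texts go to the tokenizer.
question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron lattice tower in Paris."
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. 'entailment' vs. 'not_entailment'; names may vary
```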
mrm8488/bert2bert-medium_shared-question-generation
2020-12-27T20:27:57.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
16
transformers
mrm8488/bert2bert-mini_shared-question-generation
2020-12-26T12:52:16.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
14
transformers
mrm8488/bert2bert-multilingual_shared-question-generation
2020-12-29T19:10:07.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
34
transformers
mrm8488/bert2bert-small_shared-question-generation
2020-12-26T12:28:08.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
19
transformers
mrm8488/bert2bert-spanish-question-generation
2021-04-24T16:18:25.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "es", "transformers", "spanish", "question", "generation", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
67
transformers
--- language: es tags: - spanish - question - generation widget: - text: "Manuel vive en Murcia, España" --- # Spanish Bert2Bert fine-tuned on SQuAD (es) for question generation
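No usage example is given in this card; the sketch below follows the `EncoderDecoderModel` usage pattern shown in the other bert2bert cards in this listing. The exact input format the model expects (for example, whether an answer must be prepended to the context) is not documented, so treat this as an assumption:

```python
import torch
from transformers import BertTokenizerFast, EncoderDecoderModel

# Sketch modeled on the Usage sections of the other bert2bert checkpoints above.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bert2bert-spanish-question-generation'
tokenizer = BertTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)

def generate_question(text):
    inputs = tokenizer([text], padding="max_length", truncation=True,
                       max_length=512, return_tensors="pt")
    output = model.generate(inputs.input_ids.to(device),
                            attention_mask=inputs.attention_mask.to(device))
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate_question("Manuel vive en Murcia, España"))
```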
mrm8488/bert2bert_shared-finetuned-wikisql
2020-11-12T03:28:24.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
15
transformers
mrm8488/bert2bert_shared-german-finetuned-summarization
2021-05-27T12:13:27.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "de", "dataset:mlsum", "transformers", "summarization", "news", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
357
transformers
--- tags: - summarization - news language: de datasets: - mlsum widget: - text: 'Wie geht man nach schrecklichen Ereignissen ambesten auf die Ängste und Sorgen von Kindern ein?Therapeuten haben eine klare Botschaft. Die Weltist voller Gefahren, Verbrechen und Schrecken -Krieg, Terrorismus, Umweltzerstörung und eben auchKindesmissbrauch. Soll man mit Kindern darüberreden, und wie? Die Antwort hängt auch vom Alterdes Kindes ab. Kinder, gerade kleine Kinder,brauchen Sicherheit, man muss sie nicht mitabstrakten Bedrohungen konfrontieren, die sieohnehin noch nicht ganz verstehen können. Ihreeigenen Ängste sollten Eltern lieber bei sichbehalten, raten Psychologen. Etwas anderes ist es,wenn Kinder schreckliche Ereignisse wie denaktuellen Fall in München mitbekommen. Dann sollteman natürlich auf die Ängste und Sorgen der Kindereingehen und mit ihnen sprechen. Man sollte aberklarmachen: Ja, es gibt kranke Menschen, die Bösestun, aber das ist die Ausnahme. Der Verbrecher istgefasst, er läuft nicht mehr frei herum,Polizisten passen auf. Die Botschaft sollte sein:Das ist nicht nah an dir dran, das bedroht dichnicht, empfehlen Familientherapeuten zum Umgangmit Ängsten von Kindern. Natürlich können auchVerhaltensregeln nicht schaden: Nein sagen, lautwerden und nicht mit Fremden mitgehen. AuchBilderbücher können helfen, solches Verhalten frühzu vermitteln, etwa "Das große und das kleineNein!" von Gisela Braun und Dorothee Wolters oder"Ich geh doch nicht mit Jedem mit!" von DagmarGeisler. Aber auch wenn jeder Vater, jede Mutterbeim Gedanken an derartige Verbrechen insSchlottern kommt: Die Statistik zeigt eindeutig,dass solche Fälle sehr selten sind.Kindesmissbrauch findet vor allem im nahensozialen Umfeld statt, in der Familie, in Vereinenoder bei älteren vermeintlichen "Freunden". Werseine Kinder davor beschützen will, muss ihnenzuhören, sie ernst nehmen, Fragen stellen, genauhinschauen.' --- # German BERT2BERT fine-tuned on MLSUM DE for summarization ## Model [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) (BERT Checkpoint) ## Dataset **MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, **German**, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. 
[MLSUM de](https://huggingface.co/datasets/viewer/?dataset=mlsum) ## Results |Set|Metric| # Score| |----|------|------| | Test |Rouge2 - mid -precision | **33.04**| | Test | Rouge2 - mid - recall | **33.83**| | Test | Rouge2 - mid - fmeasure | **33.15**| ## Usage ```python import torch from transformers import BertTokenizerFast, EncoderDecoderModel device = 'cuda' if torch.cuda.is_available() else 'cpu' ckpt = 'mrm8488/bert2bert_shared-german-finetuned-summarization' tokenizer = BertTokenizerFast.from_pretrained(ckpt) model = EncoderDecoderModel.from_pretrained(ckpt).to(device) def generate_summary(text): inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) text = "Your text here..." generate_summary(text) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/bert2bert_shared-italian-question-generation
2020-12-11T14:33:07.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
12
transformers
mrm8488/bert2bert_shared-portuguese-question-generation
2020-12-12T18:30:18.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
42
transformers
mrm8488/bert2bert_shared-spanish-finetuned-muchocine-review-summarization
2021-05-07T09:26:36.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "es", "transformers", "summarization", "films", "cinema", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
76
transformers
--- tags: - summarization - films - cinema language: es widget: - text: "Es la película que con más ansia he esperado, dado el precedente de las dos anteriores entregas, esta debía ser la joya de la corona, el mejor film jamás realizado… Pero cuando salí del cine estaba decepcionado, me leí el libro antes de ver la película (cosa que no hice con las otras dos) y sentí que Peter me falló. Le faltaba algo, habían obviado demasiadas cosas y no salía Saruman, algo incomprensible dada la importancia que se le dio en las anteriores películas. La película parecía incompleta y realmente lo estaba. Me pareció la peor de la trilogía. Volví a ver el film y esta vez en su versión extendida y mentalizado ya de que no podía ser igual que el libro y mi opinión cambio. Es la mejor de la trilogía en todos los aspectos y para mi gusto el mejor film que jamás se ha hecho. A pesar de sus casi 240 minutos el ritmo no decae sino que aumenta en algo que solo lo he visto hacer a Peter Jackson, las palabras de Tolkien cobra vida y con gran lirismo el film avanza hacía su clímax final. La impecable banda sonora te transporta los sentimientos que el maestro Jackson te quiere transmitir. Aquella noche de 2003 en el teatro Kodak de L.A los oscars dieron justicia a una trilogía que injustamente fue tratada hasta esa noche. En conjunto gano 17 oscars, pero en mi opinión se quedaron bastante cortos. El tiempo pondrá a esta trilogía como clásico imperecedero, una lección de cómo realizar una superproducción, los mensajes que transmiten, los bellos escenarios que presentan, un cuento al fin y al cabo pero convertido en obra de arte. Genialidad en todos los sentidos, no os dejéis engañar por los que duramente critican a esta trilogía y sino mirar lo que ellos llaman buen cine…" ---
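This card contains only the front matter above; a hedged usage sketch, following the same summarization pattern as the other `bert2bert_shared` cards in this listing, could look like the following (the generation arguments are illustrative assumptions):

```python
import torch
from transformers import BertTokenizerFast, EncoderDecoderModel

# Sketch: same EncoderDecoderModel pattern as the MLSUM summarization cards above.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bert2bert_shared-spanish-finetuned-muchocine-review-summarization'
tokenizer = BertTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)

review = "Es la película que con más ansia he esperado..."  # full review from the widget above
inputs = tokenizer([review], padding="max_length", truncation=True,
                   max_length=512, return_tensors="pt").to(device)
summary_ids = model.generate(inputs.input_ids,
                             attention_mask=inputs.attention_mask,
                             num_beams=4, max_length=64)  # illustrative settings
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```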
mrm8488/bert2bert_shared-spanish-finetuned-paws-x-paraphrasing
2021-04-23T13:58:39.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "es", "transformers", "spanish", "paraphrasing", "paraphrase", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
mrm8488
198
transformers
--- language: es tags: - spanish - paraphrasing - paraphrase widget: - text: "El pionero suizo John Sutter (1803-1880) llegó a Alta California con otros colonos euroamericanos en agosto de 1839." --- # Spanish Bert2Bert (shared) fine-tuned on PAWS-X es for paraphrasing
mrm8488/bert2bert_shared-spanish-finetuned-summarization
2021-06-15T08:37:40.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "es", "dataset:mlsum", "transformers", "summarization", "news", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
679
transformers
--- tags: - summarization - news language: es datasets: - mlsum widget: - text: 'Al filo de las 22.00 horas del jueves, la Asamblea de Madrid vive un momento sorprendente: Vox decide no apoyar una propuesta del PP en favor del blindaje fiscal de la Comunidad. Se ha roto la unidad de los tres partidos de derechas. Es un hecho excepcional. Desde que arrancó la legislatura, PP, Cs y Vox han votado en bloque casi el 75% de las veces en el pleno de la Cámara. Juntos decidieron la composición de la Mesa de la Asamblea. Juntos invistieron presidenta a Isabel Díaz Ayuso. Y juntos han votado la mayoría de proposiciones no de ley, incluida la que ha marcado el esprint final de la campaña para las elecciones generales: acaban de instar al Gobierno de España a "la ilegalización inmediata" de los partidos separatistas "que atenten contra la unidad de la Nación". Los críticos de Cs no comparten el apoyo al texto de Vox contra el secesionisimo Ese balance retrata una necesidad antes que una complicidad, según fuentes del PP con predicamento en la dirección regional y nacional. Tras casi 15 años gobernando con mayoría absoluta, la formación conservadora vivió como una tortura la pasada legislatura, en la que dependió de Cs para sacar adelante sus iniciativas. El problema se agudizó tras las elecciones autonómicas de mayo. El PP ha tenido que formar con Cs el primer gobierno de coalición de la historia de la región, y ni siquiera con eso le basta para ganar las votaciones de la Cámara. Los dos socios gubernamentales necesitan a Vox, la menos predecible de las tres formaciones. "Tenemos que trabajar juntos defendiendo la unidad del país, por eso no quisimos dejar a Vox solo", dijo ayer Díaz Ayuso para justificar el apoyo de PP y Cs a la proposición de la extrema derecha sobre Cataluña. "Después nosotros llevábamos otra proposición para defender el blindaje fiscal de Madrid, y ahí Vox nos dejó atrás. No permitió que esto saliera. Es un grave error por su parte", prosiguió, recalcando el enfado del PP. "Demuestra que está más en cuestiones electoralistas", subrayó. "Los que pensamos, con nuestras inmensas diferencias, que tenemos cosas en común que nos unen como partidos que queremos Comunidades libres, con bajos impuestos, en las que se viva con seguridad y en paz, tenemos que estar unidos", argumentó. "Y por lo menos nosotros de nuestra línea no nos separamos". Al contrario de lo que está ocurriendo el Ayuntamiento de Madrid, donde el PP y Cs ya han defendido posiciones de voto distintas, pese a compartir el Gobierno, en la Asamblea los partidos de Díaz Ayuso e Ignacio Aguado están actuando con la máxima lealtad en las votaciones del pleno. Otra cosa son las comisiones. Y el caso Avalmadrid. Es en ese terreno donde Cs y Vox están buscando el margen de maniobra necesario para separarse del PP en plena campaña electoral, abandonando a su suerte a su socio para distinguirse ante los electores. —"Usted me ha dejado tirada", le espetó la presidenta de la Comunidad de Madrid a Rocío Monasterio tras saber que Vox permitiría que la izquierda tuviera mayoría en la comisión parlamentaria que investigará los avales concedidos por la empresa semipública entre 2007 y 2018, lo que podría incluir el de 400.000 euros aprobado en 2011, y nunca devuelto al completo, para una empresa participada por el padre de Isabel Díaz Ayuso. "Monasterio no es de fiar. 
Dice una cosa y hace la contraria", dice una fuente popular sobre las negociaciones mantenidas para repartirse los puestos de las diferentes comisiones, que Vox no cumplió tras buscar un segundo pacto con otras formaciones (que no llegó a buen puerto). Ilegalización de Vox Los tres partidos de derechas también se han enfrentado por la ubicación de Vox en el pleno. Las largas negociaciones para la investidura de Díaz Ayuso dejaron heridas abiertas. Y los diputados de Cs no desaprovechan la oportunidad de lanzar dardos contra los de Vox, pero luego coinciden con ellos en la mayoría de votaciones. Ocurrió, por ejemplo, el jueves, cuando se debatía la polémica proposición para instar al Gobierno nacional a ilegalizar a los partidos separatistas que atenten contra la unidad de España. —"Mostrar nuestra sorpresa ante la presentación por parte de Vox de esta propuesta", lanzó Araceli Gómez, diputada de la formación de Aguado. "Sorprende que planteen ustedes este asunto cuando está también sobre la mesa el debate de su propia ilegalización por atentar contra el ordenamiento jurídico o contra valores constitucionales como la igualdad o la no discriminación". Luego de esa descalificación, y ante la incredulidad de los diputados de los partidos de izquierdas, Cs unió sus votos a los de Vox y a los del PP. La decisión ha provocado polémica interna, como demuestra que Albert Rivera no la apoyara ayer explícitamente. Tampoco ha sido bien acogida por el sector crítico de la formación. Pero ha demostrado una cosa: en Madrid hay tres partidos que casi siempre votan como uno.' --- # Spanish BERT2BERT (BETO) fine-tuned on MLSUM ES for summarization ## Model [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (BERT Checkpoint) ## Dataset **MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, **Spanish**, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. [MLSUM es](https://huggingface.co/datasets/viewer/?dataset=mlsum) ## Results |Set|Metric| Value| |----|------|------| | Test |Rouge2 - mid -precision | **9.6**| | Test | Rouge2 - mid - recall | **8.4**| | Test | Rouge2 - mid - fmeasure | **8.7**| | Test | Rouge1 | 26.24 | | Test | Rouge2 | 8.9 | | Test | RougeL | 21.01| | Test | RougeLsum | 21.02 | ## Usage ```python import torch from transformers import BertTokenizerFast, EncoderDecoderModel device = 'cuda' if torch.cuda.is_available() else 'cpu' ckpt = 'mrm8488/bert2bert_shared-spanish-finetuned-summarization' tokenizer = BertTokenizerFast.from_pretrained(ckpt) model = EncoderDecoderModel.from_pretrained(ckpt).to(device) def generate_summary(text): inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) text = "Your text here..." 
generate_summary(text) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/bert2bert_shared-turkish-summarization
2021-05-22T11:11:45.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "tr", "dataset:mlsum", "transformers", "summarization", "news", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
118
transformers
--- tags: - summarization - news language: tr datasets: - mlsum widget: - text: "Ankara'da oto hırsızlık çetesine yönelikdüzenlenen ‘Balta’ operasyonunda, çete lideri‘balta’ lakaplı şahıs ile 7 kişi gözaltına alındı.Diğer bir operasyonda ise 3 şüpheli çaldıklarıaraçları parçalarken yapılan baskında suçüstüyakalandı. Ankara Emniyet Müdürlüğü’ne bağlıAsayiş Şube Müdürlüğü Oto Hırsızlık Büro Amirliğiekipleri, Ankara ilinde meydana gelen, otohırsızlık olaylarına karşı Ankara CumhuriyetBaşsavcılığı’nın izniyle yürüttükleri 3 aylıkçalışma sonucunda operasyon düğmesine bastı.Yapılan teknik ve fiziki takip sonucunda, ‘Balta’çetesine ulaşıldı. Çeteyi izleyen ekipler, Ankara,Konya ve Antalya’da eş zamanlı operasyondüzenleyerek çete lideri ‘Balta’ lakaplı Necati D.ve çete üyesi 7 kişiyi yakaladı. Takip edildiğinianlayınca ortadan kayboldu Çete lideri ‘Balta’nın,polis ekipleri tarafından izlendiğini anladığı veaylarca ortada görünmediğini tespit eden HırsızlıkBüro ekipleri, ‘Balta’nın kendi suç ortaklarını dadolandırmaya çalıştığını saptadı. Adliyeye sevkedilen şüphelilerden haklarında çok sayıda otohırsızlık kaydı bulunan çete lideri Necati D.,Ferhat K., Atakan A. ve Tayfun G., çıkarıldıklarınöbetçi sulh hakimliğince tutuklanarak cezaevinegönderildi. Diğer 3 şüpheli ise adli kontrolşartıyla serbest bırakıldı. Çaldıkları araçlarıparçalarken polis bastı Diğer bir olay iseAltındağ ilçesinde meydana geldi. Hırsızlık Büroekipleri inceledikleri 2 oto hırsızlık olayınınsonucunda 3 şüpheliyi takibe aldı. Şüphelilerinçaldıkları 2 aracı İvedik Hurdacılar Sitesi’ndekidepolarında parçalayacaklarını belirleyen ekiplerharekete geçti. Depoya baskın yapan polisekipleri, 3 şüpheliyi suçüstü yakaladı.Emniyetteki işlemlerinin ardından adliyeye sevkedilen hırsızlık zanlıları, çıkarıldıkları nöbetçimahkeme tarafından adli kontrol şartıyla serbestbırakıldı." --- # Turkish BERT2BERT (shared) fine-tuned on MLSUM TR for summarization ## Model [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) (BERT Checkpoint) ## Dataset **MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, **Turkish**. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. 
[MLSUM tu/tr](https://huggingface.co/datasets/viewer/?dataset=mlsum) ## Results |Set|Metric| Value| |----|------|------| | Test |Rouge2 - mid -precision | **32.41**| | Test | Rouge2 - mid - recall | **28.65**| | Test | Rouge2 - mid - fmeasure | **29.48**| ## Usage ```python import torch from transformers import BertTokenizerFast, EncoderDecoderModel device = 'cuda' if torch.cuda.is_available() else 'cpu' ckpt = 'mrm8488/bert2bert_shared-turkish-summarization' tokenizer = BertTokenizerFast.from_pretrained(ckpt) model = EncoderDecoderModel.from_pretrained(ckpt).to(device) def generate_summary(text): inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) text = "Your text here..." generate_summary(text) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/bioclinicalBERT-finetuned-covid-papers
2021-05-20T00:44:42.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
47
transformers
mrm8488/byt5-small-finetuned-tweet-qa
2021-06-04T09:34:33.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json" ]
mrm8488
18
transformers
mrm8488/byt5-small-tweet-hate-detection
2021-06-02T18:43:48.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json" ]
mrm8488
275
transformers
mrm8488/camembert-base-finetuned-pawsx-fr
2021-04-28T15:51:53.000Z
[ "pytorch", "camembert", "text-classification", "fr", "dataset:xtreme", "transformers", "nli" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
mrm8488
50
transformers
--- language: fr datasets: - xtreme tags: - nli widget: - text: "La première série a été mieux reçue par la critique que la seconde. La seconde série a été bien accueillie par la critique, mieux que la première." --- # Camembert-base fine-tuned on PAWS-X-fr for Paraphrase Identification (NLI)
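A hedged inference sketch (not part of the original card): paraphrase identification is a sentence-pair task, and recent versions of the `text-classification` pipeline accept a `{"text": ..., "text_pair": ...}` input; the label names depend on the checkpoint's config:

```python
from transformers import pipeline

# Sketch: sentence-pair classification via the text-classification pipeline.
# Label names (paraphrase / not paraphrase) come from the model config.
nli = pipeline(
    "text-classification",
    model="mrm8488/camembert-base-finetuned-pawsx-fr",
    tokenizer="mrm8488/camembert-base-finetuned-pawsx-fr",
)

print(nli({
    "text": "La première série a été mieux reçue par la critique que la seconde.",
    "text_pair": "La seconde série a été bien accueillie par la critique, mieux que la première.",
}))
```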
mrm8488/camembert2camembert_shared-finetuned-french-summarization
2021-05-26T07:42:02.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "fr", "dataset:mlsum", "transformers", "summarization", "news", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "training_args.bin", "unigram.json" ]
mrm8488
219
transformers
--- tags: - summarization - news language: fr datasets: - mlsum widget: - text: "Un nuage de fumée juste après l’explosion, le 1er juin 2019. Une déflagration dans une importante usine d’explosifs du centre de la Russie a fait au moins 79 blessés samedi 1er juin. L’explosion a eu lieu dans l’usine Kristall à Dzerzhinsk, une ville située à environ 400 kilomètres à l’est de Moscou, dans la région de Nijni-Novgorod. « Il y a eu une explosion technique dans l’un des ateliers, suivie d’un incendie qui s’est propagé sur une centaine de mètres carrés », a expliqué un porte-parole des services d’urgence. Des images circulant sur les réseaux sociaux montraient un énorme nuage de fumée après l’explosion. Cinq bâtiments de l’usine et près de 180 bâtiments résidentiels ont été endommagés par l’explosion, selon les autorités municipales. Une enquête pour de potentielles violations des normes de sécurité a été ouverte. Fragments de shrapnel Les blessés ont été soignés après avoir été atteints par des fragments issus de l’explosion, a précisé une porte-parole des autorités sanitaires citée par Interfax. « Nous parlons de blessures par shrapnel d’une gravité moyenne et modérée », a-t-elle précisé. Selon des représentants de Kristall, cinq personnes travaillaient dans la zone où s’est produite l’explosion. Elles ont pu être évacuées en sécurité. Les pompiers locaux ont rapporté n’avoir aucune information sur des personnes qui se trouveraient encore dans l’usine." --- # French RoBERTa2RoBERTa (shared) fine-tuned on MLSUM FR for summarization ## Model [camembert-base](https://huggingface.co/camembert-base) (RoBERTa Checkpoint) ## Dataset **MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, **French**, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. [MLSUM fr](https://huggingface.co/datasets/viewer/?dataset=mlsum) ## Results |Set|Metric| # Score| |----|------|------| | Test |Rouge2 - mid -precision | **14.47**| | Test | Rouge2 - mid - recall | **12.90**| | Test | Rouge2 - mid - fmeasure | **13.30**| ## Usage ```python import torch from transformers import RobertaTokenizerFast, EncoderDecoderModel device = 'cuda' if torch.cuda.is_available() else 'cpu' ckpt = 'mrm8488/camembert2camembert_shared-finetuned-french-summarization' tokenizer = RobertaTokenizerFast.from_pretrained(ckpt) model = EncoderDecoderModel.from_pretrained(ckpt).to(device) def generate_summary(text): inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) text = "Un nuage de fumée juste après l’explosion, le 1er juin 2019. Une déflagration dans une importante usine d’explosifs du centre de la Russie a fait au moins 79 blessés samedi 1er juin. L’explosion a eu lieu dans l’usine Kristall à Dzerzhinsk, une ville située à environ 400 kilomètres à l’est de Moscou, dans la région de Nijni-Novgorod. 
« Il y a eu une explosion technique dans l’un des ateliers, suivie d’un incendie qui s’est propagé sur une centaine de mètres carrés », a expliqué un porte-parole des services d’urgence. Des images circulant sur les réseaux sociaux montraient un énorme nuage de fumée après l’explosion. Cinq bâtiments de l’usine et près de 180 bâtiments résidentiels ont été endommagés par l’explosion, selon les autorités municipales. Une enquête pour de potentielles violations des normes de sécurité a été ouverte. Fragments de shrapnel Les blessés ont été soignés après avoir été atteints par des fragments issus de l’explosion, a précisé une porte-parole des autorités sanitaires citée par Interfax. « Nous parlons de blessures par shrapnel d’une gravité moyenne et modérée », a-t-elle précisé. Selon des représentants de Kristall, cinq personnes travaillaient dans la zone où s’est produite l’explosion. Elles ont pu être évacuées en sécurité. Les pompiers locaux ont rapporté n’avoir aucune information sur des personnes qui se trouveraient encore dans l’usine." generate_summary(text) # Output: L’explosion a eu lieu dans l’usine Kristall à Dzerzhinsk, une ville située à environ 400 kilomètres à l’est de Moscou. ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/chEMBL26_smiles_v2
2021-05-20T18:16:29.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "en", "transformers", "drugs", "chemist", "drug design", "smile", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
814
transformers
--- language: en tags: - drugs - chemist - drug design - smile widget: - text: "CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)<mask>" ---
mrm8488/chEMBL_smiles_v1
2021-05-20T18:16:53.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "en", "transformers", "drugs", "chemist", "drug design", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
36
transformers
--- language: en tags: - drugs - chemist - drug design widget: - text: "CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)<mask>" --- # *De Novo* Drug Design with MLM ## What is it? An approximation to [Generative Recurrent Networks for De Novo Drug Design](https://onlinelibrary.wiley.com/doi/full/10.1002/minf.201700111) but training an MLM (RoBERTa-like) from scratch. ## Why? As mentioned in the paper: Generative artificial intelligence models present a fresh approach to chemogenomics and de novo drug design, as they provide researchers with the ability to narrow down their search of the chemical space and focus on regions of interest. They used a generative *recurrent neural network (RNN)* containing long short‐term memory (LSTM) cells to capture the syntax of molecular representations in terms of SMILES strings. The learned pattern probabilities can be used for de novo SMILES generation. This molecular design concept **eliminates the need for virtual compound library enumeration** and **enables virtual compound design without requiring secondary or external activity prediction**. ## My Goal 🎯 By training an MLM from scratch on 438,552 (cleaned*) SMILES, I wanted to build a model that learns these kinds of molecular combinations so that, given a partial SMILES, it can generate plausible completions that could be proposed as new drugs. By cleaned SMILES I mean that I used their [SMILES cleaning script](https://github.com/topazape/LSTM_Chem/blob/master/cleanup_smiles.py) to remove duplicates, salts, and stereochemical information. You can see the detailed process of gathering the data, preprocessing it and training the LSTM in their [repo](https://github.com/topazape/LSTM_Chem). ## Fast usage with ```pipelines``` 🧪 ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model='mrm8488/chEMBL_smiles_v1', tokenizer='mrm8488/chEMBL_smiles_v1' ) # CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)cc1 Atazanavir smile1 = "CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)<mask>" fill_mask(smile1) # Output: ''' [{'score': 0.6040295958518982, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)nc</s>', 'token': 265}, {'score': 0.2185731679201126, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)N</s>', 'token': 50}, {'score': 0.0642734169960022, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)cc</s>', 'token': 261}, {'score': 0.01932266168296337, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)CCCl</s>', 'token': 452}, {'score': 0.005068355705589056, 'sequence': '<s> CC(C)CN(CC(OP(=O)(O)O)C(Cc1ccccc1)NC(=O)OC1CCOC1)S(=O)(=O)c1ccc(N)C</s>', 'token': 39}] ''' ``` ## More I also created a [second version](https://huggingface.co/mrm8488/chEMBL26_smiles_v2) without applying the SMILES cleaning script mentioned above. You can use it in the same way as this one. ```python fill_mask = pipeline( "fill-mask", model='mrm8488/chEMBL26_smiles_v2', tokenizer='mrm8488/chEMBL26_smiles_v2' ) ``` [Original paper](https://www.ncbi.nlm.nih.gov/pubmed/29095571) Authors: <details> Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir–Prelog–Weg 4, 8093, Zurich, Switzerland, Stanford University, Department of Computer Science, 450 Sierra Mall, Stanford, CA, 94305, USA, inSili.com GmbH, 8049, Zurich, Switzerland, Gisbert Schneider, Email: hc.zhte@trebsig. 
</details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/codeBERTaJS
2021-05-20T18:17:36.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "code", "arxiv:1909.09436", "transformers", "javascript", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
2,582
transformers
--- language: code thumbnail: tags: - javascript - code widget: - text: "async function createUser(req, <mask>) { if (!validUser(req.body.user)) { return res.status(400); } user = userService.createUser(req.body.user); return res.json(user); }" --- # CodeBERTaJS CodeBERTaJS is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub for `JavaScript` by [Manuel Romero](https://twitter.com/mrm8488). The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`. Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% and 50% shorter, compared to the same corpus tokenized by gpt2/roberta). The (small) **model** is a 6-layer, 84M-parameter, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full `javascript` corpus (120M after preprocessing) for 2 epochs. ## Quick start: masked language modeling prediction ```python JS_CODE = """ async function createUser(req, <mask>) { if (!validUser(req.body.user)) { \t return res.status(400); } user = userService.createUser(req.body.user); return res.json(user); } """.lstrip() ``` ### Does the model know how to complete simple JS/Express-like code? ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/codeBERTaJS", tokenizer="mrm8488/codeBERTaJS" ) fill_mask(JS_CODE) ## Top 5 predictions: # 'res' # prob 0.069489665329 'next' 'req' 'user' ',req' ``` ### Yes! That was easy 🎉 Let's try with another example ```python JS_CODE_ = """ function getKeys(obj) { keys = []; for (var [key, value] of Object.entries(obj)) { keys.push(<mask>); } return keys } """.lstrip() ``` Results: ```python 'obj', 'key', ' value', 'keys', 'i' ``` > Not so bad! The right token was predicted as the second option! 🎉 ## This work is heavily inspired by [codeBERTa](https://github.com/huggingface/transformers/blob/master/model_cards/huggingface/CodeBERTa-small-v1/README.md) from the Hugging Face team <br> ## CodeSearchNet citation <details> ```bibtex @article{husain_codesearchnet_2019, \ttitle = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}}, \tshorttitle = {{CodeSearchNet} {Challenge}}, \turl = {http://arxiv.org/abs/1909.09436}, \turldate = {2020-03-12}, \tjournal = {arXiv:1909.09436 [cs, stat]}, \tauthor = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, \tmonth = sep, \tyear = {2019}, \tnote = {arXiv: 1909.09436}, } ``` </details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/codebert-base-finetuned-detect-insecure-code
2021-05-20T18:19:02.000Z
[ "pytorch", "jax", "roberta", "text-classification", "en", "dataset:codexglue", "arxiv:2002.08155", "arxiv:1907.11692", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
101
transformers
--- language: en datasets: - codexglue --- # CodeBERT fine-tuned for Insecure Code Detection 💾⛔ [codebert-base](https://huggingface.co/microsoft/codebert-base) fine-tuned on the [CodeXGLUE -- Defect Detection](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) dataset for the **Insecure Code Detection** downstream task. ## Details of [CodeBERT](https://arxiv.org/abs/2002.08155) We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing. ## Details of the downstream task (code classification) - Dataset 📚 Given a source code, the task is to identify whether it is insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code. The [dataset](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) used comes from the paper [*Devign*: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks](http://papers.nips.cc/paper/9209-devign-effective-vulnerability-identification-by-learning-comprehensive-program-semantics-via-graph-neural-networks.pdf). All projects are combined and split 80%/10%/10% for training/dev/test. 
Data statistics of the dataset are shown in the below table: | | #Examples | | ----- | :-------: | | Train | 21,854 | | Dev | 2,732 | | Test | 2,732 | ## Test set metrics 🧾 | Methods | ACC | | -------- | :-------: | | BiLSTM | 59.37 | | TextCNN | 60.69 | | [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 61.05 | | [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 62.08 | | [Ours](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) | **65.30** | ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np tokenizer = AutoTokenizer.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code') model = AutoModelForSequenceClassification.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code') inputs = tokenizer("your code here", return_tensors="pt", truncation=True, padding='max_length') labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model(**inputs, labels=labels) loss = outputs.loss logits = outputs.logits print(np.argmax(logits.detach().numpy())) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/codebert-base-finetuned-stackoverflow-ner
2021-05-20T18:21:42.000Z
[ "pytorch", "jax", "roberta", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "eval_results.txt", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "test_predictions.txt", "test_results.txt", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
264
transformers
mrm8488/codebert-finetuned-clone-detection
2021-05-20T18:22:42.000Z
[ "pytorch", "jax", "roberta", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
55
transformers
mrm8488/codebert2codebert-finetuned-code-defect-detection
2021-06-14T17:17:29.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
0
transformers
mrm8488/codebert2codebert-finetuned-code-refinement-small
2021-06-11T14:36:00.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
2
transformers
mrm8488/codebert2codebert-finetuned-code-refinement
2021-06-11T10:30:26.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
5
transformers
mrm8488/convbert-small-spanish
2021-03-12T19:03:24.000Z
[ "pytorch", "convbert", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
8
transformers
mrm8488/dilstilgpt2-finetuned-amazon-food-reviews
2021-05-23T10:18:52.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
103
transformers
mrm8488/diltilgpt2-finetuned-bookcopus-10
2021-05-23T10:19:39.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
15
transformers
mrm8488/distilbert-base-multi-cased-finetuned-typo-detection
2020-12-11T21:53:44.000Z
[ "pytorch", "distilbert", "token-classification", "multilingual", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
99
transformers
--- language: multilingual thumbnail: --- # DISTILBERT 🌎 + Typo Detection ✍❌✍✔ [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) fine-tuned on [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) for **typo detection** (using *NER* style) ## Details of the downstream task (Typo detection as NER) - Dataset: [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) 📚 for 15 languages - [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py) 🏋️‍♂️ ## Metrics on test set 📋 | Metric | # score | | :-------: | :-------: | | F1 | **93.51** | | Precision | **96.08** | | Recall | **91.06** | ## Model in action 🔨 Fast usage with **pipelines** 🧪 ```python from transformers import pipeline typo_checker = pipeline( "ner", model="mrm8488/distilbert-base-multi-cased-finetuned-typo-detection", tokenizer="mrm8488/distilbert-base-multi-cased-finetuned-typo-detection" ) result = typo_checker("Adddd validation midelware") result[1:-1] # Output: [{'entity': 'ok', 'score': 0.7128152847290039, 'word': 'add'}, {'entity': 'typo', 'score': 0.5388424396514893, 'word': '##dd'}, {'entity': 'ok', 'score': 0.94792640209198, 'word': 'validation'}, {'entity': 'typo', 'score': 0.5839331746101379, 'word': 'mid'}, {'entity': 'ok', 'score': 0.5195121765136719, 'word': '##el'}, {'entity': 'ok', 'score': 0.7222476601600647, 'word': '##ware'}] ``` It works🎉! We typed wrong ```Add and middleware``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/distilbert-base-uncased-newspop-student
2021-04-27T18:21:40.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
7
transformers
mrm8488/distilbert-finetuned-sarcasm-classification
2020-09-13T10:41:07.000Z
[ "tf", "distilbert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
mrm8488
39
transformers
mrm8488/distilbert-multi-finedtuned-squad-pt
2020-05-23T07:23:36.000Z
[ "pytorch", "distilbert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
11
transformers
mrm8488/distilbert-multi-finetuned-for-xqua-on-tydiqa
2020-12-11T21:53:48.000Z
[ "pytorch", "distilbert", "question-answering", "multilingual", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
68
transformers
--- language: multilingual thumbnail: --- # DistilBERT multilingual fine-tuned on TydiQA (GoldP task) dataset for multilingual Q&A 😛🌍❓ ## Details of the language model [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) ## Details of the Tydi QA dataset TyDi QA contains 200k human-annotated question-answer pairs in 11 Typologically Diverse languages, written without seeing the answer and without the use of translation, and is designed for the **training and evaluation** of automatic question answering systems. This repository provides evaluation code and a baseline system for the dataset. https://ai.google.com/research/tydiqa ## Details of the downstream task (Gold Passage or GoldP aka the secondary task) Given a passage that is guaranteed to contain the answer, predict the single contiguous span of characters that answers the question. the gold passage task differs from the [primary task](https://github.com/google-research-datasets/tydiqa/blob/master/README.md#the-tasks) in several ways: * only the gold answer passage is provided rather than the entire Wikipedia article; * unanswerable questions have been discarded, similar to MLQA and XQuAD; * we evaluate with the SQuAD 1.1 metrics like XQuAD; and * Thai and Japanese are removed since the lack of whitespace breaks some tools. ## Model training 💪🏋️‍ The model was fine-tuned on a Tesla P100 GPU and 25GB of RAM. The script is the following: ```python python transformers/examples/question-answering/run_squad.py \ --model_type distilbert \ --model_name_or_path distilbert-base-multilingual-cased \ --do_train \ --do_eval \ --train_file /path/to/dataset/train.json \ --predict_file /path/to/dataset/dev.json \ --per_gpu_train_batch_size 24 \ --per_gpu_eval_batch_size 24 \ --learning_rate 3e-5 \ --num_train_epochs 5 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/model_output \ --overwrite_output_dir \ --save_steps 1000 \ --threads 400 ``` ## Global Results (dev set) 📝 | Metric | # Value | | --------- | ----------- | | **EM** | **63.85** | | **F1** | **75.70** | ## Specific Results (per language) 🌍📝 | Language | # Samples | # EM | # F1 | | --------- | ----------- |--------| ------ | | Arabic | 1314 | 66.66 | 80.02 | | Bengali | 180 | 53.09 | 63.50 | | English | 654 | 62.42 | 73.12 | | Finnish | 1031 | 64.57 | 75.15 | | Indonesian| 773 | 67.89 | 79.70 | | Korean | 414 | 51.29 | 61.73 | | Russian | 1079 | 55.42 | 70.08 | | Swahili | 596 | 74.51 | 81.15 | | Telegu | 874 | 66.21 | 79.85 | ## Similar models You can also try [bert-multi-cased-finedtuned-xquad-tydiqa-goldp](https://huggingface.co/mrm8488/bert-multi-cased-finedtuned-xquad-tydiqa-goldp) that achieves **F1 = 82.16** and **EM = 71.06** (And of course better marks per language). > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
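The card documents training but not inference; a hedged sketch using the standard question-answering pipeline follows (the question/context pair is illustrative, not from the original card):

```python
from transformers import pipeline

# Sketch: standard QA pipeline over the multilingual TydiQA (GoldP) checkpoint.
qa = pipeline(
    "question-answering",
    model="mrm8488/distilbert-multi-finetuned-for-xqua-on-tydiqa",
    tokenizer="mrm8488/distilbert-multi-finetuned-for-xqua-on-tydiqa"
)

print(qa({
    "context": "Manuel Romero has been working on multilingual question answering models lately.",
    "question": "What has Manuel Romero been working on?"
}))
# Expected output shape: {'answer': ..., 'score': ..., 'start': ..., 'end': ...}
```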
mrm8488/distilgpt2-finedtuned-meditations
2021-05-23T10:20:32.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
32
transformers
mrm8488/distilgpt2-finetuned-bookcopus-10
2021-05-23T10:21:22.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
25
transformers
mrm8488/distilgpt2-finetuned-reddit-tifu
2021-05-23T10:22:22.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
16
transformers
mrm8488/distilgpt2-finetuned-wsb-tweets
2021-05-23T10:23:17.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "en", "transformers", "wsb", "tweets", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "train_results.txt", "trainer_state.json", "training_args.bin", "vocab.json" ]
mrm8488
7
transformers
---
language: en
tags:
- wsb
- tweets
widget:
- text: "Come on guys this is"
---

# distilGPT-2 fine-tuned on Kaggle WSB Reddit posts dataset
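A hedged usage sketch with the `text-generation` pipeline, reusing the widget prompt above; the sampling settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="mrm8488/distilgpt2-finetuned-wsb-tweets")

print(generator("Come on guys this is", max_length=30, do_sample=True, num_return_sequences=1))
```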
mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
2021-05-20T00:46:49.000Z
[ "pytorch", "jax", "tfsavedmodel", "bert", "question-answering", "es", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "nbest_predictions_.json", "null_odds_.json", "predictions_.json", "pytorch_model.bin", "saved_model.tar.gz", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
13,155
transformers
--- language: es thumbnail: https://i.imgur.com/jgBdimh.png --- # BETO (Spanish BERT) + Spanish SQuAD2.0 + distillation using 'bert-base-multilingual-cased' as teacher This model is a fine-tuned on [SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve) and **distilled** version of [BETO](https://github.com/dccuchile/beto) for **Q&A**. Distillation makes the model **smaller, faster, cheaper and lighter** than [bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://github.com/huggingface/transformers/blob/master/model_cards/mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es/README.md) This model was fine-tuned on the same dataset but using **distillation** during the process as mentioned above (and one more train epoch). The **teacher model** for the distillation was `bert-base-multilingual-cased`. It is the same teacher used for `distilbert-base-multilingual-cased` AKA [**DistilmBERT**](https://github.com/huggingface/transformers/tree/master/examples/distillation) (on average is twice as fast as **mBERT-base**). ## Details of the downstream task (Q&A) - Dataset <details> [SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve) | Dataset | # Q&A | | ----------------------- | ----- | | SQuAD2.0 Train | 130 K | | SQuAD2.0-es-v2.0 | 111 K | | SQuAD2.0 Dev | 12 K | | SQuAD-es-v2.0-small Dev | 69 K | </details> ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash !export SQUAD_DIR=/path/to/squad-v2_spanish \ && python transformers/examples/distillation/run_squad_w_distillation.py \ --model_type bert \ --model_name_or_path dccuchile/bert-base-spanish-wwm-cased \ --teacher_type bert \ --teacher_name_or_path bert-base-multilingual-cased \ --do_train \ --do_eval \ --do_lower_case \ --train_file $SQUAD_DIR/train-v2.json \ --predict_file $SQUAD_DIR/dev-v2.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 5.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/model_output \ --save_steps 5000 \ --threads 4 \ --version_2_with_negative ``` ## Results: | Metric | # Value | | --------- | ----------- | | **Exact** | **90.77**48 | | **F1** | **94.94**71 | ```json { "exact": 90.77483309730933, "f1": 94.94714391266254, "total": 69202, "HasAns_exact": 86.60850599781898, "HasAns_f1": 92.90582885592328, "HasAns_total": 45850, "NoAns_exact": 98.95512161699212, "NoAns_f1": 98.95512161699212, "NoAns_total": 23352, "best_exact": 90.77483309730933, "best_exact_thresh": 0.0, "best_f1": 94.94714391266305, "best_f1_thresh": 0.0 } ``` ## Comparison: | Model | f1 score | | :-------------------------------------------------------------: | :-------: | | bert-base-spanish-wwm-cased-finetuned-spa-squad2-es | 86.07 | | **distill**-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es | **94.94** | So, yes, this version is even more accurate. ### Model in action Fast usage with **pipelines**: ```python from transformers import * # Important!: By now the QA pipeline is not compatible with fast tokenizer, but they are working on it. 
So that pass the object to the tokenizer {"use_fast": False} as in the following example: nlp = pipeline( 'question-answering', model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es', tokenizer=( 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es', {"use_fast": False} ) ) nlp( { 'question': '¿Para qué lenguaje está trabajando?', 'context': 'Manuel Romero está colaborando activamente con huggingface/transformers ' + 'para traer el poder de las últimas técnicas de procesamiento de lenguaje natural al idioma español' } ) # Output: {'answer': 'español', 'end': 169, 'score': 0.67530957344621, 'start': 163} ``` Play with this model and ```pipelines``` in a Colab: <a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Using_Spanish_BERT_fine_tuned_for_Q%26A_pipelines.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a> <details> 1. Set the context and ask some questions: ![Set context and questions](https://media.giphy.com/media/mCIaBpfN0LQcuzkA2F/giphy.gif) 2. Run predictions: ![Run the model](https://media.giphy.com/media/WT453aptcbCP7hxWTZ/giphy.gif) </details> More about ``` Huggingface pipelines```? check this Colab out: <a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Huggingface_pipelines_demo.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/distilroberta-finetuned-age_news-classification
2021-05-20T18:23:35.000Z
[ "pytorch", "jax", "roberta", "text-classification", "en", "dataset:ag_news", "transformers", "news", "classification" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
176
transformers
---
language: en
tags:
- news
- classification
datasets:
- ag_news
widget:
- text: "Venezuela Prepares for Chavez Recall Vote Supporters and rivals warn of possible fraud; government says Chavez's defeat could produce turmoil in world oil market."
---

# distilroberta-base fine-tuned on the ag_news dataset for news classification

Test set accuracy: 0.94
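A minimal sketch of how the classifier could be used (the headline is the widget example above; label names come from the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-age_news-classification"
)

print(classifier(
    "Venezuela Prepares for Chavez Recall Vote Supporters and rivals warn of possible fraud; "
    "government says Chavez's defeat could produce turmoil in world oil market."
))
# e.g. [{'label': ..., 'score': ...}] — one of the four AG News topics
```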
mrm8488/distilroberta-finetuned-squadv1
2021-05-20T18:24:24.000Z
[ "pytorch", "jax", "roberta", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
20
transformers
mrm8488/distilroberta-finetuned-tweets-hate-speech
2021-05-20T18:25:15.000Z
[ "pytorch", "jax", "roberta", "text-classification", "en", "dataset:tweets_hate_speech_detection", "transformers", "twitter", "hate", "speech" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
66
transformers
---
language: en
tags:
- twitter
- hate
- speech
datasets:
- tweets_hate_speech_detection
widget:
- text: "the fuck done with #mansplaining and other bullshit."
---

# distilroberta-base fine-tuned on the tweets_hate_speech_detection dataset for hate speech detection

Validation accuracy: 0.98
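A usage sketch without the pipeline API, assuming the standard sequence-classification head; the tweet is the widget example above, and the label order should be checked against the model config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "mrm8488/distilroberta-finetuned-tweets-hate-speech"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("the fuck done with #mansplaining and other bullshit.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)
print(probs)  # per-class probabilities; see model.config.id2label for the label order
```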
mrm8488/electra-base-finetuned-squadv1
2020-12-11T21:53:55.000Z
[ "pytorch", "electra", "question-answering", "en", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
385
transformers
--- language: en --- # Electra base ⚡ + SQuAD v1 ❓ [Electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. ## Details of the downstream task (Q&A) - Dataset 📚 **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path 'google/electra-base-discriminator' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v1.1.json' \ --predict_file '/content/dataset/dev-v1.1.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/output' \ --overwrite_output_dir \ --save_steps 1000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **83.03** | | **F1** | **90.77** | | **Size**| **+ 400 MB** | Very good metrics for such a "small" model! ```json { 'exact': 83.03689687795648, 'f1': 90.77486052446231, 'total': 10570, 'HasAns_exact': 83.03689687795648, 'HasAns_f1': 90.77486052446231, 'HasAns_total': 10570, 'best_exact': 83.03689687795648, 'best_exact_thresh': 0.0, 'best_f1': 90.77486052446231, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-base-finetuned-squadv1') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'What has been discovered by scientists from China ?' }) # Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.9995211430099182, 'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electra-base-finetuned-squadv2
2020-06-27T16:29:36.000Z
[ "pytorch", "electra", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "nbest_predictions_.json", "null_odds_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
89
transformers
mrm8488/electra-large-finetuned-squadv1
2020-07-01T10:16:16.000Z
[ "pytorch", "electra", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
84
transformers
mrm8488/electra-small-finetuned-squadv1
2020-12-11T21:53:59.000Z
[ "pytorch", "electra", "question-answering", "en", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
132
transformers
--- language: en --- # Electra small ⚡ + SQuAD v1 ❓ [Electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. ## Details of the downstream task (Q&A) - Dataset 📚 **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path 'google/electra-small-discriminator' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v1.1.json' \ --predict_file '/content/dataset/dev-v1.1.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/output' \ --overwrite_output_dir \ --save_steps 1000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **77.70** | | **F1** | **85.74** | | **Size**| **50 MB** | Very good metrics for such a "small" model! ```json { 'exact': 77.70104068117313, 'f1': 85.73991234187997, 'total': 10570, 'HasAns_exact': 77.70104068117313, 'HasAns_f1': 85.73991234187997, 'HasAns_total': 10570, 'best_exact': 77.70104068117313, 'best_exact_thresh': 0.0, 'best_f1': 85.73991234187997, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-small-finetuned-squadv1') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'What has been discovered by scientists from China ?' }) # Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.7950334108113424, 'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electra-small-finetuned-squadv2
2020-12-11T21:54:01.000Z
[ "pytorch", "electra", "question-answering", "en", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "nbest_predictions_.json", "null_odds_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
507
transformers
--- language: en --- # Electra small ⚡ + SQuAD v2 ❓ [Electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) fine-tuned on [SQUAD v2.0 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. ## Details of the downstream task (Q&A) - Dataset 📚 **SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path 'google/electra-small-discriminator' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v2.0.json' \ --predict_file '/content/dataset/dev-v2.0.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/output' \ --overwrite_output_dir \ --save_steps 1000 \ --version_2_with_negative ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **69.71** | | **F1** | **73.44** | | **Size**| **50 MB** | ```json { 'exact': 69.71279373368147, 'f1': 73.4439546123672, 'total': 11873, 'HasAns_exact': 69.92240215924427, 'HasAns_f1': 77.39542393937836, 'HasAns_total': 5928, 'NoAns_exact': 69.50378469301934, 'NoAns_f1': 69.50378469301934, 'NoAns_total': 5945, 'best_exact': 69.71279373368147, 'best_exact_thresh': 0.0, 'best_f1': 73.44395461236732, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-base-finetuned-squadv2') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'What has been discovered by scientists from China ?' }) # Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.8650811568752914, 'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electricidad-base-discriminator
2020-12-11T21:54:04.000Z
[ "pytorch", "electra", "pretraining", "es", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "vocab.txt" ]
mrm8488
55
transformers
--- language: es thumbnail: https://i.imgur.com/uxAvBfh.png --- ## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh) **Electricidad-base-discriminator** (uncased) is a ```base``` Electra like model (discriminator in this case) trained on a + 20 GB of the [OSCAR](https://oscar-corpus.com/) Spanish corpus. As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB): **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ## Model details ⚙ |Name| # Value| |-----|--------| |Layers| 12 | |Hidden |768 | |Params| 110M| ## Evaluation metrics (for discriminator) 🧾 |Metric | # Score | |-------|---------| |Accuracy| 0.985| |Precision| 0.726| |AUC | 0.922| ## Fast example of usage 🚀 ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("/content/electricidad-base-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("/content/electricidad-base-discriminator") sentence = "El rápido zorro marrón salta sobre el perro perezoso" fake_sentence = "El rápido zorro marrón amar sobre el perro perezoso" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % prediction, end="") for prediction in predictions.tolist()] # Output: ''' el rapido zorro marro ##n amar sobre el perro pere ##zoso 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0[None, None, None, None, None, None, None, None, None, None, None, None, None ''' ``` As you can see there are **1s** in the places where the model detected a fake token. So, it works! 🎉 ### Some models fine-tuned on a downstream task 🛠️ [Question Answering](https://huggingface.co/mrm8488/electricidad-base-finetuned-squadv1-es) [POS](https://huggingface.co/mrm8488/electricidad-base-finetuned-pos) [NER](https://huggingface.co/mrm8488/electricidad-base-finetuned-ner) [Paraphrase Identification](https://huggingface.co/mrm8488/RuPERTa-base-finetuned-pawsx-es) ## Acknowledgments I thank [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (specially to [Julien Chaumond](https://twitter.com/julien_c)). > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electricidad-base-finetuned-muchocine
2021-01-06T19:23:20.000Z
[ "pytorch", "electra", "text-classification", "es", "dataset:muchocine", "transformers", "sentiment", "analysis", "spanish" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
4,060
transformers
---
language: es
datasets:
- muchocine
widget:
- text: "Una buena película, sin más."
tags:
- sentiment
- analysis
- spanish
---

# Electricidad-base fine-tuned for (Spanish) Sentiment Analysis 🎞️👍👎

[Electricidad](https://huggingface.co/mrm8488/electricidad-base-discriminator) base fine-tuned on the [muchocine](https://huggingface.co/datasets/muchocine) dataset for the Spanish **Sentiment Analysis** downstream task.

## Fast usage with `pipelines` 🚀

```python
# pip install -q transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

CHKPT = 'mrm8488/electricidad-base-finetuned-muchocine'
model = AutoModelForSequenceClassification.from_pretrained(CHKPT)
tokenizer = AutoTokenizer.from_pretrained(CHKPT)

classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

# It rates reviews between 1 and 5 (stars)
classifier('Es una obra mestra. Brillante.')
# [{'label': '5', 'score': 0.9498381614685059}]

classifier('Es una película muy buena.')
# [{'label': '4', 'score': 0.9277070760726929}]

classifier('Una buena película, sin más.')
# [{'label': '3', 'score': 0.9768431782722473}]

classifier('Esperaba mucho más.')
# [{'label': '2', 'score': 0.7063605189323425}]

classifier('He tirado el dinero. Una basura. Vergonzoso.')
# [{'label': '1', 'score': 0.8494752049446106}]
```
mrm8488/electricidad-base-finetuned-ner
2020-08-24T16:31:14.000Z
[ "pytorch", "electra", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
14
transformers
mrm8488/electricidad-base-finetuned-pawsx-es
2021-04-28T15:52:25.000Z
[ "pytorch", "electra", "text-classification", "es", "dataset:xtreme", "transformers", "nli" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
36
transformers
---
language: es
datasets:
- xtreme
tags:
- nli
widget:
- text: "El río Tabaci es una vertiente del río Leurda en Rumania. El río Leurda es un afluente del río Tabaci en Rumania."
---

# Electricidad-base fine-tuned on PAWS-X-es for Paraphrase Identification (NLI)
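Since the card is brief, here is a hedged sketch of scoring a sentence pair (the pair is the widget example above; the class-index-to-label mapping should be read from `model.config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "mrm8488/electricidad-base-finetuned-pawsx-es"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

s1 = "El río Tabaci es una vertiente del río Leurda en Rumania."
s2 = "El río Leurda es un afluente del río Tabaci en Rumania."

inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

print(probs)                  # scores for "not paraphrase" vs. "paraphrase"
print(model.config.id2label)  # check which index is which
```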
mrm8488/electricidad-base-finetuned-pos
2020-08-24T16:52:15.000Z
[ "pytorch", "electra", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
22
transformers
mrm8488/electricidad-base-finetuned-squadv1-es
2020-08-21T22:38:53.000Z
[ "pytorch", "electra", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
10
transformers
mrm8488/electricidad-base-generator
2020-12-11T21:54:10.000Z
[ "pytorch", "electra", "masked-lm", "es", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "vocab.txt" ]
mrm8488
26
transformers
--- language: es thumbnail: https://i.imgur.com/uxAvBfh.png widget: - text: "Madrid es una ciudad muy [MASK] en España." --- ## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh) **Electricidad-base-generator** (uncased) is a ```base``` Electra like model (generator in this case) trained on a + 20 GB of the [OSCAR](https://oscar-corpus.com/) Spanish corpus. As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB): **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ## Fast example of usage 🚀 ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/electricidad-base-generator", tokenizer="mrm8488/electricidad-base-generator" ) print( fill_mask(f"HuggingFace está creando {fill_mask.tokenizer.mask_token} que la comunidad usa para resolver tareas de NLP.") ) # Output: [{'sequence': '[CLS] huggingface esta creando herramientas que la comunidad usa para resolver tareas de nlp. [SEP]', 'score': 0.0896105170249939, 'token': 8760, 'token_str': 'herramientas'}, ...] ``` ## Acknowledgments I thank [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (specially to [Julien Chaumond](https://twitter.com/julien_c)). > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electricidad-small-discriminator
2020-12-11T21:54:14.000Z
[ "pytorch", "electra", "pretraining", "es", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "vocab.txt" ]
mrm8488
16
transformers
--- language: es thumbnail: https://i.imgur.com/uxAvBfh.png --- ## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh) **ELECTRICIDAD** is a small Electra like model (discriminator in this case) trained on a + 20 GB of the [OSCAR](https://oscar-corpus.com/) Spanish corpus. As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB): **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ## Model details ⚙ |Param| # Value| |-----|--------| |Layers| 12 | |Hidden |256 | |Params| 14M| ## Evaluation metrics (for discriminator) 🧾 |Metric | # Score | |-------|---------| |Accuracy| 0.94| |Precision| 0.76| |AUC | 0.92| ## Benchmarks 🔨 WIP 🚧 ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-small-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-small-discriminator") sentence = "el zorro rojo es muy rápido" fake_sentence = "el zorro rojo es muy ser" fake_tokens = tokenizer.tokenize(sentence) fake_inputs = tokenizer.encode(sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]] # Output: ''' el zorro rojo es muy ser 0 0 0 0 0 1[None, None, None, None, None, None] ''' ``` As you can see there is a **1** in the place where the model detected the fake token (**ser**). So, it works! 🎉 ## Acknowledgments I thank [🤗/transformers team](https://github.com/huggingface/transformers) for answering my doubts and Google for helping me with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program. > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electricidad-small-finetuned-muchocine
2021-01-09T04:46:14.000Z
[ "pytorch", "electra", "text-classification", "es", "dataset:muchocine", "transformers", "sentiment", "analysis", "spanish" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
19
transformers
---
language: es
datasets:
- muchocine
widget:
- text: "Una buena película, sin más."
tags:
- sentiment
- analysis
- spanish
---

# Electricidad-small fine-tuned for (Spanish) Sentiment Analysis 🎞️👍👎

[Electricidad](https://huggingface.co/mrm8488/electricidad-small-discriminator) small fine-tuned on the [muchocine](https://huggingface.co/datasets/muchocine) dataset for the Spanish **Sentiment Analysis** downstream task.

## Fast usage with `pipelines` 🚀

```python
# pip install -q transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

CHKPT = 'mrm8488/electricidad-small-finetuned-muchocine'
model = AutoModelForSequenceClassification.from_pretrained(CHKPT)
tokenizer = AutoTokenizer.from_pretrained(CHKPT)

classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

# It rates reviews between 1 and 5 (stars)
classifier('Es una obra mestra. Brillante.')
classifier('Es una película muy buena.')
classifier('Una buena película, sin más.')
classifier('Esperaba mucho más.')
classifier('He tirado el dinero. Una basura. Vergonzoso.')
```
mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis
2021-06-15T16:54:27.000Z
[ "pytorch", "electra", "text-classification", "es", "transformers", "restaurant", "classification", "reviews" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
24
transformers
---
language: es
tags:
- restaurant
- classification
- reviews
widget:
- text: "No está a la altura, no volveremos."
---

# Electricidad-small fine-tuned on a restaurant review sentiment analysis dataset

Test set accuracy: 0.86
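A minimal, hedged usage sketch with the widget review above (label names depend on the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis"
)

print(classifier("No está a la altura, no volveremos."))
```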
mrm8488/electricidad-small-finetuned-squadv1-es
2020-12-11T21:54:17.000Z
[ "pytorch", "electra", "question-answering", "es", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
581
transformers
--- language: es thumbnail: https://imgur.com/uxAvBfh --- # Electricidad small + Spanish SQuAD v1 ⚡❓ [Electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) fine-tuned on [Spanish SQUAD v1.1 dataset](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Dataset 📚 [SQuAD-es-v1.1](https://github.com/ccasimiro88/TranslateAlignRetrieve/tree/master/SQuAD-es-v1.1) | Dataset split | # Samples | | ------------- | --------- | | Train | 130 K | | Test | 11 K | ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python /content/transformers/examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path 'mrm8488/electricidad-small-discriminator' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v1.1-es.json' \ --predict_file '/content/dataset/dev-v1.1-es.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/electricidad-small-finetuned-squadv1-es' \ --overwrite_output_dir \ --save_steps 1000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **46.82** | | **F1** | **64.79** | ```json { 'exact': 46.82119205298013, 'f1': 64.79435260021918, 'total': 10570, 'HasAns_exact': 46.82119205298013, HasAns_f1': 64.79435260021918, 'HasAns_total': 10570, 'best_exact': 46.82119205298013, 'best_exact_thresh': 0.0, 'best_f1': 64.79435260021918, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/electricidad-small-finetuned-squadv1-es", tokenizer="mrm8488/electricidad-small-finetuned-squadv1-es" ) context = "Manuel ha creado una versión del modelo Electra small en español que alcanza una puntuación F1 de 65 en el dataset SQUAD-es y sólo pesa 50 MB" q1 = "Cuál es su marcador F1?" q2 = "¿Cuál es el tamaño del modelo?" q3 = "¿Quién lo ha creado?" q4 = "¿Que es lo que ha hecho Manuel?" questions = [q1, q2, q3, q4] for question in questions: result = qa_pipeline({ 'context': context, 'question': question}) print(result) # Output: {'score': 0.14836778166355025, 'start': 98, 'end': 100, 'answer': '65'} {'score': 0.32219420810758237, 'start': 136, 'end': 140, 'answer': '50 MB'} {'score': 0.9672326951118713, 'start': 0, 'end': 6, 'answer': 'Manuel'} {'score': 0.23552458113848118, 'start': 10, 'end': 53, 'answer': 'creado una versión del modelo Electra small'} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electricidad-small-finetuned-xnli-es
2021-04-29T18:34:29.000Z
[ "pytorch", "electra", "text-classification", "es", "dataset:xnli", "transformers", "spanish", "nli", "xnli", "license:mit" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
298
transformers
---
language: es
tags:
- spanish
- nli
- xnli
datasets:
- xnli
license: mit
widget:
- text: "Por favor, no piensen en darnos dinero. Por favor, considere piadosamente cuanto puede dar."
---

# electricidad-small-finetuned-xnli-es
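XNLI is a premise/hypothesis task, so the model expects sentence pairs. A hedged sketch (the pair below simply splits the widget text; the mapping of indices to entailment/neutral/contradiction should be taken from `model.config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "mrm8488/electricidad-small-finetuned-xnli-es"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

premise = "Por favor, no piensen en darnos dinero."
hypothesis = "Por favor, considere piadosamente cuanto puede dar."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

print(probs, model.config.id2label)
```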
mrm8488/electrovid19-small
2020-06-01T07:50:12.000Z
[ "pytorch", "electra", "pretraining", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin", "vocab.txt" ]
mrm8488
13
transformers
mrm8488/es-tinybert-v1-1
2021-05-20T00:47:22.000Z
[ "pytorch", "jax", "bert", "transformers" ]
[ ".gitattributes", "added_tokens.json", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
10
transformers
mrm8488/es-tinybert-v1
2021-05-20T00:47:48.000Z
[ "pytorch", "jax", "bert", "transformers" ]
[ ".gitattributes", "added_tokens.json", "config.json", "flax_model.msgpack", "model.bin", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
mrm8488
30
transformers
mrm8488/funnel-transformer-intermediate-mnli
2020-11-09T00:09:39.000Z
[ "pytorch", "funnel", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
mrm8488
19
transformers
mrm8488/gpt2-finetuned-recipes-cooking
2021-05-23T10:24:14.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "en", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
34
transformers
--- language: en thumbnail: widget: - text: "HuggingFace Cake:" ---
mrm8488/gpt2-finetuned-recipes-cooking_v2
2021-05-23T10:25:08.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "en", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
59
transformers
--- language: en thumbnail: widget: - text: "HuggingFace Cake:" ---
mrm8488/gpt2-finetuned-reddit-tifu
2021-05-23T10:26:20.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
42
transformers
mrm8488/gpt2-imdb-neg
2021-05-23T10:27:14.000Z
[ "pytorch", "gpt2", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
25
transformers
# GPT2-IMDB-neg (LM + RL) 🎞😡✍ All credits to [@lvwerra](https://twitter.com/lvwerra) ## What is it? A small GPT2 (`lvwerra/gpt2-imdb`) language model fine-tuned to produce **negative** movie reviews based the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). The model is trained with rewards from a BERT sentiment classifier (`lvwerra/gpt2-imdb`) via **PPO**. ## Why? I wanted to reproduce the experiment [lvwerra/gpt2-imdb-pos](https://huggingface.co/lvwerra/gpt2-imdb-pos) but for generating **negative** movie reviews. ## Training setting The model was trained for `100` optimisation steps with a batch size of `256` which corresponds to `25600` training samples. The full experiment setup (for positive samples) in [trl repo](https://lvwerra.github.io/trl/04-gpt2-sentiment-ppo-training/). ## Examples A few examples of the model response to a query before and after optimisation: | query | response (before) | response (after) | rewards (before) | rewards (after) | |-------|-------------------|------------------|------------------|-----------------| |This movie is a fine | attempt as far as live action is concerned, n...|example of how bad Hollywood in theatrics pla...| 2.118391 | -3.31625| |I have watched 3 episodes |with this guy and he is such a talented actor...| but the show is just plain awful and there ne...| 2.681171| -4.512792| |We know that firefighters and| police officers are forced to become populari...| other chains have going to get this disaster ...| 1.367811| -3.34017| ## Training logs and metrics <img src="https://gblobscdn.gitbook.com/spaces%2F-Lqya5RvLedGEWPhtkjU%2Favatar.png?alt=media" width="25" height="25"> Watch the whole training logs and metrics on [W&B](https://app.wandb.ai/mrm8488/gpt2-sentiment-negative?workspace=user-mrm8488) > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
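To sample negative reviews from the tuned policy, a plain `generate` call is enough. A hedged sketch; the prompt and sampling settings are illustrative only:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

ckpt = "mrm8488/gpt2-imdb-neg"
tokenizer = GPT2Tokenizer.from_pretrained(ckpt)
model = GPT2LMHeadModel.from_pretrained(ckpt)

input_ids = tokenizer.encode("This movie is a fine", return_tensors="pt")
output = model.generate(input_ids, max_length=40, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```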
mrm8488/gpt2-imdb-neutral
2021-05-23T10:27:46.000Z
[ "pytorch", "gpt2", "en", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
19
transformers
--- language: en thumbnail: --- # GPT2-IMDB-neutral (LM + RL) 🎞😐✍ ## What is it? A small GPT2 (`lvwerra/gpt2-imdb`) language model fine-tuned to produce **neutral**-ish movie reviews based on the [IMDB dataset](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews). The model is trained with rewards from a BERT sentiment classifier (`lvwerra/gpt2-imdb`) via **PPO**. ## Why? After reproducing the experiment [lvwerra/gpt2-imdb-pos](https://huggingface.co/lvwerra/gpt2-imdb-pos) but for generating **negative** movie reviews ([mrm8488/gpt2-imdb-neg](https://huggingface.co/mrm8488/gpt2-imdb-neg)) I wanted to check if I could generate neutral-ish movie reviews. So, based on the classifier output (logit), I saw that clearly negative reviews gives around *-4* values and clearly positive reviews around *4*. Then, it was esay to establish an interval ```[-1.75,1.75]``` that it could be considered as **neutral**. So if the classifier output was in that interval I gave it a positive reward while values out of the interval got a negative reward. ## Training setting The model was trained for `100` optimisation steps with a batch size of `128` which corresponds to `30000` training samples. The full experiment setup (for positive samples) in [trl repo](https://lvwerra.github.io/trl/04-gpt2-sentiment-ppo-training/). ## Examples A few examples of the model response to a query before and after optimisation: | query | response (before) | response (after) | rewards (before) | rewards (after) | |-------|-------------------|------------------|------------------|-----------------| |Okay, my title is|partly over, but this drama still makes me proud to read its first 40...|weird. The title is "mana were, ahunter". "Man...|4.200727 |-1.891443| |Where is it written that|there is a monster in this movie anyway? How is it that the entire|[ of the women in the recent women of jungle business between Gender and husband| -3.113942| -1.944993| |As a lesbian, I|cannot believe I was in the Sixties! Subtle yet witty, with original| found it hard to get responsive. In fact I found myself with the long| 3.906178| 0.769166| |The Derek's have over|three times as many acting hours than Jack Nicholson? You think bitches?|30 dueling characters and kill of, they retreat themselves to their base.|-2.503655| -1.898380| > All credits to [@lvwerra](https://twitter.com/lvwerra) > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/longformer-base-4096-finetuned-squadv2
2021-06-08T11:22:33.000Z
[ "pytorch", "tf", "longformer", "question-answering", "en", "dataset:squad_v2", "arxiv:2004.05150", "transformers", "QA", "long context", "Q&A" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
44,549
transformers
---
language: en
datasets:
- squad_v2
tags:
- QA
- long context
- Q&A
---

# Longformer-base-4096 fine-tuned on SQuAD v2

[Longformer-base-4096 model](https://huggingface.co/allenai/longformer-base-4096) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task.

## Longformer-base-4096

[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. `longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint and pretrained for MLM on long documents. It supports sequences of length up to 4,096. Longformer uses a combination of sliding window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

Dataset ID: ```squad_v2``` from [HuggingFace/Datasets](https://github.com/huggingface/datasets)

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| squad_v2 | train | 130319    |
| squad_v2 | valid | 11873     |

How to load it from [datasets](https://github.com/huggingface/datasets):

```python
!pip install datasets
from datasets import load_dataset
dataset = load_dataset('squad_v2')
```

Check out more about this dataset and others in [Datasets Viewer](https://huggingface.co/datasets/viewer/)

## Model fine-tuning 🏋️‍

The training script is a slightly modified version of [this one](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing)

## Model in Action 🚀

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

ckpt = "mrm8488/longformer-base-4096-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt)

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"

encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]

# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]

start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())

answer_tokens = all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))

# output => democratized NLP
```

## Usage with HF `pipeline`

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

ckpt = "mrm8488/longformer-base-4096-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done?"

qa({"question": question, "context": text})
```

If, given the same context, we ask something that is not there, the output for **no answer** will be ```<s>```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain

[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/Y8Y3VYYE)
mrm8488/mT5-small-finetuned-multi-question-generation
2020-11-23T10:13:23.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
17
transformers
mrm8488/mT5-small-finetuned-tydiqa-for-xqa
2020-12-11T21:54:38.000Z
[ "pytorch", "t5", "seq2seq", "multilingual", "dataset:tydiqa", "arxiv:2010.11934", "transformers", "question-answering", "pipeline_tag:question-answering", "text2text-generation" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json", "training_args.bin" ]
mrm8488
212
transformers
--- language: multilingual datasets: - tydiqa pipeline_tag: question-answering --- # mT5-small fine-tuned on TyDiQA for multilingual QA 🗺📖❓ [Google's mT5-small](https://huggingface.co/google/mt5-small) fine-tuned on [TyDi QA](https://huggingface.co/nlp/viewer/?dataset=tydiqa&config=secondary_task) (secondary task) for **multingual Q&A** downstream task. ## Details of mT5 [Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Details of the dataset 📚 **TyDi QA** is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). 
| Dataset | Task  | Split | # samples |
| ------- | ----- | ----- | --------- |
| TyDi QA | GoldP | train | 49881     |
| TyDi QA | GoldP | valid | 5077      |

## Results on validation dataset 📝

| Metric | # Value   |
| ------ | --------- |
| **EM** | **41.65** |

## Model in Action 🚀

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
# mT5 is a seq2seq (encoder-decoder) model
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa").to(device)

def get_response(question, context, max_length=32):
    input_text = 'question: %s context: %s' % (question, context)
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'].to(device),
                            attention_mask=features['attention_mask'].to(device),
                            max_length=max_length)
    return tokenizer.decode(output[0])

# Some examples in different languages

context = 'HuggingFace won the best Demo paper at EMNLP2020.'
question = 'What won HuggingFace?'
get_response(question, context)

context = 'HuggingFace ganó la mejor demostración con su paper en la EMNLP2020.'
question = 'Qué ganó HuggingFace?'
get_response(question, context)

context = 'HuggingFace выиграл лучшую демонстрационную работу на EMNLP2020.'
question = 'Что победило в HuggingFace?'
get_response(question, context)
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/mbart-large-finetuned-bible-es-en-translation
2021-01-14T22:32:54.000Z
[ "pytorch", "mbart", "seq2seq", "es", "en", "dataset:bible_para", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
mrm8488
26
transformers
---
tags:
- translation
language:
- es
- en
datasets:
- bible_para
---

### mbart-large-es-en

This is mbart-large-cc25, fine-tuned on bible_para for Spanish to English translation.

It scores BLEU **29.34**.
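A hedged inference sketch. mBART checkpoints can be sensitive to language-code handling, so depending on how this checkpoint was fine-tuned you may also need to set `tokenizer.src_lang` or the decoder start token; the verse below is just an illustrative input:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "mrm8488/mbart-large-finetuned-bible-es-en-translation"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

text = "En el principio creó Dios los cielos y la tierra."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# If the output looks off, try forcing the target language code, e.g.:
# outputs = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
```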
mrm8488/mbart-large-finetuned-opus-en-es-translation
2021-01-26T12:24:37.000Z
[ "pytorch", "mbart", "seq2seq", "en", "es", "dataset:opus100", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin" ]
mrm8488
159
transformers
---
tags:
- translation
language:
- en
- es
datasets:
- opus100
---

### mbart-large-en-es

This is mbart-large-cc25, fine-tuned on opus100 for English to Spanish translation.

It scores BLEU **32.54** on the test set.
mrm8488/mbart-large-finetuned-opus-es-en-translation
2021-01-23T07:54:59.000Z
[ "pytorch", "mbart", "seq2seq", "es", "en", "dataset:opus100", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin" ]
mrm8488
47
transformers
---
tags:
- translation
language:
- es
- en
datasets:
- opus100
---

### mbart-large-es-en

This is mbart-large-cc25, fine-tuned on opus100 for Spanish to English translation.

It scores BLEU **28.25** on the validation set and BLEU **28.28** on the test set.
mrm8488/mbart-large-finetuned-opus-it-en-translation
2021-01-27T13:19:19.000Z
[ "pytorch", "mbart", "seq2seq", "it", "en", "dataset:opus100", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin" ]
mrm8488
46
transformers
---
tags:
- translation
language:
- it
- en
datasets:
- opus100
---

### mbart-large-it-en

This is mbart-large-cc25, fine-tuned on opus100 for Italian to English translation.

It scores BLEU **25.82** on the test set.
mrm8488/mobilebert-finetuned-ner
2021-01-30T11:42:05.000Z
[ "pytorch", "mobilebert", "token-classification", "en", "transformers", "ner", "license:mit" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
523
transformers
---
language: en
tags:
- mobilebert
- ner
license: mit
---
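The card itself carries no description or usage example; below is a minimal sketch of how one might query the checkpoint through the token-classification pipeline. The example sentence is illustrative, and the entity label set is not documented in the card:

```python
from transformers import pipeline

# Load the fine-tuned MobileBERT NER checkpoint as a token-classification pipeline
ner = pipeline("ner", model="mrm8488/mobilebert-finetuned-ner")

# Illustrative input; each returned dict holds the token, its predicted tag and a score
print(ner("Manuel Romero lives in Murcia, Spain."))
```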
mrm8488/mobilebert-finetuned-pos
2021-03-12T08:08:35.000Z
[ "pytorch", "rust", "mobilebert", "token-classification", "en", "transformers", "pos", "license:mit" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "eval_results.txt", "pytorch_model.bin", "rust_model.ot", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
702
transformers
---
language: en
tags:
- mobilebert
- pos
license: mit
---
mrm8488/mobilebert-uncased-finetuned-squadv1
2020-12-11T21:54:41.000Z
[ "pytorch", "mobilebert", "question-answering", "en", "dataset:squad", "arxiv:2004.02984", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
70
transformers
---
language: en
datasets:
- squad
---

# MobileBERT + SQuAD (v1.1) 📱❓

[mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) fine-tuned on [SQuAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

**MobileBERT** is a thin version of *BERT_LARGE*, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.

The checkpoint used here is the original MobileBert Optimized Uncased English (uncased_L-24_H-128_B-512_A-4_F-4_OPT) checkpoint.

More about the model [here](https://arxiv.org/abs/2004.02984)

## Details of the downstream task (Q&A) - Dataset 📚

**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.

SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type bert \
  --model_name_or_path 'google/mobilebert-uncased' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file '/content/dataset/train-v1.1.json' \
  --predict_file '/content/dataset/dev-v1.1.json' \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 5 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir '/content/output' \
  --overwrite_output_dir \
  --save_steps 1000
```

It is worth noting that this model converges much faster than comparable ones, so it is also cheap to fine-tune.

## Test set Results 🧾

| Metric   | # Value   |
| -------- | --------- |
| **EM**   | **82.33** |
| **F1**   | **89.64** |
| **Size** | **94 MB** |

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/mobilebert-uncased-finetuned-squadv1')

QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'Who did identified it ?'
})
# Output: {'answer': 'scientists.', 'end': 106, 'score': 0.7885545492172241, 'start': 96}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/mobilebert-uncased-finetuned-squadv2
2020-12-11T21:54:44.000Z
[ "pytorch", "mobilebert", "question-answering", "en", "dataset:squad_v2", "arxiv:2004.02984", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
1,479
transformers
---
language: en
datasets:
- squad_v2
---

# MobileBERT + SQuAD v2 📱❓

[mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) fine-tuned on [SQuAD v2.0 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

**MobileBERT** is a thin version of *BERT_LARGE*, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.

The checkpoint used here is the original MobileBert Optimized Uncased English (uncased_L-24_H-128_B-512_A-4_F-4_OPT) checkpoint.

More about the model [here](https://arxiv.org/abs/2004.02984)

## Details of the downstream task (Q&A) - Dataset 📚

**SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type bert \
  --model_name_or_path 'google/mobilebert-uncased' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file '/content/dataset/train-v2.0.json' \
  --predict_file '/content/dataset/dev-v2.0.json' \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 5 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir '/content/output' \
  --overwrite_output_dir \
  --save_steps 1000 \
  --version_2_with_negative
```

It is worth noting that this model converges much faster than comparable ones, so it is also cheap to fine-tune.

## Test set Results 🧾

| Metric   | # Value   |
| -------- | --------- |
| **EM**   | **75.37** |
| **F1**   | **78.48** |
| **Size** | **94 MB** |

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/mobilebert-uncased-finetuned-squadv2')

QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'Who did identified it ?'
})
# Output: {'answer': 'scientists.', 'end': 106, 'score': 0.41531604528427124, 'start': 96}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/prunebert-base-uncased-finepruned-topK-squadv2
2020-06-16T11:16:59.000Z
[ "pytorch", "masked_bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "eval_results.txt", "nbest_predictions_.json", "null_odds_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
29
transformers
mrm8488/prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa
2020-06-13T10:57:01.000Z
[ "pytorch", "masked_bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "eval_results.txt", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
16
transformers
mrm8488/prunebert-multi-uncased-finepruned-magnitude-tydiqa-for-xqa
2020-06-10T17:09:21.000Z
[ "pytorch", "masked_bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "eval_results.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
26
transformers
mrm8488/prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa
2020-06-10T17:24:44.000Z
[ "pytorch", "tensorboard", "masked_bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "events.out.tfevents.1591284059.ca2dc1ff53db.371.0", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
15
transformers
mrm8488/prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa
2020-06-15T12:20:19.000Z
[ "pytorch", "masked_bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "eval_results.txt", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
9
transformers
mrm8488/prunebert-multi-uncased-finepruned-tydiqa-for-xqa
2020-06-02T12:12:53.000Z
[ "pytorch", "masked_bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
mrm8488
22
transformers
mrm8488/roberta-base-1B-1-finetuned-squadv1
2021-05-20T18:26:13.000Z
[ "pytorch", "jax", "roberta", "question-answering", "en", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
22
transformers
---
language: en
---

# RoBERTa-base (1B-1) + SQuAD v1 ❓

[roberta-base-1B-1](https://huggingface.co/nyu-mll/roberta-base-1B-1) fine-tuned on [SQuAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

RoBERTa Pretrained on Smaller Datasets

[NYU Machine Learning for Language](https://huggingface.co/nyu-mll) pretrained RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). They released the 3 models with the lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: they combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1.

## Details of the downstream task (Q&A) - Dataset 📚

**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.

SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type roberta \
  --model_name_or_path 'nyu-mll/roberta-base-1B-1' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file /content/dataset/train-v1.1.json \
  --predict_file /content/dataset/dev-v1.1.json \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 10 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content/output \
  --overwrite_output_dir \
  --save_steps 1000
```

## Test set Results 🧾

| Metric | # Value   |
| ------ | --------- |
| **EM** | **72.62** |
| **F1** | **82.19** |

```json
{
  "exact": 72.62062440870388,
  "f1": 82.19430877136834,
  "total": 10570,
  "HasAns_exact": 72.62062440870388,
  "HasAns_f1": 82.19430877136834,
  "HasAns_total": 10570,
  "best_exact": 72.62062440870388,
  "best_exact_thresh": 0.0,
  "best_f1": 82.19430877136834,
  "best_f1_thresh": 0.0
}
```

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/roberta-base-1B-1-finetuned-squadv1')

QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'What has been discovered by scientists from China ?'
})
# Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.04702283976040074, 'start': 0}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/roberta-base-1B-1-finetuned-squadv2
2021-05-20T18:27:20.000Z
[ "pytorch", "jax", "roberta", "question-answering", "en", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "nbest_predictions_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
29
transformers
---
language: en
---

# RoBERTa-base (1B-1) + SQuAD v2 ❓

[roberta-base-1B-1](https://huggingface.co/nyu-mll/roberta-base-1B-1) fine-tuned on [SQuAD v2 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for **Q&A** downstream task.

## Details of the downstream task (Q&A) - Model 🧠

RoBERTa Pretrained on Smaller Datasets

[NYU Machine Learning for Language](https://huggingface.co/nyu-mll) pretrained RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). They released the 3 models with the lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: they combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1.

## Details of the downstream task (Q&A) - Dataset 📚

**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.

**SQuAD2.0** combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.

## Model training 🏋️‍

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type roberta \
  --model_name_or_path 'nyu-mll/roberta-base-1B-1' \
  --do_eval \
  --do_train \
  --do_lower_case \
  --train_file /content/dataset/train-v2.0.json \
  --predict_file /content/dataset/dev-v2.0.json \
  --per_gpu_train_batch_size 16 \
  --learning_rate 3e-5 \
  --num_train_epochs 10 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content/output \
  --overwrite_output_dir \
  --save_steps 1000 \
  --version_2_with_negative
```

## Test set Results 🧾

| Metric | # Value   |
| ------ | --------- |
| **EM** | **64.86** |
| **F1** | **68.99** |

```json
{
  "exact": 64.86145034953255,
  "f1": 68.9902640378272,
  "total": 11873,
  "HasAns_exact": 64.03508771929825,
  "HasAns_f1": 72.3045554860189,
  "HasAns_total": 5928,
  "NoAns_exact": 65.68544995794785,
  "NoAns_f1": 65.68544995794785,
  "NoAns_total": 5945,
  "best_exact": 64.86987282068559,
  "best_exact_thresh": 0.0,
  "best_f1": 68.99868650898054,
  "best_f1_thresh": 0.0
}
```

### Model in action 🚀

Fast usage with **pipelines**:

```python
from transformers import pipeline

QnA_pipeline = pipeline('question-answering', model='mrm8488/roberta-base-1B-1-finetuned-squadv2')

QnA_pipeline({
    'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
    'question': 'What has been discovered by scientists from China ?'
})
# Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.7145650685380576, 'start': 0}
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/roberta-base-finetuned-multitask
2020-06-23T20:13:34.000Z
[ "pytorch", "transformers" ]
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
mrm8488
10
transformers
mrm8488/roberta-large-finetuned-wsc
2021-05-20T18:30:59.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "arxiv:1905.06290", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "vocab.json" ]
mrm8488
56
transformers
# RoBERTa (large) fine-tuned on Winograd Schema Challenge (WSC) data

Steps from its original [repo](https://github.com/pytorch/fairseq/blob/master/examples/roberta/wsc/README.md)

The following instructions can be used to finetune RoBERTa on the WSC training data provided by [SuperGLUE](https://super.gluebenchmark.com/).

Note that there is high variance in the results. For our GLUE/SuperGLUE submission we swept over the learning rate (1e-5, 2e-5, 3e-5), batch size (16, 32, 64) and total number of updates (500, 1000, 2000, 3000), as well as the random seed. Out of ~100 runs we chose the best 7 models and ensembled them.

**Approach:** The instructions below use a slightly different loss function than what's described in the original RoBERTa arXiv paper. In particular, [Kocijan et al. (2019)](https://arxiv.org/abs/1905.06290) introduce a margin ranking loss between `(query, candidate)` pairs with tunable hyperparameters alpha and beta. This is supported in our code as well with the `--wsc-alpha` and `--wsc-beta` arguments. However, we achieved slightly better (and more robust) results on the development set by instead using a single cross entropy loss term over the log-probabilities for the query and all mined candidates. **The candidates are mined using spaCy from each input sentence in isolation, so the approach remains strictly pointwise.** This reduces the number of hyperparameters and our best model achieved 92.3% development set accuracy, compared to ~90% accuracy for the margin loss. Later versions of the RoBERTa arXiv paper will describe this updated formulation.

### 1) Download the WSC data from the SuperGLUE website:

```bash
wget https://dl.fbaipublicfiles.com/glue/superglue/data/v2/WSC.zip
unzip WSC.zip

# we also need to copy the RoBERTa dictionary into the same directory
wget -O WSC/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
```

### 2) Finetune over the provided training data:

```bash
TOTAL_NUM_UPDATES=2000  # Total number of training steps.
WARMUP_UPDATES=250      # Linearly increase LR over this many steps.
LR=2e-05                # Peak LR for polynomial LR scheduler.
MAX_SENTENCES=16        # Batch size per GPU.
SEED=1                  # Random seed.
ROBERTA_PATH=/path/to/roberta/model.pt

# we use the --user-dir option to load the task and criterion
# from the examples/roberta/wsc directory:
FAIRSEQ_PATH=/path/to/fairseq
FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc

CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train WSC/ \
    --restore-file $ROBERTA_PATH \
    --reset-optimizer --reset-dataloader --reset-meters \
    --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \
    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
    --valid-subset val \
    --fp16 --ddp-backend no_c10d \
    --user-dir $FAIRSEQ_USER_DIR \
    --task wsc --criterion wsc --wsc-cross-entropy \
    --arch roberta_large --bpe gpt2 --max-positions 512 \
    --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
    --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \
    --lr-scheduler polynomial_decay --lr $LR \
    --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \
    --max-sentences $MAX_SENTENCES \
    --max-update $TOTAL_NUM_UPDATES \
    --log-format simple --log-interval 100 \
    --seed $SEED
```

The above command assumes training on 4 GPUs, but you can achieve the same results on a single GPU by adding `--update-freq=4`.
### 3) Evaluate

```python
from fairseq.models.roberta import RobertaModel
from examples.roberta.wsc import wsc_utils  # also loads WSC task and criterion

roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'WSC/')
roberta.cuda()

nsamples, ncorrect = 0, 0
for sentence, label in wsc_utils.jsonl_iterator('WSC/val.jsonl', eval=True):
    pred = roberta.disambiguate_pronoun(sentence)
    nsamples += 1
    if pred == label:
        ncorrect += 1
print('Accuracy: ' + str(ncorrect / float(nsamples)))
# Accuracy: 0.9230769230769231
```

## RoBERTa training on WinoGrande dataset

We have also provided `winogrande` task and criterion for finetuning on the [WinoGrande](https://mosaic.allenai.org/projects/winogrande)-like datasets where there are always two candidates and one is correct. It's a more efficient implementation for such subcases.

```bash
TOTAL_NUM_UPDATES=23750  # Total number of training steps.
WARMUP_UPDATES=2375      # Linearly increase LR over this many steps.
LR=1e-05                 # Peak LR for polynomial LR scheduler.
MAX_SENTENCES=32         # Batch size per GPU.
SEED=1                   # Random seed.
ROBERTA_PATH=/path/to/roberta/model.pt

# we use the --user-dir option to load the task and criterion
# from the examples/roberta/wsc directory:
FAIRSEQ_PATH=/path/to/fairseq
FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc

cd fairseq
CUDA_VISIBLE_DEVICES=0 fairseq-train winogrande_1.0/ \
    --restore-file $ROBERTA_PATH \
    --reset-optimizer --reset-dataloader --reset-meters \
    --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \
    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
    --valid-subset val \
    --fp16 --ddp-backend no_c10d \
    --user-dir $FAIRSEQ_USER_DIR \
    --task winogrande --criterion winogrande \
    --wsc-margin-alpha 5.0 --wsc-margin-beta 0.4 \
    --arch roberta_large --bpe gpt2 --max-positions 512 \
    --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
    --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \
    --lr-scheduler polynomial_decay --lr $LR \
    --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \
    --max-sentences $MAX_SENTENCES \
    --max-update $TOTAL_NUM_UPDATES \
    --log-format simple --log-interval 100
```

[Original repo](https://github.com/pytorch/fairseq/tree/master/examples/roberta/wsc)
mrm8488/roberta-med-small2roberta-med-small-finetuned-cnn_daily_mail-summarization
2021-04-06T09:22:39.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "en", "dataset:cnn_dailymail", "transformers", "license:apache-2.0", "summarization", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
26
transformers
---
language: en
license: apache-2.0
datasets:
- cnn_dailymail
tags:
- summarization
---

Shared [RoBERTa2RoBERTa (med-small)](https://huggingface.co/nyu-mll/roberta-med-small-1M-1) Summarization with 🤗EncoderDecoder Framework

This model is a warm-started *RoBERTaShared* (med-small) model fine-tuned on the *cnn_dailymail* summarization dataset.

The model achieves a **16.90** ROUGE-2 score on *cnn_dailymail*'s test dataset.
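The card stops at the ROUGE score, so a minimal usage sketch is given below. Loading the checkpoint with the generic `EncoderDecoderModel` class, the input article and the generation settings are illustrative assumptions rather than settings documented by the author:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_name = "mrm8488/roberta-med-small2roberta-med-small-finetuned-cnn_daily_mail-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

# Illustrative article text (not from the card)
article = ("The tower is 324 metres tall, about the same height as an 81-storey building, "
           "and is the tallest structure in Paris. It was the first structure in the world "
           "to surpass the height of the Washington Monument.")

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Generation settings are assumptions; tune beams/length for your use case
summary_ids = model.generate(inputs.input_ids,
                             attention_mask=inputs.attention_mask,
                             num_beams=4,
                             max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```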
mrm8488/roberta-med-small_shared-finetuned-bbc_xsum-summarization
2021-04-05T11:28:41.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "en", "dataset:xsum", "transformers", "license:apache-2.0", "summarization", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
mrm8488
31
transformers
---
language: en
license: apache-2.0
datasets:
- xsum
tags:
- summarization
---

Shared RoBERTa2RoBERTa (med-small) Summarization with 🤗EncoderDecoder Framework

This model is a warm-started *RoBERTaShared* (med-small) model fine-tuned on the *BBC XSum* summarization dataset.