Dataset columns:
- pipeline_tag: stringclasses (48 values)
- library_name: stringclasses (205 values)
- text: stringlengths (0 to 18.3M)
- metadata: stringlengths (2 to 1.07B)
- id: stringlengths (5 to 122)
- last_modified: null
- tags: listlengths (1 to 1.84k)
- sha: null
- created_at: stringlengths (25 to 25)
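Below is a minimal, hedged sketch of how a dataset with this column layout could be loaded and filtered with the `datasets` library; the repository id `user/model-cards-dump` is a placeholder, not the real name of this dataset.

```python
# Hypothetical example: load a dataset with the columns listed above and
# filter on the model-card text. "user/model-cards-dump" is a placeholder id.
from datasets import load_dataset

ds = load_dataset("user/model-cards-dump", split="train")

# Keep only rows whose card text mentions MS MARCO.
msmarco_rows = ds.filter(lambda row: "MS MARCO" in (row["text"] or ""))
print(len(msmarco_rows), "matching rows")
if len(msmarco_rows):
    print("first id:", msmarco_rows[0]["id"])
```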
text-generation
transformers
# inspired by greentext
{"tags": ["conversational"]}
cartyparty/DialoGPT-small-nerdherd
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
casinobike/online_gambling
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
caspillaga/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cassandra/Angel
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-ner-tcp-ca This model is a fine-tuned version of [cassandra-themis/camembert-base-juri](https://huggingface.co/cassandra-themis/camembert-base-juri) on the cassandra-themis/ner-tcp-ca full dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30.0 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
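A minimal usage sketch (not part of the generated card above): running this checkpoint through the Transformers token-classification pipeline. The repo id is taken from this row's `id` field; the label set is whatever the fine-tuned head defines.

```python
# Sketch: NER inference with the Transformers pipeline API.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cassandra-themis/test_tcp_ca",   # repo id from this dataset row
    aggregation_strategy="simple",          # merge sub-word pieces into entity spans
)

text = "Décision rendue le 20 Avril 1999 par le Tribunal de Grande Instance de MARSEILLE."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```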
{"tags": ["generated_from_trainer"], "datasets": ["cassandra-themis/ner-tcp-ca"], "widget": [{"text": "R\u00c9PUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL D'AIX EN PROVENCE\n\n\n\n10e Chambre\n\n\n\nARR\u00caT MIXTE\n\nDU 14 JUIN 2006\n\n\n\nNo/2006\n\n\n\n\n\nR\u00f4le No 99/09967\n\n\n\n\n\nJohn X...\n\nArlette Y... \u00e9pouse X...\n\nPatrick X...\n\n\n\n\n\nC/\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS\n\n\n\n\n\nD\u00e9cision d\u00e9f\u00e9r\u00e9e \u00e0 la Cour :\n\n\n\nD\u00e9cision rendue le 20 Avril 1999 par la Commission d'Indemnisation des Victimes d'Infractions P\u00e9nales pr\u00e8s le Tribunal de Grande Instance de MARSEILLE, enregistr\u00e9e\n\nau r\u00e9pertoire g\u00e9n\u00e9ral sous le no 98/00491.\n\n\n\n\n\nAPPELANTS\n\n\n\nMonsieur John X..., d\u00e9c\u00e9d\u00e9\n\nn\u00e9 le 17 Mars 1973 \u00e0 MARSEILLE (13000), demeurant ... - 13000 MARSEILLE\n\nrepr\u00e9sent\u00e9 par la SCP COHEN - GUEDJ, avou\u00e9s \u00e0 la Cour\n\n\n\nMadame Arlette Y... \u00e9pouse X...\n\nprise es qualit\u00e9 d'h\u00e9riti\u00e8re de John X..., d\u00e9c\u00e9d\u00e9 le 25/11/2001\n\nn\u00e9e le 18 Ao\u00fbt 1951 \u00e0 SAINT JEAN DE COLE (DORDOGNE), ... - 13012 MARSEILLE\n\nrepr\u00e9sent\u00e9e par la SCP COHEN - GUEDJ, avou\u00e9s \u00e0 la Cour,\n\nassist\u00e9e de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\nMonsieur Patrick X...\n\npris en sa qualit\u00e9 d'h\u00e9ritier de John X..., d\u00e9c\u00e9d\u00e9 le 25/11/2001\n\nn\u00e9 le 12 Juin 1951 \u00e0 MARSEILLE (BOUCHES DU RH\u00d4NE), demeurant ... - 13012 MARSEILLE\n\nrepr\u00e9sent\u00e9 par la SCP COHEN - GUEDJ, avou\u00e9s \u00e0 la Cour,\n\nassist\u00e9 de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\n\n\nINTIME\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS article L 422.1 du Code des Assurances, g\u00e9r\u00e9 par le Fonds de Garantie contre les Accidents de Circulation et de Chasse, dont le si\u00e8ge social est sis 64 rue Defrance 94300 VINCENNES, 39 bd Vincent Delpuech - les Bureaux du M\u00e9diterran\u00e9e - 13255 MARSEILLE\n\nrepr\u00e9sent\u00e9 par la SCP GIACOMETTI - DESOMBRE, avou\u00e9s \u00e0 la Cour,\n\nassist\u00e9 de Me Alain TUILLIER, avocat au barreau d'AIX EN PROVENCE\n\n\n\n\n\nCOMPOSITION DE LA COUR\n\n\n\nL'affaire a \u00e9t\u00e9 d\u00e9battue le 12 Avril 2006 en audience publique. 
Conform\u00e9ment \u00e0 l'article 785 du Nouveau Code de Proc\u00e9dure Civile, Mr RAJBAUT, Conseiller a fait un rapport oral de l'affaire \u00e0 l'audience avant les plaidoiries.\n\n\n\nLa Cour \u00e9tait compos\u00e9e de :\n\n\n\nMadame Elisabeth VIEUX, Pr\u00e9sidente\n\nMonsieur Benjamin RAJBAUT, Conseiller\n\nMadame Dominique KLOTZ, Conseiller\n\n\n\n\n\nqui en ont d\u00e9lib\u00e9r\u00e9\n\n\n\nGreffier lors des d\u00e9bats : Madame Genevi\u00e8ve JAUFFRES.\n\n\n\nLes parties ont \u00e9t\u00e9 avis\u00e9es que le prononc\u00e9 public de la d\u00e9cision aura lieu par mise \u00e0 disposition au greffe le 14 Juin 2006..\n\n\n\nMINIST\u00c8RE PUBLIC :\n\nAuquel l'affaire a \u00e9t\u00e9 r\u00e9guli\u00e8rement communiqu\u00e9e.\n\n", "example_title": "Exemple 1"}, {"text": "R\u00c9PUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nPhD / BLL\n\n\n\nNum\u00e9ro / 06\n\n\n\nCOUR D'APPEL DE PAU\n\n2\u00e8me CH-Section 1\n\n\n\nARR\u00caT DU 19 janvier 2006\n\n\n\nDossier : 04 / 03078\n\n\n\nNature affaire :\n\n\n\nAutres demandes relatives \u00e0 un bail d'habitation ou \u00e0 un bail professionnel\n\n\n\nAffaire :\n\n\n\nBerthe X... \u00e9pouse Y...\n\n\n\nC /\n\n\n\nDominique Z...,\n\nCorinne X...\n\n\n\nR\u00c9PUBLIQUE FRAN\u00c7AISE\n\n\n\nAU NOM DU PEUPLE FRAN\u00c7AIS\n\n\n\nA R R \u00ca T\n\n\n\nprononc\u00e9 par Monsieur GRANGER, conseiller,\n\nen vertu de l'article 452 du Nouveau Code de Proc\u00e9dure Civile,\n\n\n\nassist\u00e9 de Monsieur LASBIATES, Greffier,\n\n\n\n\u00e0 l'audience publique du 19 janvier 2006\n\ndate indiqu\u00e9e \u00e0 l'issue des d\u00e9bats.\n\n\n\n* * * * *\n\n\n\nAPRES D\u00c9BATS\n\n\n\n\u00e0 l'audience publique tenue le 24 Novembre 2005, devant :\n\n\n\nMonsieur DARRACQ, magistrat charg\u00e9 du rapport,\n\n\n\nassist\u00e9 de Monsieur LASBIATES, greffier pr\u00e9sent \u00e0 l'appel des causes,\n\n\n\nMonsieur DARRACQ, en application des articles 786 et 910 du Nouveau Code de Proc\u00e9dure Civile et \u00e0 d\u00e9faut d'opposition a tenu l'audience pour entendre les plaidoiries et en a rendu compte \u00e0 la Cour compos\u00e9e de :\n\n\n\nMonsieur PETRIAT, Conseiller faisant fonction de Pr\u00e9sident, par suite de l'emp\u00eachement l\u00e9gitime de tous les titulaires et des magistrats d\u00e9sign\u00e9s par ordonnance et se trouvant le magistrat du si\u00e8ge pr\u00e9sent le plus ancien dans l'ordre de nomination \u00e0 la Cour\n\n\n\nMonsieur GRANGER, Conseiller\n\nMonsieur DARRACQ, Vice-Pr\u00e9sident plac\u00e9, d\u00e9sign\u00e9 par ordonnance du 12 septembre 2005\n\n\n\nqui en ont d\u00e9lib\u00e9r\u00e9 conform\u00e9ment \u00e0 la loi.\n\n\n\ndans l'affaire opposant :\n\n\n\nAPPELANTE :\n\n\n\nMadame Berthe X... \u00e9pouse Y...\n\nn\u00e9e le 13 Juin 1942 \u00e0 ARCANGUES (64)\n\nde nationalit\u00e9 fran\u00e7aise\n\n...\n\n...\n\n12500 ESPALION\n\n\n\nrepr\u00e9sent\u00e9e par la S. C. P. LONGIN C. ET P., avou\u00e9s \u00e0 la Cour\n\nassist\u00e9e de Ma\u00eetre BLAZY-ANDRIEU, avocat au barreau de BAYONNE\n\n\n\nINTIMES :\n\n\n\nMonsieur Dominique Camille Z...\n\nn\u00e9 le 13 juin 1954 \u00e0 Chatou (78)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\nMadame Corinne X...\n\nn\u00e9e le 3 juillet 1969 \u00e0 Bidart (64)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\n(b\u00e9n\u00e9ficient d'une aide juridictionnelle Totale num\u00e9ro 2004 / 006320 du 24 / 02 / 2005 accord\u00e9e par le bureau d'aide juridictionnelle de PAU)\n\n\n\nrepr\u00e9sent\u00e9s par la S. C. P. F. PIAULT / M. 
LACRAMPE-CARRAZE, avou\u00e9s \u00e0 la Cour\n\nassist\u00e9s de Ma\u00eetre FOURGEAU, avocat au barreau de BAYONNE\n\n\n\nsur appel de la d\u00e9cision\n\nen date du 24 AOUT 2004\n\nrendue par le TRIBUNAL D'INSTANCE DE BIARRITZ", "example_title": "Exemple 2"}, {"text": "R\u00c9PUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL DE DOUAI\n\n\n\nTROISI\u00c8ME CHAMBRE\n\n\n\nARR\u00caT DU 26 / 01 / 2006\n\n\n\nBAUX RURAUX\n\n\n\nNo RG : 05 / 04854 jonction avec dossier RG No 05 / 04858\n\n\n\nTribunal paritaire des baux ruraux d'AVESNES SUR HELPE\n\ndu 27 Juillet 2005 jugements no 99 / 000010 et 04 / 000006\n\n\n\nAPPELANTE\n\nMadame Marie-No\u00eblle X... \u00e9pouse Y...\n\nDemeurant\n\n...\n\n59138 PONT SUR SAMBRE\n\n\n\nrepr\u00e9sent\u00e9e par Me STERLILN de la SCP JP STERLIN-C STERLIN, avocats au barreau d'AMIENS\n\n\n\nINTIM\u00c9S\n\nMonsieur Michel Z...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nrepr\u00e9sent\u00e9 par Me VILLESECHE de la SCP ROFFIAEN-LE FUR-VILLESECHE, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMonsieur Avit X...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nrepr\u00e9sent\u00e9 par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMadame Marie-Christine X... \u00e9pouse A...\n\nDemeurant\n\n...\n\n59750 FEIGNIES\n\n\n\nrepr\u00e9sent\u00e9e par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Claire X... \u00e9pouse B...\n\nDemeurant\n\n...\n\n59550 PRISCHES\n\n\n\nrepr\u00e9sent\u00e9e par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Antoinette X... \u00e9pouse C...\n\nDemeurant\n\n...\n\n59440 ST AUBIN\n\n\n\nrepr\u00e9sent\u00e9e par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nCOMPOSITION DE LA COUR LORS DES D\u00c9BATS ET DU D\u00c9LIB\u00c9R\u00c9\n\nMadame MERFELD, Pr\u00e9sident de chambre\n\nMadame CONVAIN, Conseiller\n\nMadame PAOLI, Conseiller\n\n---------------------\n\nGREFFIER LORS DES D\u00c9BATS : Madame GAMEZ\n\n", "example_title": "Exemple 3"}], "model-index": [{"name": "camembert-ner-tcp-ca", "results": []}]}
cassandra-themis/test_tcp_ca
null
[ "transformers", "pytorch", "camembert", "token-classification", "generated_from_trainer", "dataset:cassandra-themis/ner-tcp-ca", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
--- language: - om - am - rw - rn - ha - ig - pcm - so - sw - ti - yo - multilingual --- # afriberta_base ## Model description AfriBERTa base is a pretrained multilingual language model with around 111 million parameters. The model has 8 layers, 6 attention heads, 768 hidden units and 3072 feed forward size. The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá. The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on. ## Intended uses & limitations #### How to use You can use this model with Transformers for any downstream task. For example, assuming we want to finetune this model on a token classification task, we do the following: ```python >>> from transformers import AutoTokenizer, AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_base") >>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_base") # we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now >>> tokenizer.model_max_length = 512 ``` #### Limitations and bias - This model is possibly limited by its training data, which were obtained mostly from news articles from a specific span of time. Thus, it may not generalize well. - This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations. ## Training data The model was trained on an aggregation of datasets from the BBC news website and Common Crawl. ## Training procedure For information on training procedures, please refer to the AfriBERTa [paper](https://aclanthology.org/2021.mrl-1.11) or [repository](https://github.com/keleog/afriberta). ### BibTeX entry and citation info ``` @inproceedings{ogueji-etal-2021-small, title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", author = "Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy", booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrl-1.11", pages = "116--126", } ```
{}
castorini/afriberta_base
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# afriberta_large ## Model description AfriBERTa large is a pretrained multilingual language model with around 126 million parameters. The model has 10 layers, 6 attention heads, 768 hidden units and 3072 feed forward size. The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá. The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on. ## Intended uses & limitations #### How to use You can use this model with Transformers for any downstream task. For example, assuming we want to finetune this model on a token classification task, we do the following: ```python >>> from transformers import AutoTokenizer, AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_large") >>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_large") # we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now >>> tokenizer.model_max_length = 512 ``` #### Limitations and bias - This model is possibly limited by its training data, which were obtained mostly from news articles from a specific span of time. Thus, it may not generalize well. - This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations. ## Training data The model was trained on an aggregation of datasets from the BBC news website and Common Crawl. ## Training procedure For information on training procedures, please refer to the AfriBERTa [paper](https://aclanthology.org/2021.mrl-1.11) or [repository](https://github.com/keleog/afriberta). ### BibTeX entry and citation info ``` @inproceedings{ogueji-etal-2021-small, title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", author = "Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy", booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrl-1.11", pages = "116--126", } ```
{"language": ["om", "am", "rw", "rn", "ha", "ig", "so", "sw", "ti", "yo", "pcm", "multilingual"], "license": "mit", "datasets": ["castorini/afriberta-corpus"]}
castorini/afriberta_large
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "om", "am", "rw", "rn", "ha", "ig", "so", "sw", "ti", "yo", "pcm", "multilingual", "dataset:castorini/afriberta-corpus", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
--- language: - om - am - rw - rn - ha - ig - pcm - so - sw - ti - yo - multilingual --- # afriberta_small ## Model description AfriBERTa small is a pretrained multilingual language model with around 97 million parameters. The model has 4 layers, 6 attention heads, 768 hidden units and 3072 feed forward size. The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá. The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on. ## Intended uses & limitations #### How to use You can use this model with Transformers for any downstream task. For example, assuming we want to finetune this model on a token classification task, we do the following: ```python >>> from transformers import AutoTokenizer, AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_small") >>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_small") # we have to manually set the model max length because it is an imported trained sentencepiece model, which huggingface does not properly support right now >>> tokenizer.model_max_length = 512 ``` #### Limitations and bias - This model is possibly limited by its training data, which were obtained mostly from news articles from a specific span of time. Thus, it may not generalize well. - This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations. ## Training data The model was trained on an aggregation of datasets from the BBC news website and Common Crawl. ## Training procedure For information on training procedures, please refer to the AfriBERTa [paper](https://aclanthology.org/2021.mrl-1.11) or [repository](https://github.com/keleog/afriberta). ### BibTeX entry and citation info ``` @inproceedings{ogueji-etal-2021-small, title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", author = "Ogueji, Kelechi and Zhu, Yuxin and Lin, Jimmy", booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mrl-1.11", pages = "116--126", } ```
{}
castorini/afriberta_small
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini: > Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf) For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
{}
castorini/ance-dpr-context-multi
null
[ "transformers", "pytorch", "dpr", "arxiv:2007.00808", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini: > Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf) For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
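A hedged sketch of encoding a question with this converted checkpoint directly through Transformers' DPR classes (the card points to Pyserini for the reference usage); it assumes the checkpoint loads as a `DPRQuestionEncoder`.

```python
# Sketch: embed a question with the converted ANCE-DPR question encoder.
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

name = "castorini/ance-dpr-question-multi"
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(name)
encoder = DPRQuestionEncoder.from_pretrained(name)

inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
with torch.no_grad():
    embedding = encoder(**inputs).pooler_output  # shape: (1, hidden_size)
print(embedding.shape)
```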
{}
castorini/ance-dpr-question-multi
null
[ "transformers", "pytorch", "dpr", "feature-extraction", "arxiv:2007.00808", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini: > Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf) For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
{}
castorini/ance-msmarco-doc-firstp
null
[ "transformers", "pytorch", "roberta", "arxiv:2007.00808", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini: > Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf) For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
{}
castorini/ance-msmarco-doc-maxp
null
[ "transformers", "pytorch", "roberta", "arxiv:2007.00808", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# Model Card for ance-msmarco-passage Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. # Model Details ## Model Description Pyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture - **Developed by:** Castorini - **Shared by [Optional]:** Hugging Face - **Model type:** Information retrieval - **Language(s) (NLP):** en - **License:** More information needed - **Related Models:** More information needed - **Parent Model:** RoBERTa - **Resources for more information:** - [GitHub Repo](https://github.com/castorini/pyserini) - [Associated Paper](https://dl.acm.org/doi/pdf/10.1145/3404835.3463238) # Uses ## Direct Use More information needed ## Downstream Use [Optional] More information needed ## Out-of-Scope Use More information needed # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data More information needed ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data The model creators note in the [associated Paper](https://dl.acm.org/doi/pdf/10.1145/3404835.3463238) that: > bag-of-words ranking with BM25 (the default ranking model) on the MS MARCO passage corpus (comprising 8.8M passages) ### Factors More information needed ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software For bag-of-words sparse retrieval, we have built in Anserini (written in Java) custom parsers and ingestion pipelines for common document formats used in IR research, # Citation **BibTeX:** ```bibtex @INPROCEEDINGS{Lin_etal_SIGIR2021_Pyserini, author = "Jimmy Lin and Xueguang Ma and Sheng-Chieh Lin and Jheng-Hong Yang and Ronak Pradeep and Rodrigo Nogueira", title = "{Pyserini}: A {Python} Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations", booktitle = "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)", year = 2021, pages = "2356--2362", } ``` # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Castorini in collaboration with Ezi Ozoani and the Hugging Face team. # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AnceEncoder tokenizer = AutoTokenizer.from_pretrained("castorini/ance-msmarco-passage") model = AnceEncoder.from_pretrained("castorini/ance-msmarco-passage") ``` </details>
{"language": ["en"]}
castorini/ance-msmarco-passage
null
[ "transformers", "pytorch", "roberta", "en", "arxiv:1910.09700", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
## About Here we share a pretrained BERT model that is aware of math tokens. The math tokens are treated specially and tokenized using [pya0](https://github.com/approach0/pya0), which adds only a limited number of new tokens for LaTeX markup (the total vocabulary is just 31,061). This model is trained on 4 x 2 Tesla V100 GPUs with a total batch size of 64, using Math StackExchange data (2.7 million sentence pairs) for 7 epochs. ### Usage Download and try it out: ```sh pip install pya0==0.3.2 wget https://vault.cs.uwaterloo.ca/s/gqstFZmWHCLGXe3/download -O ckpt.tar.gz mkdir -p ckpt tar xzf ckpt.tar.gz -C ckpt --strip-components=1 python test.py --test_file test.txt ``` ### Test file format Modify the test examples in `test.txt` to play with it. The test file is tab-separated; the first column is additional positions you want to mask for the right-side sentence (useful for masking tokens in math markup). A zero means no additional mask positions. ### Example output ![](https://i.imgur.com/xpl87KO.png) ### Upload to huggingface This repo is hosted on [Github](https://github.com/approach0/azbert), and only mirrored at [huggingface](https://huggingface.co/castorini/azbert-base). To upload to huggingface, use the `upload2hgf.sh` script. Before running this script, be sure to check that: * checkpoints for the model and tokenizer are created under the `./ckpt` folder * the model contains all the files needed: `config.json` and `pytorch_model.bin` * the tokenizer contains all the files needed: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt` and `tokenizer.json` * there is no `tokenizer_file` field in `tokenizer_config.json` (sometimes it is located locally at `~/.cache`) * `git-lfs` is installed * a git remote named `hgf` references `https://huggingface.co/castorini/azbert-base`
{"language": "en", "license": "mit", "tags": ["azbert", "pretraining", "fill-mask"], "widget": [{"text": "$f$ $($ $x$ [MASK] $y$ $)$", "example_title": "mathy"}, {"text": "$x$ [MASK] $x$ $equal$ $2$ $x$", "example_title": "mathy"}, {"text": "Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$", "example_title": "mathy"}, {"text": "Proof by induction that $n$ [MASK] $gt$ $3$ $n$ for $n$ $gt$ $6$", "example_title": "mathy"}, {"text": "The goal of life is [MASK].", "example_title": "philosophical"}]}
castorini/azbert-base
null
[ "transformers", "pytorch", "tensorboard", "bert", "pretraining", "azbert", "fill-mask", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model is converted from the original BPR [repo](https://github.com/studio-ousia/bpr) and fitted into Pyserini: > Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
{}
castorini/bpr-nq-ctx-encoder
null
[ "transformers", "pytorch", "dpr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is converted from the original BPR [repo](https://github.com/studio-ousia/bpr) and fitted into Pyserini: > Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv:2106.00882.
{}
castorini/bpr-nq-question-encoder
null
[ "transformers", "pytorch", "dpr", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini: ``` @misc{izacard2020distilling, title={Distilling Knowledge from Reader to Retriever for Question Answering}, author={Gautier Izacard and Edouard Grave}, year={2020}, eprint={2012.04584}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{}
castorini/dkrr-dpr-nq-retriever
null
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2012.04584", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini: ``` @misc{izacard2020distilling, title={Distilling Knowledge from Reader to Retriever for Question Answering}, author={Gautier Izacard and Edouard Grave}, year={2020}, eprint={2012.04584}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{}
castorini/dkrr-dpr-tqa-retriever
null
[ "transformers", "pytorch", "bert", "arxiv:2012.04584", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
For more information, check [doc2query.ai](http://doc2query.ai)
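A hedged sketch of document expansion with this checkpoint via plain Transformers (doc2query.ai and Pyserini remain the reference); the sampling settings are illustrative, not the authors' configuration.

```python
# Sketch: sample synthetic queries for a passage with doc2query-T5.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/doc2query-t5-base-msmarco"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

passage = (
    "The Manhattan Project was a research and development undertaking during "
    "World War II that produced the first nuclear weapons."
)
input_ids = tokenizer(passage, return_tensors="pt").input_ids
with torch.no_grad():
    outputs = model.generate(
        input_ids, max_length=64, do_sample=True, top_k=10, num_return_sequences=3
    )
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```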
{}
castorini/doc2query-t5-base-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
For more information, check [doc2query.ai](http://doc2query.ai)
{}
castorini/doc2query-t5-large-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is a T5-3B reranker pre-finetuned on the MS MARCO passage dataset for 10K steps (or 1 epoch) on the pairwise task and then finetuned on MedMARCO (from [Sledge-Z paper](https://www.aclweb.org/anthology/2020.emnlp-main.341.pdf)) for 1K steps on the pairwise task. For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)! Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/abs/2101.05667)
{}
castorini/duot5-3b-med-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "arxiv:2101.05667", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is a T5-3B reranker, initialized with our pointwise ranker, [castorini/monot5-3b-msmarco](https://huggingface.co/castorini/monot5-3b-msmarco), and finetuned on the MS MARCO passage dataset for 50K steps (or 5 epochs) on the pairwise reranking task. For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)! Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/abs/2101.05667)
{}
castorini/duot5-3b-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "arxiv:2101.05667", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
castorini/duot5-base-msmarco-10k
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This model is a T5-base pairwise reranker fine-tuned on the MS MARCO passage dataset for 50k steps (or 5 epochs). For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai). Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/pdf/2101.05667.pdf)
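A hedged sketch of pairwise scoring with this checkpoint through plain Transformers (pygaggle is the reference implementation); the prompt template and the "true"/"false" readout follow the Expando-Mono-Duo paper and are assumptions here.

```python
# Sketch: duoT5 pairwise comparison of two candidate passages for one query.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/duot5-base-msmarco"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

query = "how long is the great wall of china"
doc0 = "The Great Wall of China is over 21,000 km long in total."
doc1 = "The Grand Canyon is located in the state of Arizona."
prompt = f"Query: {query} Document0: {doc0} Document1: {doc1} Relevant:"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id)
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]

true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]
p_doc0_over_doc1 = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"P(doc0 more relevant than doc1) = {p_doc0_over_doc1:.3f}")
```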
{}
castorini/duot5-base-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:2101.05667", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
castorini/mdpr-tied-pft-msmarco
null
[ "transformers", "pytorch", "rust", "safetensors", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
castorini/mdpr-passage-nq
null
[ "transformers", "pytorch", "dpr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
castorini/mdpr-question-nq
null
[ "transformers", "pytorch", "dpr", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# Model Description This checkpoint is a direct conversion of [BERT_Large_trained_on_MSMARCO.zip](https://drive.google.com/open?id=1crlASTMlsihALlkabAQP6JTYIZwC1Wm8) from the original [repo](https://github.com/nyu-dl/dl4marco-bert/). The corresponding model class is BertForSequenceClassification, and it is intended for MS MARCO passage ranking. Please refer to the original repo for more details on its training settings (hyperparameters, hardware, and data).
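A hedged sketch of cross-encoder scoring with this checkpoint, loaded as the `BertForSequenceClassification` class named in the card; treating logit index 1 as the "relevant" class is an assumption.

```python
# Sketch: score a (query, passage) pair with the monoBERT cross-encoder.
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

name = "castorini/monobert-large-msmarco-finetune-only"
tokenizer = AutoTokenizer.from_pretrained(name)
model = BertForSequenceClassification.from_pretrained(name)

query = "what causes tides"
passage = (
    "Tides are caused by the gravitational pull of the moon and the sun "
    "on Earth's oceans."
)
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits                      # shape: (1, 2)
relevance = torch.softmax(logits, dim=-1)[0, 1].item()   # index 1 assumed "relevant"
print(f"relevance score: {relevance:.3f}")
```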
{}
castorini/monobert-large-msmarco-finetune-only
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
castorini/monobert-large-msmarco
null
[ "transformers", "pytorch", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 10K steps (or 1 epoch) and then fine-tuned again on MedMARCO (from [Sledge-Z paper](https://www.aclweb.org/anthology/2020.emnlp-main.341.pdf)) for 1K steps. For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)! Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-3b-med-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai). Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-3b-msmarco
null
[ "transformers", "pytorch", "t5", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch) and then fine-tuned again on MedMARCO (from the [Sledge-Z paper](https://www.aclweb.org/anthology/2020.emnlp-main.341.pdf)) for 1k steps. For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai). Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-base-med-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch). This model usually has a better zero-shot performance than `monot5-base-msmarco`, i.e., it performs better on datasets different from MS MARCO. For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
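A hedged sketch of pointwise scoring with this checkpoint through plain Transformers (the pygaggle links above are the reference implementation); the "Query: ... Document: ... Relevant:" template and the "true"/"false" readout follow the monoT5 paper and are assumptions here.

```python
# Sketch: monoT5 pointwise relevance score for one (query, passage) pair.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/monot5-base-msmarco-10k"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

query = "how long is the great wall of china"
passage = "The Great Wall of China is over 21,000 km long in total."
prompt = f"Query: {query} Document: {passage} Relevant:"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id)
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]

true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]
score = torch.log_softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"log P(relevant) = {score:.3f}")
```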
{}
castorini/monot5-base-msmarco-10k
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For better zero-shot performance (i.e., inference on other datasets), we recommend using `castorini/monot5-base-msmarco-10k`. For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-base-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch). This model usually has a better zero-shot performance than `monot5-large-msmarco`, i.e., it performs better on datasets different from MS MARCO. For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-large-msmarco-10k
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
{}
castorini/monot5-large-msmarco
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
This model is trained for conversational question rewriting. Usage: Source text format: ${HISTORY} ||| ${CURRENT_QUESTION} example from [CANARD](https://sites.google.com/view/qanta/projects/canard): Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband? Target text: When did Zappa and the Mothers of Invention disband? You can find our guide to reproduce the training in this [repo](https://github.com/castorini/chatty-goose/blob/c7d0cd8c45354b09b5fb930ab0b5af8be2e5772b/docs/t5_finetuning.md).
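A hedged sketch of rewriting the CANARD example above with plain Transformers, joining the history turns and the current question with " ||| " exactly as the source format describes.

```python
# Sketch: conversational question rewriting with t5-base-canard.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/t5-base-canard"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

history = [
    "Frank Zappa",
    "Disbandment",
    "What group disbanded",
    "Zappa and the Mothers of Invention",
]
current_question = "When did they disband?"
source = " ||| ".join(history + [current_question])

input_ids = tokenizer(source, return_tensors="pt").input_ids
with torch.no_grad():
    output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Expected (per the card): "When did Zappa and the Mothers of Invention disband?"
```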
{}
castorini/t5-base-canard
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This model is to reproduce the TCT-ColBERT dense retrieval described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [Distilling Dense Representations for Ranking using Tightly-Coupled Teachers.](https://arxiv.org/abs/2010.11386) arXiv:2010.11386, October 2020. For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert.md)
{}
castorini/tct_colbert-msmarco
null
[ "transformers", "pytorch", "arxiv:2010.11386", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is to reproduce a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
{}
castorini/tct_colbert-v2-hn-msmarco
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is to reproduce a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. Specifically, this checkpoint is finetuned for MS MARCO-V2 passage ranking, and we use it as our "trained" model for the TREC DL 2021 submissions. The initial checkpoint is a previous one, [tct_colbert-v2-hnp-msmarco](https://huggingface.co/castorini/tct_colbert-v2-hnp-msmarco), trained on [MS MARCO](https://github.com/microsoft/MSMARCO-Passage-Ranking). For fine-tuning, we construct our training data for MS MARCO-V2 passage ranking using this [script](https://github.com/castorini/pyserini/blob/master/scripts/msmarco_v2/generate_train_triplet.py).
{}
castorini/tct_colbert-v2-hnp-msmarco-r2
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is to reproduce a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
{}
castorini/tct_colbert-v2-hnp-msmarco
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is to reproduce Contextualized Query Embeddings for Conversational Search described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [Contextualized Query Embeddings for Conversational Search.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_EMNLP2021.pdf) EMNLP, Nov 2021. This model is fine-tuned only on the query encoder, with a frozen passage encoder. The starting point is [tct_colbert-msmarco](https://huggingface.co/castorini/tct_colbert-msmarco/tree/main). Detailed usage of the model will be available soon in [Chatty Goose](https://github.com/castorini/chatty-goose). You can also check the fine-tuning and inference using TensorFlow in our [CQE repo](https://github.com/castorini/CQE).
{}
castorini/tct_colbert-v2-msmarco-cqe
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
This model is to reproduce a variant of the TCT-ColBERT-V2 dense retrieval models described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_. You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
{}
castorini/tct_colbert-v2-msmarco
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
castorini/unicoil-msmarco-passage
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
castorini/unicoil-noexp-msmarco-passage
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
catalina-melo/distilBERT-severity-rating
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
catalina-melo/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
cataremix15/distilbert-tiln-proj
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
catharina/arcane
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cathyccc/112
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cathyhou/roberta-base-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
catlovehotpot/bert-base-uncased-finetuned-swag
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
catluvr/v
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
potaycat/vinanews-gpt2-kinda
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
An NER model to detect company and person names from news articles.
{}
cb-insights-team/news_ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cbakos/bert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
cbrew475/mpnet-metric
null
[ "transformers", "pytorch", "mpnet", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cc4/bert-tok
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cc4/my-new-shiny-tokenizer-2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cc4/my-new-shiny-tokenizer-3
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cc4/my-new-shiny-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, so you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) * [Training global tokens](#training-global-tokens) This model is adapted from [LEGAL-BERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences, faster and more efficiently than Longformer or BigBird (from Transformers), and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Encoder-decoder is supported, but I didn't test it extensively.\ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, so you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") ``` ## Parameters You can change various parameters like: * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask the first token since it is redundant with the first global token) * see the config.json file Default parameters work well in practice. If you are short on memory, reduce the block sizes, increase the sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0, sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mechanism per head * Each head will use different tokens strided by sparsity_factor * Not recommended if sparsity_factor > num_heads * `sparsity_type="block_stride"`, use a striding mechanism per head * Each head will use blocks of tokens strided by sparsity_factor * Not recommended if sparsity_factor > num_heads ## Tasks Fill mask example: ```python: from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."] pipeline = FillMaskPipeline(model, tokenizer) output = pipeline(SENTENCES, top_k=1) output = [o[0]["sequence"] for o in output] > ['Paris is the capital of France.', 'The goal of life is happiness.'] ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") SENTENCE = "This is a test for sequence classification. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... 
# Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Training global tokens To train global tokens and the classification head only: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-base-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token num_global_tokens=16 ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-base-uncased-4096") for name, param in model.named_parameters(): if "global_embeddings" not in name: param.requires_grad = False else: param.requires_grad = True ``` **LEGAL-BERT** ``` @inproceedings{chalkidis-etal-2020-legal, title = "{LEGAL}-{BERT}: The Muppets straight out of Law School", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Aletras, Nikolaos and Androutsopoulos, Ion", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", doi = "10.18653/v1/2020.findings-emnlp.261", pages = "2898--2904" } ```
{"language": "en", "tags": ["long context", "legal"], "pipeline_tag": "fill-mask"}
ccdv/lsg-legal-base-uncased-4096
null
[ "transformers", "pytorch", "bert", "pretraining", "long context", "legal", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, so you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) * [Training global tokens](#training-global-tokens) This model is a small version of the [LEGAL-BERT](https://huggingface.co/nlpaueb/legal-bert-small-uncased) model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences, faster and more efficiently than Longformer or BigBird (from Transformers), and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Encoder-decoder is supported, but I didn't test it extensively.\ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, so you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") ``` ## Parameters You can change various parameters like: * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask the first token since it is redundant with the first global token) * see the config.json file Default parameters work well in practice. If you are short on memory, reduce the block sizes, increase the sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0, sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * `sparsity_type="block_stride"`, use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: ```python: from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."] pipeline = FillMaskPipeline(model, tokenizer) output = pipeline(SENTENCES, top_k=1) output = [o[0]["sequence"] for o in output] > ['Paris is the capital of France.', 'The goal of life is happiness.'] ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") SENTENCE = "This is a test for sequence classification. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... 
# Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Training global tokens To train global tokens and the classification head only: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token num_global_tokens=16 ) tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096") for name, param in model.named_parameters(): if "global_embeddings" not in name: param.requires_grad = False else: param.required_grad = True ``` **LEGAL-BERT** ``` @inproceedings{chalkidis-etal-2020-legal, title = "{LEGAL}-{BERT}: The Muppets straight out of Law School", author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Aletras, Nikolaos and Androutsopoulos, Ion", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", doi = "10.18653/v1/2020.findings-emnlp.261", pages = "2898--2904" } ```
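As a small addition to the snippets above, the sketch below shows one way to satisfy the block-size constraint explicitly at tokenization time via the tokenizer's `pad_to_multiple_of` argument. The 4096 maximum length and the block size of 128 are the defaults described above; the long legal-sounding input is made up.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("ccdv/legal-lsg-small-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/legal-lsg-small-uncased-4096")

# Made-up long legal document
LONG_DOC = "The parties hereby agree to the following terms and conditions. " * 300

token_ids = tokenizer(
    LONG_DOC,
    return_tensors="pt",
    truncation=True,          # cut down to the 4096 token context window
    max_length=4096,
    padding=True,
    pad_to_multiple_of=128,   # default block size, see config.json
)
output = model(**token_ids)
print(output.logits.shape)    # (1, sequence_length, vocab_size)
```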
{"language": "en", "tags": ["long context", "legal"], "pipeline_tag": "fill-mask"}
ccdv/lsg-legal-small-uncased-4096
null
[ "transformers", "pytorch", "bert", "pretraining", "long context", "legal", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) This model is adapted from [BART-base](https://huggingface.co/facebook/bart-base) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096") ``` ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0 sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. 
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * `sparsity_type="block_stride"`, use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: ```python: from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", padding="max_length", # Optional but recommended truncation=True # Optional but recommended ) output = model(**token_ids) ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... # Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` **BART** ``` @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
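Since the checkpoint above is converted from BART-base without additional pretraining, it still needs task-specific fine-tuning before summaries become useful; the sketch below only illustrates the generation call itself, on a made-up input and with arbitrary decoding settings.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")

ARTICLE = "This is a long article that should be summarized. " * 400

inputs = tokenizer(ARTICLE, return_tensors="pt", truncation=True, max_length=4096)

# Plain beam search; tune num_beams / max_length for your task
summary_ids = model.generate(**inputs, num_beams=4, max_length=256, early_stopping=True)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```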
{"language": ["en"], "tags": ["summarization", "bart", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-bart-base-4096
null
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "long context", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "arxiv:1910.13461", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) This model is adapted from [BART-large](https://huggingface.co/facebook/bart-large) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/lsg-bart-large-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-large-4096") ``` ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/lsg-bart-large-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0 sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. 
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * `sparsity_type="block_stride"`, use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: ```python: from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-large-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-large-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... # Optional truncation=True ) output = model(**token_ids) ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-large-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-large-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", padding="max_length", # Optional but recommended truncation=True # Optional but recommended ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` **BART** ``` @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": ["en"], "tags": ["summarization", "bart", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-bart-large-4096
null
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "long context", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "arxiv:1910.13461", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) This model is adapted from [BARThez](https://huggingface.co/moussaKam/barthez) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/lsg-barthez-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-barthez-4096") ``` ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/lsg-barthez-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0 sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. 
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * `sparsity_type="block_stride"`, use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: ```python: from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-barthez-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-barthez-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", padding="max_length", # Optional but recommended truncation=True # Optional but recommended ) output = model(**token_ids) ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-barthez-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-barthez-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... # Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Conversion script To convert a BERT, RoBERTa or BART checkpoint to LSG, see this [repo](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). **BARThez** ``` @article{eddine2020barthez, title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model}, author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis}, journal={arXiv preprint arXiv:2010.12321}, year={2020} } ```
{"language": ["fr"], "tags": ["summarization", "bart", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-barthez-4096
null
[ "transformers", "pytorch", "mbart", "text2text-generation", "summarization", "bart", "long context", "fill-mask", "custom_code", "fr", "arxiv:2210.15497", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) * [Training global tokens](#training-global-tokens) This model is adapted from [CamemBERT-base](https://huggingface.co/camembert-base) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). Support encoder-decoder but I didnt test it extensively.\ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096") ``` ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0 sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. 
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * `sparsity_type="block_stride"`, use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: ```python: from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096") SENTENCES = "Paris est la <mask> de la France." pipeline = FillMaskPipeline(model, tokenizer) output = pipeline(SENTENCES) > 'Paris est la capitale de la France.' ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096") SENTENCE = "This is a test for sequence classification. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... # Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Training global tokens To train global tokens and the classification head only: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token num_global_tokens=16 ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096") for name, param in model.named_parameters(): if "global_embeddings" not in name: param.requires_grad = False else: param.required_grad = True ``` **CamemBERT** ``` @inproceedings{Martin_2020, doi = {10.18653/v1/2020.acl-main.645}, url = {https://doi.org/10.18653%2Fv1%2F2020.acl-main.645}, year = 2020, publisher = {Association for Computational Linguistics}, author = {Louis Martin and Benjamin Muller and Pedro Javier Ortiz Su{\'{a}}rez and Yoann Dupont and Laurent Romary and {\'{E}}ric de la Clergeri and Djam{\'{e}} Seddah and Beno{\^{\i}}t Sagot}, title = {{CamemBERT}: a Tasty French Language Model}, booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics} } ```
{"language": "fr", "tags": ["camembert", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-camembert-base-4096
null
[ "transformers", "pytorch", "camembert", "fill-mask", "long context", "custom_code", "fr", "arxiv:2210.15497", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) * [Training global tokens](#training-global-tokens) This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ The model is trained starting from a RoBERTa-base checkpoint on 16Gb of data (Wikipedia, Bookcorpus etc...) using the same number of parameters/layers and the same tokenizer. Support encoder-decoder and causal masking but I didnt test it extensively.\ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096") ``` ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * mask_first_token (mask first token since it is redundant with the first global token) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0 sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. 
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * `sparsity_type="block_stride"`, use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Fill mask example: ```python: from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096") SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."] pipeline = FillMaskPipeline(model, tokenizer) output = pipeline(SENTENCES, top_k=1) output = [o[0]["sequence"] for o in output] > ['Paris is the capital of France.', 'The goal of life is happiness.'] ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096") SENTENCE = "This is a test for sequence classification. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... # Optional truncation=True ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` ## Training global tokens To train global tokens and the classification head only: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True, pool_with_global=True, # pool with a global token instead of first token num_global_tokens=16 ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096") for name, param in model.named_parameters(): if "global_embeddings" not in name: param.requires_grad = False else: param.required_grad = True ```
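A quick way to check that the freezing logic above did what was intended is to count trainable parameters. The sketch below keeps the global embeddings and the classification head trainable, uses `requires_grad` (the attribute PyTorch actually reads), and assumes the head is named `classifier` as in the standard RoBERTa sequence-classification head.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096",
    trust_remote_code=True,
    pool_with_global=True,
    num_global_tokens=16
)

# Freeze everything except the global embeddings and the classification head
for name, param in model.named_parameters():
    param.requires_grad = ("global_embeddings" in name) or name.startswith("classifier")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")
```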
{"language": "en", "tags": ["long context"]}
ccdv/lsg-base-4096
null
[ "transformers", "pytorch", "roberta", "fill-mask", "long context", "custom_code", "en", "arxiv:2210.15497", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# LSG model **Transformers >= 4.36.1**\ **This model relies on a custom modeling file, you need to add trust_remote_code=True**\ **See [\#13467](https://github.com/huggingface/transformers/pull/13467)** LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \ Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg). * [Usage](#usage) * [Parameters](#parameters) * [Sparse selection type](#sparse-selection-type) * [Tasks](#tasks) This model is adapted from [Pegasus-large](https://huggingface.co/google/pegasus-large) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer. This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG). The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \ Implemented in PyTorch. ![attn](attn.png) ## Usage The model relies on a custom modeling file, you need to add trust_remote_code=True to use it. ```python: from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096") ``` ## Parameters You can change various parameters like : * the number of global tokens (num_global_tokens=1) * local block size (block_size=128) * sparse block size (sparse_block_size=128) * sparsity factor (sparsity_factor=2) * see config.json file Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix. ```python: from transformers import AutoModel model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True, num_global_tokens=16, block_size=64, sparse_block_size=64, attention_probs_dropout_prob=0.0 sparsity_factor=4, sparsity_type="none", mask_first_token=True ) ``` ## Sparse selection type There are 6 different sparse selection patterns. The best type is task dependent. \ If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \ Note that for sequences with length < 2*block_size, the type has no effect. 
* `sparsity_type="bos_pooling"` (new) * weighted average pooling using the BOS token * Works best in general, especially with a rather large sparsity_factor (8, 16, 32) * Additional parameters: * None * `sparsity_type="norm"`, select highest norm tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="pooling"`, use average pooling to merge tokens * Works best for a small sparsity_factor (2 to 4) * Additional parameters: * None * `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens * Works best for a large sparsity_factor (4+) * LSH relies on random projections, thus inference may differ slightly with different seeds * Additional parameters: * lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids * `sparsity_type="stride"`, use a striding mecanism per head * Each head will use different tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads * `sparsity_type="block_stride"`, use a striding mecanism per head * Each head will use block of tokens strided by sparsify_factor * Not recommended if sparsify_factor > num_heads ## Tasks Seq2Seq example for summarization: ```python: from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", #pad_to_multiple_of=... # Optional truncation=True ) output = model(**token_ids) ``` Classification example: ```python: from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True, pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder ) tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096") SENTENCE = "This is a test sequence to test the model. " * 300 token_ids = tokenizer( SENTENCE, return_tensors="pt", padding="max_length", # Optional but recommended truncation=True # Optional but recommended ) output = model(**token_ids) > SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) ``` **Pegasus** ``` @misc{zhang2019pegasus, title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization}, author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu}, year={2019}, eprint={1912.08777}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": ["en"], "tags": ["summarization", "pegasus", "long context"], "pipeline_tag": "fill-mask"}
ccdv/lsg-pegasus-large-4096
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "summarization", "long context", "fill-mask", "custom_code", "en", "arxiv:2210.15497", "arxiv:1912.08777", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-100k-VoxPopuli-Català

**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL:**

https://huggingface.co/softcatala/wav2vec2-large-100k-voxpopuli-catala

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on the Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.

**Attention:** The train/dev/test split used does not fully map to the CommonVoice 6.1 dataset. A custom split combining both the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER, as 1144 audio files of that dataset were used in training/evaluation of this model.

WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv), which was not seen by the model during training/evaluation.

You can find training and evaluation scripts in the GitHub repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala).

When using this model, make sure that your speech input is sampled at 16kHz.

## Results

Word error rate was evaluated on the following datasets unseen by the model:

| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) | 5.98% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.14% |
| Audiobook “La llegenda de Sant Jordi” | 12.02% |

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
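To reproduce a word error rate figure on your own split, one common choice is the `jiwer` package (an assumption here, not something this card prescribes). The sketch below continues from the usage snippet above and compares the decoded predictions with the reference transcriptions; note that the numbers in the table were computed on the full custom test split, not on this two-sample slice, and the exact text normalisation used there may differ.

```python
from jiwer import wer

# `predicted_ids`, `processor` and `test_dataset` come from the usage snippet above
predictions = processor.batch_decode(predicted_ids)
references = test_dataset["sentence"][:2]

score = wer([r.lower() for r in references], [p.lower() for p in predictions])
print("WER: {:.2%}".format(score))
```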
{"language": "ca", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "speech-to-text"], "datasets": ["common_voice", "parlament_parla"], "metrics": ["wer"]}
ccoreilly/wav2vec2-large-100k-voxpopuli-catala
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "speech-to-text", "ca", "dataset:common_voice", "dataset:parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-Català

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.

**Attention:** The train/dev/test split used does not fully map to the CommonVoice 6.1 dataset. A custom split combining both the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test dataset will produce a biased WER, as 1144 audio files of that dataset were used in training/evaluation of this model.

WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv), which was not seen by the model during training/evaluation.

You can find training and evaluation scripts in the GitHub repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala).

When using this model, make sure that your speech input is sampled at 16kHz.

## Results

Word error rate was evaluated on the following datasets unseen by the model:

| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) | 6.92% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.99% |
| Audiobook “La llegenda de Sant Jordi” | 13.23% |

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
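The usage snippet above works on a Common Voice slice; for a local recording you mainly need to downmix to mono and resample to 16 kHz first. The sketch below does that with torchaudio; `audio.wav` is a placeholder path.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")

# Placeholder path: load any local file, downmix to mono and resample to 16 kHz
speech, sampling_rate = torchaudio.load("audio.wav")
speech = speech.mean(dim=0)
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```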
{"language": "ca", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "parlament_parla"], "metrics": ["wer"]}
ccoreilly/wav2vec2-large-xlsr-catala
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ca", "dataset:common_voice", "dataset:parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GIMPLEARN knows modeltest2

To generate a conversation, use an input such as `Human: What should I do?\nAI:` (see the sketch below).
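A minimal generation sketch, under the assumption that the checkpoint loads as a standard causal language model (the repository id is taken from this model page); the sampling settings are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cd-dvd/testmodel2")
model = AutoModelForCausalLM.from_pretrained("cd-dvd/testmodel2")

prompt = "Human: What should I do?\nAI:"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```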
{"tags": ["Text Generation"]}
cd-dvd/testmodel2
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "Text Generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cdiamana/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cdwyer1bod/first_model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cecilia/q
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
## A DialoGPT model trained on French OpenSubtitles with a custom tokenizer

Trained with this notebook: https://colab.research.google.com/drive/1pfCV3bngAmISNZVfDvBMyEhQKuYw37Rl#scrollTo=AyImj9qZYLRi&uniqifier=3

Config taken from microsoft/DialoGPT-medium.

Dataset generated from the 2018 OpenSubtitles corpus downloaded from OPUS, following these guidelines https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles with this notebook https://colab.research.google.com/drive/1uyh3vJ9nEjqOHI68VD73qxt4olJzODxi#scrollTo=deaacv4XfLMk

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("cedpsam/chatbot_fr")
model = AutoModelWithLMHead.from_pretrained("cedpsam/chatbot_fr")

for step in range(6):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens;
    # do_sample=True is needed for the top_p/top_k sampling settings to take effect
    chat_history_ids = model.generate(
        bot_input_ids, max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_p=0.92, top_k=50
    )

    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
{"language": "fr", "tags": ["conversational"], "widget": [{"text": "bonjour."}, {"text": "mais encore"}, {"text": "est ce que l'argent achete le bonheur?"}]}
cedpsam/chatbot_fr
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "conversational", "fr", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
celential/erc
null
[ "transformers", "pytorch", "safetensors", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
celine/emotion-detection_indobenchmark-indobert-lite-base-p1
null
[ "transformers", "tf", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
celine/hate-speech_indobenchmark-indobert-lite-base-p1
null
[ "transformers", "tf", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
celineguan/wav2vec2-base-timit-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
celinelee/answer-extraction
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
A Chinese pretrained BERT model for the environmental domain, trained on top of hfl/chinese-bert-wwm-ext. The goal is to first learn general Chinese language representations and then acquire specialised knowledge of the environmental field.

The 1.5 GB pretraining corpus covers the water environment, atmospheric environment, soil environment, climate change, Chinese journals, national policies, and related content.

The project is ongoing and further updates will follow.

Research group of the School of Environment, Tsinghua University. For requests or suggestions, contact [email protected]
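A minimal fill-mask sketch, using one of the model's widget example sentences; the pipeline call itself is standard Transformers usage rather than anything specific to this model.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="celtics1863/env-bert-chinese")

# "总[MASK]是水环境中的重要污染物。" — "Total [MASK] is an important pollutant in the water environment."
for pred in fill_mask("总[MASK]是水环境中的重要污染物。", top_k=3):
    print(pred["token_str"], round(pred["score"], 4))
```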
{"language": "zh", "tags": ["pretrain", "pytorch", "environment"], "widget": [{"text": "\u603b[MASK]\u662f\u6c34\u73af\u5883\u4e2d\u7684\u91cd\u8981\u6c61\u67d3\u7269\u3002"}, {"text": "\u6c14[MASK]\u53d8\u5316\u662f\u91cd\u8981\u7684\u5168\u7403\u73af\u5883\u95ee\u9898\u3002"}]}
celtics1863/env-bert-chinese
null
[ "transformers", "pytorch", "bert", "fill-mask", "pretrain", "environment", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
A Chinese environmental text classification model, fine-tuned from env-bert-chinese on a 1.6M dataset.

It classifies text into 10 categories: environmental impact assessment and control, carbon emission control, water pollution control, air pollution control, soil pollution control, environmental ecology, solid waste, environmental toxicology and health, environmental microbiology, and environmental policy and economics.

The project is ongoing and further updates will follow.

Research group of the School of Environment, Tsinghua University. For requests or suggestions, contact [email protected]
{"language": ["zh"], "tags": ["bert", "pytorch", "environment", "multi-class", "classification"]}
celtics1863/env-bert-cls-chinese
null
[ "transformers", "pytorch", "bert", "text-classification", "environment", "multi-class", "classification", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
celtics1863/env-bert-large-chinese
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
话题分类模型,使用某乎"环境"话题下所有子话题,过滤后得69类。 top1 acc 60.7, top3 acc 81.6, 可以用于中文环境文本挖掘的预处理步骤。 标签: "生态环境","水污染", "野生动物保护", "太阳能", "环保经济", "污水处理", "绿色建筑", "水处理", "噪音污染", "温室效应", "净水设备", "净水器", "自来水", "生活", "环境评估", "空气污染", "环境评价", "工业污染", "雾霾", "植树", "环保行业", "水处理工程", "沙漠治理", "巴黎协定", "核能", "噪音", "环评工程师", "二氧化碳", "低碳", "自然环境", "沙尘暴", "环境工程", "秸秆焚烧", "PM 2.5", "太空垃圾", "穹顶之下(纪录片)", "垃圾", "环境科学", "净水", "污水排放", "室内空气污染", "环境污染", "全球变暖", "邻居噪音", "土壤污染", "生物多样性", "碳交易", "污染治理", "雾霾治理", "碳金融", "建筑节能", "风能及风力发电", "温室气体", "环境保护", "碳排放", "垃圾处理器", "气候变化", "化学污染", "地球一小时", "环保组织", "物种多样性", "节能减排", "核污染", "环保督查", "垃圾处理", "垃圾分类", "重金属污染", "环境伦理学", "垃圾焚烧"
{"language": "zh", "tags": ["pretrain", "pytorch", "environment", "classification", "topic classification"], "widget": [{"text": "\u7f8e\u56fd\u9000\u51fa\u300a\u5df4\u9ece\u534f\u5b9a\u300b"}, {"text": "\u6c61\u6c34\u5904\u7406\u5382\u4e2d\u7684\u529f\u8017\u9700\u8981\u51cf\u5c11"}]}
celtics1863/env-bert-topic
null
[ "transformers", "pytorch", "bert", "text-classification", "pretrain", "environment", "classification", "topic classification", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
cemdenizsel/10k-finetuned-bert-model
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
cemdenizsel/51k-finetuned-bert-model
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
cemdenizsel/51k-pretrained-bert-model
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
tags: - array - of - tags license: "any valid license identifier"
{}
cemigo/cemigo-test-model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cemigo/test-bart
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
centon21/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "conversational", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter Fanfiction Generator

This is a pre-trained GPT-2 generative text model that allows you to generate your own Harry Potter fanfiction, trained on the top 100 rated fanfiction stories. We intend for this to be used for individual fun and experimentation and not as a commercial product. A usage sketch is given below.
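A minimal sketch using the standard text-generation pipeline; the opening prompt and the sampling settings are made up, so adjust them to taste.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ceostroff/harry-potter-gpt2-fanfiction")

story = generator(
    "Harry stared at the Marauder's Map and frowned.",
    max_length=120,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
    num_return_sequences=1,
)
print(story[0]["generated_text"])
```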
{"language": ["en"], "license": "mit", "tags": ["harry-potter"]}
ceostroff/harry-potter-gpt2-fanfiction
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "harry-potter", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{"license": "afl-3.0"}
cerulianx/dall-e
null
[ "license:afl-3.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# TinyBERT_L-4_H-312_v2 English Sentence Encoder

This is distilled from the `bert-base-nli-stsb-mean-tokens` pre-trained model from [Sentence-Transformers](https://sbert.net/). The embedding vector is obtained by mean/average pooling of the last layer's hidden states.

Update 20210325: Added the attention-matrix imitation objective as in the TinyBERT paper, and the distillation target has been changed from `distilbert-base-nli-stsb-mean-tokens` to `bert-base-nli-stsb-mean-tokens` (they have almost the same STSb performance).

## Model Comparison

We compute cosine similarity scores of the embeddings of each sentence pair to get the Spearman correlation on the STS benchmark (bigger is better):

| | Dev | Test |
| ------------------------------------ | ----- | ----- |
| bert-base-nli-stsb-mean-tokens | .8704 | .8505 |
| distilbert-base-nli-stsb-mean-tokens | .8667 | .8516 |
| TinyBERT_L-4_H-312_v2-distill-AllNLI | .8587 | .8283 |
| TinyBERT_L-4_H (20210325) | .8551 | .8341 |
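The card does not ship a usage snippet, so here is a minimal sketch of the mean-pooling step described above; masking out padding tokens before averaging is an assumption borrowed from the usual Sentence-Transformers recipe.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

name = "ceshine/TinyBERT_L-4_H-312_v2-distill-AllNLI"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sentences = ["A man is playing a guitar.", "Someone is playing an instrument."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # (batch, seq_len, hidden_size)

# Mean pooling of the last layer over the non-padding tokens
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

print("Cosine similarity:", F.cosine_similarity(embeddings[0:1], embeddings[1:2]).item())
```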
{}
ceshine/TinyBERT_L-4_H-312_v2-distill-AllNLI
null
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# T5-base paraphrasing model fine-tuned on PAWS, MSRP, and Opinosis

More details in the [ceshine/finetuning-t5 GitHub repo](https://github.com/ceshine/finetuning-t5/tree/master/paraphrase). A generation sketch is given below.
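A minimal paraphrase-generation sketch; the exact input formatting (for example, whether a task prefix is expected) is defined in the linked repo, so this example simply feeds the raw sentence and returns a few beams.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "ceshine/t5-paraphrase-paws-msrp-opinosis"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_length=64)

for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```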
{"language": "en", "license": "apache-2.0", "tags": ["t5", "paraphrasing", "paraphrase"]}
ceshine/t5-paraphrase-paws-msrp-opinosis
null
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "paraphrasing", "paraphrase", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# T5-base paraphrasing model fine-tuned on PAWS and Quora

More details in the [ceshine/finetuning-t5 GitHub repo](https://github.com/ceshine/finetuning-t5/tree/master/paraphrase).
{"language": "en", "license": "apache-2.0", "tags": ["t5", "paraphrasing", "paraphrase"]}
ceshine/t5-paraphrase-quora-paws
null
[ "transformers", "pytorch", "jax", "safetensors", "t5", "text2text-generation", "paraphrasing", "paraphrase", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
cestwc/bart-base-concise-baseline
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
cestwc/bart-paranmt5m
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
cestwc/roberta-base-bigram-binary
null
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
cestwc/roberta-base-unigram-quaternary
null
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00