Dataset columns (in row order below): pipeline_tag (string, 48 distinct values); library_name (string, 205 distinct values); text (string, 0 to 18.3M characters); metadata (string, 2 to 1.07B characters); id (string, 5 to 122 characters); last_modified (always null); tags (list, 1 to 1.84k items); sha (always null); created_at (string, 25 characters).
null
null
{}
RemineTheCat/mt5-small-finetuned-amazon-en-es
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RenZHU/t5-base-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RenZHU/t5-small-finetuned-xsum-1213
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RenZHU/t5-small-finetuned-xsum-2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-original This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4436 - Rouge1: 28.8838 - Rouge2: 8.1114 - Rougel: 22.8318 - Rougelsum: 22.8318 - Gen Len: 18.8141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6754 | 1.0 | 51012 | 2.4436 | 28.8838 | 8.1114 | 22.8318 | 22.8318 | 18.8141 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
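The card above reports ROUGE numbers but gives no usage snippet. A minimal summarization sketch (the model id comes from this row; the example text and the length settings are illustrative assumptions, not from the card):

```python
# Minimal usage sketch (assumed, not part of the original card).
from transformers import pipeline

summarizer = pipeline("summarization", model="RenZHU/t5-small-finetuned-xsum-original")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris."
)
# max_length / min_length are illustrative; tune them for your inputs.
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```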
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-xsum-original", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "xsum", "type": "xsum", "args": "default"}, "metrics": [{"type": "rouge", "value": 28.8838, "name": "Rouge1"}]}]}]}
RenZHU/t5-small-finetuned-xsum-original
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.5310 - Rouge1: 27.9232 - Rouge2: 7.5324 - Rougel: 22.035 - Rougelsum: 22.0304 - Gen Len: 18.8116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | 2.7564 | 1.0 | 51012 | 2.5310 | 27.9232 | 7.5324 | 22.035 | 22.0304 | 18.8116 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
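This card reports the same family of ROUGE metrics. A sketch of how such scores are computed (the training run used the older `datasets` metric API; the `evaluate` package shown here is its current equivalent, and the example strings are placeholders):

```python
# Sketch (assumed): compute ROUGE the way these cards report it.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],       # model summaries
    references=["a cat was sitting on the mat"],  # gold summaries
)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum F-measures
```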
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
RenZHU/t5-small-finetuned-xsum
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Renee/gpt2-chinese-lyric
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rubert-base-srl-seqlabeling This model is a fine-tuned version of [./ruBert-base/](https://huggingface.co/./ruBert-base/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1723 - Causator Precision: 0.8539 - Causator Recall: 0.8352 - Causator F1: 0.8444 - Causator Number: 91 - Expiriencer Precision: 0.9259 - Expiriencer Recall: 0.9740 - Expiriencer F1: 0.9494 - Expiriencer Number: 77 - Instrument Precision: 0.375 - Instrument Recall: 1.0 - Instrument F1: 0.5455 - Instrument Number: 3 - Other Precision: 0.0 - Other Recall: 0.0 - Other F1: 0.0 - Other Number: 1 - Predicate Precision: 0.9352 - Predicate Recall: 0.9902 - Predicate F1: 0.9619 - Predicate Number: 102 - Overall Precision: 0.8916 - Overall Recall: 0.9307 - Overall F1: 0.9107 - Overall Accuracy: 0.9667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Causator Precision | Causator Recall | Causator F1 | Causator Number | Expiriencer Precision | Expiriencer Recall | Expiriencer F1 | Expiriencer Number | Instrument Precision | Instrument Recall | Instrument F1 | Instrument Number | Other Precision | Other Recall | Other F1 | Other Number | Predicate Precision | Predicate Recall | Predicate F1 | Predicate Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------:|:------------:|:--------:|:------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.2552 | 1.0 | 56 | 0.3471 | 0.8841 | 0.6703 | 0.7625 | 91 | 0.8421 | 0.8312 | 0.8366 | 77 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9259 | 0.9804 | 0.9524 | 102 | 0.8893 | 0.8212 | 0.8539 | 0.9203 | | 0.2385 | 2.0 | 112 | 0.1608 | 0.9103 | 0.7802 | 0.8402 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.2857 | 0.6667 | 0.4 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9519 | 0.9706 | 0.9612 | 102 | 0.9182 | 0.9015 | 0.9098 | 0.9554 | | 0.0367 | 3.0 | 168 | 0.1311 | 0.8902 | 0.8022 | 0.8439 | 91 | 0.9375 | 0.9740 | 0.9554 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9709 | 0.9804 | 0.9756 | 102 | 0.9228 | 0.9161 | 0.9194 | 0.9673 | | 0.0494 | 4.0 | 224 | 0.1507 | 0.7812 | 0.8242 | 0.8021 | 91 | 0.9241 | 0.9481 | 0.9359 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9524 | 0.9804 | 0.9662 | 102 | 0.8746 | 0.9161 | 0.8948 | 0.9637 | | 0.0699 | 5.0 | 280 | 0.1830 | 0.8276 | 0.7912 | 0.8090 | 91 | 0.8941 | 0.9870 | 0.9383 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.875 | 0.9197 | 0.8968 | 0.9560 | | 0.0352 | 6.0 | 
336 | 0.1994 | 0.7857 | 0.8462 | 0.8148 | 91 | 0.9048 | 0.9870 | 0.9441 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9266 | 0.9902 | 0.9573 | 102 | 0.8595 | 0.9380 | 0.8970 | 0.9572 | | 0.0186 | 7.0 | 392 | 0.1657 | 0.8652 | 0.8462 | 0.8556 | 91 | 0.9146 | 0.9740 | 0.9434 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 | | 0.0052 | 8.0 | 448 | 0.1716 | 0.8556 | 0.8462 | 0.8508 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8920 | 0.9343 | 0.9127 | 0.9673 | | 0.0094 | 9.0 | 504 | 0.1715 | 0.8444 | 0.8352 | 0.8398 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.4286 | 1.0 | 0.6 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 | | 0.0078 | 10.0 | 560 | 0.1723 | 0.8539 | 0.8352 | 0.8444 | 91 | 0.9259 | 0.9740 | 0.9494 | 77 | 0.375 | 1.0 | 0.5455 | 3 | 0.0 | 0.0 | 0.0 | 1 | 0.9352 | 0.9902 | 0.9619 | 102 | 0.8916 | 0.9307 | 0.9107 | 0.9667 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
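The card lists per-role precision/recall (Causator, Expiriencer, Instrument, Predicate) but no inference example. A minimal token-classification sketch (the model id comes from this row; the Russian sentence and the aggregation setting are illustrative assumptions):

```python
# Sketch (assumed): label semantic roles in a Russian sentence.
from transformers import pipeline

srl = pipeline(
    "token-classification",
    model="Rexhaif/rubert-base-srl-seqlabeling",
    aggregation_strategy="simple",  # merge subword pieces into role spans
)
for span in srl("Мама мыла раму."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```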
{"tags": ["generated_from_trainer"], "model-index": [{"name": "rubert-base-srl-seqlabeling", "results": []}]}
Rexhaif/rubert-base-srl-seqlabeling
null
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rubert-base-srl This model is a fine-tuned version of [./ruBert-base/](https://huggingface.co/./ruBert-base/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2429 - F1: 0.9563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5816 | 1.0 | 57 | 0.3865 | 0.8371 | | 0.3685 | 2.0 | 114 | 0.1707 | 0.9325 | | 0.1057 | 3.0 | 171 | 0.0972 | 0.9563 | | 0.0964 | 4.0 | 228 | 0.1429 | 0.9775 | | 0.1789 | 5.0 | 285 | 0.2493 | 0.9457 | | 0.0016 | 6.0 | 342 | 0.1900 | 0.6349 | | 0.0013 | 7.0 | 399 | 0.2060 | 0.9563 | | 0.0008 | 8.0 | 456 | 0.2321 | 0.9563 | | 0.0006 | 9.0 | 513 | 0.2412 | 0.9563 | | 0.0006 | 10.0 | 570 | 0.2429 | 0.9563 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
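For the sequence-classification variant, a direct forward pass exposes the class probabilities. A sketch (the label set is not documented in the card, so `id2label` is read from the checkpoint config; the input sentence is a placeholder):

```python
# Sketch (assumed): score a sentence with the SRL classifier head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Rexhaif/rubert-base-srl"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Мама мыла раму.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```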
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "rubert-base-srl", "results": []}]}
Rexhaif/rubert-base-srl
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
ReynaQuita/twitter_disaster_bart
null
[ "transformers", "pytorch", "bart", "text-classification", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
ReynaQuita/twitter_disaster_bert_large
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
ReynaQuita/twitter_disaster_distilbert
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RiNRin/r
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-bert-mrpc This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4382 - Accuracy: 0.8676 - F1: 0.9085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5454 | 1.0 | 230 | 0.4396 | 0.8309 | 0.8871 | | 0.3387 | 2.0 | 460 | 0.3783 | 0.8529 | 0.8976 | | 0.1956 | 3.0 | 690 | 0.4382 | 0.8676 | 0.9085 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
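MRPC is a sentence-pair (paraphrase) task, so inference needs both sentences. A minimal sketch (the model id comes from this row; the sentence pair is illustrative):

```python
# Sketch (assumed): MRPC paraphrase check with a text/text_pair input.
from transformers import pipeline

clf = pipeline("text-classification", model="Riad/finetuned-bert-mrpc")
print(clf({"text": "The company said profits rose in the third quarter.",
           "text_pair": "Profits increased in Q3, the company reported."}))
```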
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned-bert-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8676470588235294, "name": "Accuracy"}, {"type": "f1", "value": 0.9084745762711864, "name": "F1"}]}]}]}
Riad/finetuned-bert-mrpc
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Riad/my-awesome-model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RickyLeRoi/model_rock
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
[Github](https://github.com/rifkybujana/IndoBERT-QA) This project is part of my research with my friend Muhammad Fajrin Buyang Daffa, entitled "Teman Belajar: Asisten Digital Pelajar SMA Negeri 28 Jakarta dalam Membaca" ("Study Buddy: A Digital Assistant for SMA Negeri 28 Jakarta Students in Reading") for KOPSI (Kompetisi Penelitian Siswa Indonesia/Indonesian Student Research Competition). ## indoBERT Base-Uncased fine-tuned on Translated SQuAD v2.0 [IndoBERT](https://huggingface.co/indolem/indobert-base-uncased) trained by [IndoLEM](https://indolem.github.io/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesian_datasets/tree/master/question-answering/squad) for the **Q&A** downstream task. **Model Size** (after training): 420 MB ## Details of indoBERT (from their documentation) [IndoBERT](https://huggingface.co/indolem/indobert-base-uncased) is the Indonesian version of the BERT model. We train the model using over 220M words, aggregated from three main sources: - Indonesian Wikipedia (74M words) - news articles from Kompas, Tempo (Tala et al., 2003), and Liputan6 (55M words in total) - an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words). We trained the model for 2.4M steps (180 epochs) with the final perplexity over the development set being 3.97 (similar to English BERT-base). This IndoBERT was used to examine IndoLEM, an Indonesian benchmark that comprises seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse.[[1]](#1) ## Details of the downstream task (Q&A) - Dataset SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model Training The model was trained on a Tesla T4 GPU with 12 GB of RAM. ## Results | Metric | Value | | ------ | --------- | | **EM** | **51.61** | | **F1** | **69.09** | ## Simple Usage ```py from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="Rifky/Indobert-QA", tokenizer="Rifky/Indobert-QA" ) qa_pipeline({ 'context': """Pangeran Harya Dipanegara (atau biasa dikenal dengan nama Pangeran Diponegoro, lahir di Ngayogyakarta Hadiningrat, 11 November 1785 – meninggal di Makassar, Hindia Belanda, 8 Januari 1855 pada umur 69 tahun) adalah salah seorang pahlawan nasional Republik Indonesia, yang memimpin Perang Diponegoro atau Perang Jawa selama periode tahun 1825 hingga 1830 melawan pemerintah Hindia Belanda. Sejarah mencatat, Perang Diponegoro atau Perang Jawa dikenal sebagai perang yang menelan korban terbanyak dalam sejarah Indonesia, yakni 8.000 korban serdadu Hindia Belanda, 7.000 pribumi, dan 200 ribu orang Jawa serta kerugian materi 25 juta Gulden.""", 'question': "kapan pangeran diponegoro lahir?" }) ``` *output:* ```py { 'answer': '11 November 1785', 'end': 131, 'score': 0.9272009134292603, 'start': 115 } ``` ### Reference <a id="1">[1]</a> Fajri Koto, Afshin Rahimi, Jey Han Lau and Timothy Baldwin. 2020. IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP. Proceedings of the 28th COLING.
{"language": "id", "license": "apache-2.0", "tags": ["indobert", "indolem"], "datasets": ["220M words (IndoWiki, IndoWC, News)", "Squad 2.0 (Indonesian translated)"], "widget": [{"text": "kapan pangeran diponegoro lahir?", "context": "Pangeran Harya Dipanegara (atau biasa dikenal dengan nama Pangeran Diponegoro, lahir di Ngayogyakarta Hadiningrat, 11 November 1785 \u2013 meninggal di Makassar, Hindia Belanda, 8 Januari 1855 pada umur 69 tahun) adalah salah seorang pahlawan nasional Republik Indonesia, yang memimpin Perang Diponegoro atau Perang Jawa selama periode tahun 1825 hingga 1830 melawan pemerintah Hindia Belanda. Sejarah mencatat, Perang Diponegoro atau Perang Jawa dikenal sebagai perang yang menelan korban terbanyak dalam sejarah Indonesia, yakni 8.000 korban serdadu Hindia Belanda, 7.000 pribumi, dan 200 ribu orang Jawa serta kerugian materi 25 juta Gulden."}]}
Rifky/Indobert-QA
null
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "indobert", "indolem", "id", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
RifsxD/DialoGPT-medium-raifu
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rin/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
object-detection
null
<div align="left"> ## You Only Look Once for Panoptic Driving Perception > [**You Only Look Once for Panoptic Driving Perception**](https://arxiv.org/abs/2108.11250) > > by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm) > > *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))* --- ### The Illustration of YOLOP ![yolop](pictures/yolop.png) ### Contributions * We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection, saving computational cost and reducing inference time while improving the performance of each task. Our work is the first to reach real time on embedded devices while maintaining state-of-the-art-level performance on the `BDD100K` dataset. * We design ablative experiments to verify the effectiveness of our multi-task scheme, showing that the three tasks can be learned jointly without tedious alternating optimization. ### Results #### Traffic Object Detection Result | Model | Recall(%) | mAP50(%) | Speed(fps) | | -------------- | --------- | -------- | ---------- | | `Multinet` | 81.3 | 60.2 | 8.6 | | `DLT-Net` | 89.4 | 68.4 | 9.3 | | `Faster R-CNN` | 77.2 | 55.6 | 5.3 | | `YOLOv5s` | 86.8 | 77.2 | 82 | | `YOLOP(ours)` | 89.2 | 76.5 | 41 | #### Drivable Area Segmentation Result | Model | mIOU(%) | Speed(fps) | | ------------- | ------- | ---------- | | `Multinet` | 71.6 | 8.6 | | `DLT-Net` | 71.3 | 9.3 | | `PSPNet` | 89.6 | 11.1 | | `YOLOP(ours)` | 91.5 | 41 | #### Lane Detection Result | Model | mIOU(%) | IOU(%) | | ------------- | ------- | ------ | | `ENet` | 34.12 | 14.64 | | `SCNN` | 35.79 | 15.84 | | `ENet-SAD` | 36.56 | 16.02 | | `YOLOP(ours)` | 70.50 | 26.20 | #### Ablation Studies 1: End-to-end vs. Step-by-step | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | | --------------- | --------- | ----- | ------- | ----------- | ------ | | `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 | | `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 | | `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 | | `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 | | `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | #### Ablation Studies 2: Multi-task vs.
Single task | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) | | --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- | | `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 | | `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 | | `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 | | `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 | **Notes**: - The works we have used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf?utm_campaign=affiliate-ir-Optimise%20media%28%20South%20East%20Asia%29%20Pte.%20ltd._156_-99_national_R_all_ACQ_cpa_en&utm_content=&utm_source=%20388939), [code](https://github.com/MarvinTeichmann/MultiNet)), `DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)), `Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf), [code](https://github.com/ShaoqingRen/faster_rcnn)), `YOLOv5s` ([code](https://github.com/ultralytics/yolov5)), `PSPNet` ([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf), [code](https://github.com/hszhao/PSPNet)), `ENet` ([paper](https://arxiv.org/pdf/1606.02147.pdf), [code](https://github.com/osmr/imgclsmob)), `SCNN` ([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322), [code](https://github.com/XingangPan/SCNN)) and `SAD-ENet` ([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf), [code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful work. - In Table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the whole network. So the algorithm (first, train only the Encoder and Detect head; then freeze them and train the two Segmentation heads; finally, train the entire network jointly on all three tasks) is marked ED-S-W, and similarly for the others. --- ### Visualization #### Traffic Object Detection Result ![detect result](pictures/detect.png) #### Drivable Area Segmentation Result ![](pictures/da.png) #### Lane Detection Result ![](pictures/ll.png) **Notes**: - The lane detection visualizations have been post-processed with quadratic fitting.
--- ### Project Structure ```python ├─inference │ ├─images # inference images │ ├─output # inference results ├─lib │ ├─config/default # configuration of training and validation │ ├─core │ │ ├─activations.py # activation functions │ │ ├─evaluate.py # metric calculation │ │ ├─function.py # training and validation of the model │ │ ├─general.py # metric calculation, NMS, data-format conversion, visualization │ │ ├─loss.py # loss function │ │ ├─postprocess.py # postprocessing (refine da-seg and ll-seg, unrelated to paper) │ ├─dataset │ │ ├─AutoDriveDataset.py # superclass dataset, general functions │ │ ├─bdd.py # subclass dataset, specific functions │ │ ├─hust.py # subclass dataset (campus scene, unrelated to paper) │ │ ├─convect.py │ │ ├─DemoDataset.py # demo dataset (image, video and stream) │ ├─models │ │ ├─YOLOP.py # setup and configuration of the model │ │ ├─light.py # model lightweighting (unrelated to paper, zwt) │ │ ├─commom.py # calculation modules │ ├─utils │ │ ├─augmentations.py # data augmentation │ │ ├─autoanchor.py # auto anchor (k-means) │ │ ├─split_dataset.py # (campus scene, unrelated to paper) │ │ ├─utils.py # logging, device selection, time measurement, optimizer selection, model save & initialize, distributed training │ ├─run │ │ ├─dataset/training time # visualization, logging and model saving ├─tools │ │ ├─demo.py # demo (folder, camera) │ │ ├─test.py │ │ ├─train.py ├─toolkits │ │ ├─depoly # deployment of the model ├─weights # pretrained models ``` --- ### Requirement This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+: ``` conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch ``` See `requirements.txt` for additional dependencies and version requirements. ```setup pip install -r requirements.txt ``` ### Data preparation #### Download - Download the images from [images](https://bdd-data.berkeley.edu/). - Download the detection annotations from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing). - Download the drivable area segmentation annotations from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing). - Download the lane line segmentation annotations from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing). We recommend the following dataset directory structure: ``` # The ids represent the correspondence relation ├─dataset root │ ├─images │ │ ├─train │ │ ├─val │ ├─det_annotations │ │ ├─train │ │ ├─val │ ├─da_seg_annotations │ │ ├─train │ │ ├─val │ ├─ll_seg_annotations │ │ ├─train │ │ ├─val ``` Update your dataset path in `./lib/config/default.py`. ### Training You can set the training configuration in `./lib/config/default.py` (including the loading of a preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, and batch_size). If you want to try alternating optimization or to train the model on a single task, set the corresponding option in `./lib/config/default.py` to `True`. (As shown below, all options default to `False`, which means the multiple tasks are trained end to end.)
```python # Alternating optimization _C.TRAIN.SEG_ONLY = False # Only train the two segmentation branches _C.TRAIN.DET_ONLY = False # Only train the detection branch _C.TRAIN.ENC_SEG_ONLY = False # Only train the encoder and the two segmentation branches _C.TRAIN.ENC_DET_ONLY = False # Only train the encoder and the detection branch # Single task _C.TRAIN.DRIVABLE_ONLY = False # Only train the da_segmentation task _C.TRAIN.LANE_ONLY = False # Only train the ll_segmentation task _C.TRAIN.DET_ONLY = False # Only train the detection task ``` Start training: ```shell python tools/train.py ``` ### Evaluation You can set the evaluation configuration in `./lib/config/default.py` (including batch_size and the NMS threshold). Start evaluating: ```shell python tools/test.py --weights weights/End-to-end.pth ``` ### Demo Test We provide two testing methods. #### Folder Store the images or video under the path given by `--source`; the inference results are saved to `--save-dir`: ```shell python tools/demo.py --source inference/images ``` #### Camera If a camera is connected to your computer, you can set `--source` to the camera number (the default is 0): ```shell python tools/demo.py --source 0 ``` ### Deployment Our model runs inference in real time on a `Jetson TX2`, with a `ZED` camera capturing images. We use `TensorRT` to speed up inference. Code for model deployment and inference is provided in `./toolkits/deploy`. ## Citation If you find our paper and code useful for your research, please consider giving a star and citation: ```BibTeX @misc{2108.11250, Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang}, Title = {YOLOP: You Only Look Once for Panoptic Driving Perception}, Year = {2021}, Eprint = {arXiv:2108.11250}, } ```
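For quick experimentation, the upstream repository also advertises a PyTorch Hub entry point. A minimal loading sketch (this assumes the `hustvl/yolop` hubconf is available and unchanged, and uses a dummy tensor rather than a real image):

```python
# Sketch (assumed): load YOLOP through torch.hub and run the three heads.
import torch

model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
img = torch.randn(1, 3, 640, 640)  # dummy input at the expected resolution
det_out, da_seg_out, ll_seg_out = model(img)  # detection, drivable area, lane line
```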
{"tags": ["object-detection"]}
Riser/YOLOP
null
[ "object-detection", "arxiv:2108.11250", "arxiv:1612.07695", "arxiv:1606.02147", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rick Morty DialoGPT Model
{"tags": ["conversational"]}
RishabhRawatt/DialoGPT-small-Rickmorty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Kela DialoGPT Model
{"tags": ["conversational"]}
RishabhRawatt/DialoGPT-small-kela
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ritchie/DialoGPT-small-Rick
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rick and Morty DialoGPT Model
{"tags": ["conversational"]}
Ritchie/DialoGPT-small-Rickandmorty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ritchie/DialoGPT-small-Rickbot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ritika14/DialoGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ritvik/new_mod
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Ritvik/nlp_model
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Ritvik/nlp_model_mini
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RixEtte/chatbot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Riyagarg01/Practice1
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Riyagarg01/practice
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
RizqFarIDN/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
RizqFarIDN/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RoacherM/models
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
RobW/distilbert-base-cased-finetuned-chunk-2
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
RobW/distilbert-base-cased-finetuned-chunk-3
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased-finetuned-chunk This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5180 - Precision: 0.8615 - Recall: 0.9088 - F1: 0.8845 - Accuracy: 0.8239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.8391 | 1.0 | 878 | 0.5871 | 0.8453 | 0.9035 | 0.8734 | 0.8054 | | 0.6134 | 2.0 | 1756 | 0.5447 | 0.8555 | 0.8983 | 0.8764 | 0.8142 | | 0.5565 | 3.0 | 2634 | 0.5180 | 0.8615 | 0.9088 | 0.8845 | 0.8239 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-cased-finetuned-chunk", "results": []}]}
RobW/distilbert-base-cased-finetuned-chunk
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
RobW/longformer-base-4096-finetuned-chunk-3
null
[ "transformers", "pytorch", "tensorboard", "longformer", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RobertRussell/medbert-cn
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-mnli-finetuned-cola This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8205 - Matthews Correlation: 0.6282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4713 | 1.0 | 535 | 0.5110 | 0.5797 | | 0.2678 | 2.0 | 1070 | 0.6648 | 0.5154 | | 0.1811 | 3.0 | 1605 | 0.6681 | 0.6121 | | 0.113 | 4.0 | 2140 | 0.8205 | 0.6282 | | 0.0831 | 5.0 | 2675 | 1.0413 | 0.6057 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "deberta-base-mnli-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.6281691768918801, "name": "Matthews Correlation"}]}]}]}
Roberta55/deberta-base-mnli-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "deberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Roberta55/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Roberta55/my-new-shiny-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Mikoto Jinba DialoGPT Model
{"tags": ["conversational"]}
RobinMari/DialoGPT-small-mikoto
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RobinMari/Mikoto
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Roblox22r/T
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Roboserg/best_model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rocketknight1/bert-base-cased-finetuned-imdb
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/bert-base-cased-finetuned-swag This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8709 - Train Accuracy: 0.6465 - Validation Loss: 0.6167 - Validation Accuracy: 0.7590 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.8709 | 0.6465 | 0.6167 | 0.7590 | 0 | ### Framework versions - Transformers 4.21.0.dev0 - TensorFlow 2.9.1 - Datasets 2.3.3.dev0 - Tokenizers 0.11.0
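These Keras-callback cards dump the optimizer as a raw config dict. Rebuilding it explicitly makes the schedule readable (the values below are taken from the card; the variable names are mine):

```python
# Sketch: the PolynomialDecay + Adam setup described in the card's config dump.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=9192,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```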
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/bert-base-cased-finetuned-swag", "results": []}]}
Rocketknight1/bert-base-cased-finetuned-swag
null
[ "transformers", "tf", "tensorboard", "bert", "multiple-choice", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/bert-base-cased-finetuned-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.3982 - Validation Loss: 6.2664 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 7.0679 | 6.4768 | 0 | | 6.3982 | 6.2664 | 1 | ### Framework versions - Transformers 4.21.0.dev0 - TensorFlow 2.9.1 - Datasets 2.3.3.dev0 - Tokenizers 0.11.0
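A minimal fill-mask sketch for this checkpoint (the row's tags list only TF weights, hence `framework="tf"`; the masked sentence is a placeholder):

```python
# Sketch (assumed): query the fill-mask head; BERT checkpoints use [MASK].
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Rocketknight1/bert-base-cased-finetuned-wikitext2",
    framework="tf",  # this repo ships TensorFlow weights only, per its tags
)
for pred in unmasker("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```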
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/bert-base-cased-finetuned-wikitext2", "results": []}]}
Rocketknight1/bert-base-cased-finetuned-wikitext2
null
[ "transformers", "tf", "tensorboard", "bert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8360 - Train Accuracy: 0.6631 - Validation Loss: 0.5885 - Validation Accuracy: 0.7706 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.8360 | 0.6631 | 0.5885 | 0.7706 | 0 | ### Framework versions - Transformers 4.18.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 2.0.1.dev0 - Tokenizers 0.11.0
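SWAG-style multiple choice has no default pipeline task, so inference goes through the multiple-choice head directly. A sketch (the model id comes from this row; the prompt and candidate endings are invented):

```python
# Sketch (assumed): score candidate endings with the multiple-choice head.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

name = "Rocketknight1/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForMultipleChoice.from_pretrained(name)

prompt = "She opened the umbrella because"
endings = ["it started to rain.", "the sun was shining."]
enc = tokenizer([prompt] * len(endings), endings, return_tensors="tf", padding=True)
# The head expects (batch, num_choices, seq_len), so add a batch axis.
inputs = {k: tf.expand_dims(v, 0) for k, v in enc.items()}
best = int(tf.argmax(model(inputs).logits, axis=-1)[0])
print(endings[best])
```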
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/bert-base-uncased-finetuned-swag", "results": []}]}
Rocketknight1/bert-base-uncased-finetuned-swag
null
[ "transformers", "tf", "tensorboard", "bert", "multiple-choice", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Rocketknight1/bert-fine-tuned-cola
null
[ "transformers", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
Rocketknight1/bert-finetuned-qa
null
[ "transformers", "tf", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Rocketknight1/callback_test
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Rocketknight1/checkpoint_test
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Rocketknight1/codeparrot-ds
null
[ "transformers", "tf", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Rocketknight1/distilbert-base-cased-finetuned-imdb
null
[ "transformers", "tf", "distilbert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3182 - Validation Loss: 0.4914 - Train Matthews Correlation: 0.5056 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5126 | 0.4638 | 0.4555 | 0 | | 0.3182 | 0.4914 | 0.5056 | 1 | ### Framework versions - Transformers 4.22.0.dev0 - TensorFlow 2.9.1 - Datasets 2.4.1.dev0 - Tokenizers 0.11.0
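Since these callback-generated repos hold TensorFlow weights, loading uses the TF model classes. A minimal sketch (the CoLA labels come from the checkpoint config; the sentence is a placeholder):

```python
# Sketch (assumed): acceptability scoring with the TF checkpoint.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

name = "Rocketknight1/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(["The book was written by John."], return_tensors="tf")
probs = tf.nn.softmax(model(inputs).logits, axis=-1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```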
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/distilbert-base-uncased-finetuned-cola", "results": []}]}
Rocketknight1/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2026 - Validation Loss: 0.0726 - Train Precision: 0.8945 - Train Recall: 0.9220 - Train F1: 0.9081 - Train Accuracy: 0.9793 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.2026 | 0.0726 | 0.8945 | 0.9220 | 0.9081 | 0.9793 | 0 | ### Framework versions - Transformers 4.21.0.dev0 - TensorFlow 2.9.1 - Datasets 2.3.3.dev0 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/distilbert-base-uncased-finetuned-ner", "results": []}]}
Rocketknight1/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "tf", "tensorboard", "distilbert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5124 - Train End Logits Accuracy: 0.6041 - Train Start Logits Accuracy: 0.5680 - Validation Loss: 1.1534 - Validation End Logits Accuracy: 0.6849 - Validation Start Logits Accuracy: 0.6443 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5124 | 0.6041 | 0.5680 | 1.1534 | 0.6849 | 0.6443 | 0 | ### Framework versions - Transformers 4.21.0.dev0 - TensorFlow 2.9.1 - Datasets 2.3.3.dev0 - Tokenizers 0.11.0
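A minimal extractive-QA sketch for this checkpoint (TF weights per the row's tags; the question and context are placeholders):

```python
# Sketch (assumed): extractive question answering with the SQuAD fine-tune.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Rocketknight1/distilbert-base-uncased-finetuned-squad",
    framework="tf",  # TensorFlow-only weights, per the row's tags
)
print(qa(question="What does the model predict?",
         context="The model predicts start and end logits over the context tokens."))
```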
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/distilbert-base-uncased-finetuned-squad", "results": []}]}
Rocketknight1/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8577 - Validation Loss: 3.6752 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8577 | 3.6752 | 0 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 1.17.0 - Tokenizers 0.11.0
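A minimal generation sketch for this checkpoint (TF weights per the row's tags; the prompt and sampling settings are illustrative):

```python
# Sketch (assumed): sample a continuation from the fine-tuned distilgpt2.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Rocketknight1/distilgpt2-finetuned-wikitext2",
    framework="tf",  # TensorFlow-only weights, per the row's tags
)
print(generator("The history of the town begins", max_new_tokens=30)[0]["generated_text"])
```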
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/distilgpt2-finetuned-wikitext2", "results": []}]}
Rocketknight1/distilgpt2-finetuned-wikitext2
null
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. No evaluation results were recorded for this run. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 1.17.0 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "distilroberta-base-finetuned-wikitext2", "results": []}]}
Rocketknight1/distilroberta-base-finetuned-wikitext2
null
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/gbert-base-germaner This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0340 - Validation Loss: 0.0881 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4176, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1345 | 0.0865 | 0 | | 0.0550 | 0.0878 | 1 | | 0.0340 | 0.0881 | 2 | ### Framework versions - Transformers 4.15.0.dev0 - TensorFlow 2.6.0 - Datasets 1.16.2.dev0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/gbert-base-germaner", "results": []}]}
Rocketknight1/gbert-base-germaner
null
[ "transformers", "tf", "tensorboard", "bert", "token-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/gpt2-finetuned-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 7.3062 - Validation Loss: 6.7676 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 7.3062 | 6.7676 | 0 | ### Framework versions - Transformers 4.21.0.dev0 - TensorFlow 2.9.1 - Datasets 2.3.3.dev0 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/gpt2-finetuned-wikitext2", "results": []}]}
Rocketknight1/gpt2-finetuned-wikitext2
null
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Rocketknight1/gpt2-wikitext2
null
[ "transformers", "tf", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Rocketknight1/marian-finetuned-kde4-en-to-fr

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6862
- Validation Loss: 0.8050
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0615 | 0.8832 | 0 |
| 0.7983 | 0.8211 | 1 |
| 0.6862 | 0.8050 | 2 |

### Framework versions

- Transformers 4.16.0.dev0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
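Assuming the uploaded weights load as-is, a brief usage sketch with the `pipeline` API (the example sentence is illustrative, not from the card):

```python
from transformers import pipeline

# framework="tf" because this repository ships TensorFlow weights only.
translator = pipeline(
    "translation",
    model="Rocketknight1/marian-finetuned-kde4-en-to-fr",
    framework="tf",
)
print(translator("Default to expanded threads")[0]["translation_text"])
```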
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/marian-finetuned-kde4-en-to-fr", "results": []}]}
Rocketknight1/marian-finetuned-kde4-en-to-fr
null
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Rocketknight1/model-card-callback-test-new

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0031
- Train Accuracy: 1.0
- Validation Loss: 0.0000
- Validation Accuracy: 1.0
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4647 | 0.6406 | 0.0057 | 1.0 | 0 |
| 0.0031 | 1.0 | 0.0000 | 1.0 | 1 |

### Framework versions

- Transformers 4.14.0.dev0
- TensorFlow 2.6.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/model-card-callback-test-new", "results": []}]}
Rocketknight1/model-card-callback-test-new
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Rocketknight1/model-card-test
null
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rocketknight1/model_card_test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# model_card_test2

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0031
- Train Accuracy: 1.0
- Validation Loss: 0.0000
- Validation Accuracy: 1.0
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4647 | 0.6406 | 0.0057 | 1.0 | 0 |
| 0.0031 | 1.0 | 0.0000 | 1.0 | 1 |

### Framework versions

- Transformers 4.14.0.dev0
- TensorFlow 2.6.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "model_card_test2", "results": []}]}
Rocketknight1/model_card_test2
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Rocketknight1/opus-mt-en-ROMANCE-finetuned-en-to-ro

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7140
- Validation Loss: 1.2757
- Train Bleu: 26.7914
- Train Gen Len: 41.4932
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 0.7140 | 1.2757 | 26.7914 | 41.4932 | 0 |

### Framework versions

- Transformers 4.21.0.dev0
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/opus-mt-en-ROMANCE-finetuned-en-to-ro", "results": []}]}
Rocketknight1/opus-mt-en-ROMANCE-finetuned-en-to-ro
null
[ "transformers", "tf", "tensorboard", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Rocketknight1/t5-small-finetuned-xsum

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7172
- Validation Loss: 2.3977
- Train Rouge1: 28.7469
- Train Rouge2: 7.9005
- Train Rougel: 22.5917
- Train Rougelsum: 22.6162
- Train Gen Len: 18.875
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.7172 | 2.3977 | 28.7469 | 7.9005 | 22.5917 | 22.6162 | 18.875 | 0 |

### Framework versions

- Transformers 4.16.0.dev0
- TensorFlow 2.8.0-rc0
- Datasets 1.17.0
- Tokenizers 0.11.0
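ROUGE columns like those above are typically produced by a metric callback during `fit()`. A rough sketch of that wiring with `KerasMetricCallback` (not the original script: `tokenizer`, `model`, `tf_train_dataset`, and `tf_eval_dataset` are assumed to already exist, and the standalone `evaluate` package is an assumption):

```python
import numpy as np
import evaluate
from transformers.keras_callbacks import KerasMetricCallback

rouge = evaluate.load("rouge")  # assumes the separate `evaluate` package

def rouge_fn(eval_predictions):
    predictions, labels = eval_predictions
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace the -100 padding used for labels before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return rouge.compute(predictions=decoded_preds, references=decoded_labels)

# predict_with_generate=True makes the callback call model.generate(), so the
# scores are computed on generated summaries rather than raw logits.
metric_callback = KerasMetricCallback(
    metric_fn=rouge_fn, eval_dataset=tf_eval_dataset, predict_with_generate=True
)
model.fit(tf_train_dataset, validation_data=tf_eval_dataset,
          callbacks=[metric_callback])
```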
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Rocketknight1/t5-small-finetuned-xsum", "results": []}]}
Rocketknight1/t5-small-finetuned-xsum
null
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Rocketknight1/test-bert-finetuned-ner
null
[ "transformers", "tf", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Rocketknight1/test-marian-finetuned-kde4-en-to-fr
null
[ "transformers", "tf", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# test-model-tf

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.14.0.dev0
- TensorFlow 2.6.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "test-model-tf", "results": []}]}
Rocketknight1/test-model-tf
null
[ "transformers", "tf", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rocketknight1/test-repo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Rocketknight1/test_callback_upload
null
[ "transformers", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rocketknight1/test_upload_model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rocketknight1/test_upload_model2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# transformers-qa

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9300
- Validation Loss: 1.1437
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5145 | 1.1500 | 0 |
| 0.9300 | 1.1437 | 1 |

### Framework versions

- Transformers 4.16.0.dev0
- TensorFlow 2.6.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
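Assuming the checkpoint is usable as-is, a brief extractive-QA usage sketch (the question and context are illustrative, not from the card):

```python
from transformers import pipeline

# framework="tf" because this repository ships TensorFlow weights only.
qa = pipeline(
    "question-answering",
    model="Rocketknight1/transformers-qa",
    framework="tf",
)
result = qa(
    question="Which base model was fine-tuned?",
    context="The transformers-qa model is a fine-tuned version of distilbert-base-cased.",
)
print(result["answer"], result["score"])
```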
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "transformers-qa", "results": []}]}
Rocketknight1/transformers-qa
null
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RockingGreatness/wav2vec2-large-xlsr-luganda-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rodolfo/hf
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio` or `streamlit`

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.

`pinned`: _boolean_
Whether the Space stays on top of your list.
{"title": "CLIP-Guided-Diffusion", "emoji": "\ud83d\udca9", "colorFrom": "purple", "colorTo": "red", "sdk": "gradio", "app_file": "app.py", "pinned": false}
Rodrigo/teste5
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Rohan-Kurdekar/Arabic_Bert_Model
null
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RohithK2028/Transformer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
RollingMuffin/scripts_ru
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rolv-Arild/roberta-base-ncc-512d
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Rolv-Arild/roberta-ncc-1shard-rolvb
null
[ "transformers", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rolv-Arild/wav2vec2-large-xls-r-300m-npsc-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Rolv-Arild/wav2vec2-large-xls-r-300m-turkish-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Rolv-Arild/xls-r-300m-npsc-2
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Rolv-Arild/xls-r-300m-npsc-3
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

#

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Wer: 0.1697

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4527 | 0.28 | 250 | 4.0144 | 1.0 |
| 3.1828 | 0.56 | 500 | 3.1369 | 1.0 |
| 2.9927 | 0.85 | 750 | 3.0183 | 1.0 |
| 2.9591 | 1.13 | 1000 | 2.9991 | 1.0 |
| 2.8989 | 1.41 | 1250 | 2.9000 | 1.0000 |
| 2.4286 | 1.69 | 1500 | 1.7688 | 0.9550 |
| 1.6765 | 1.98 | 1750 | 0.6842 | 0.4855 |
| 1.4521 | 2.26 | 2000 | 0.5096 | 0.3736 |
| 1.3589 | 2.54 | 2250 | 0.4479 | 0.3335 |
| 1.3136 | 2.82 | 2500 | 0.4056 | 0.3123 |
| 1.2856 | 3.11 | 2750 | 0.3870 | 0.2987 |
| 1.2283 | 3.39 | 3000 | 0.3646 | 0.2828 |
| 1.2053 | 3.67 | 3250 | 0.3499 | 0.2748 |
| 1.2087 | 3.95 | 3500 | 0.3345 | 0.2603 |
| 1.2002 | 4.24 | 3750 | 0.3320 | 0.2523 |
| 1.1383 | 4.52 | 4000 | 0.3117 | 0.2439 |
| 1.1364 | 4.8 | 4250 | 0.3198 | 0.2383 |
| 1.158 | 5.08 | 4500 | 0.3071 | 0.2342 |
| 1.108 | 5.37 | 4750 | 0.3011 | 0.2314 |
| 1.1025 | 5.65 | 5000 | 0.2875 | 0.2289 |
| 1.0697 | 5.93 | 5250 | 0.2926 | 0.2256 |
| 1.0904 | 6.21 | 5500 | 0.2695 | 0.2245 |
| 1.0802 | 6.5 | 5750 | 0.2602 | 0.2189 |
| 1.0882 | 6.78 | 6000 | 0.2603 | 0.2168 |
| 1.0881 | 7.06 | 6250 | 0.2540 | 0.2293 |
| 1.0378 | 7.34 | 6500 | 0.2614 | 0.2193 |
| 1.0397 | 7.63 | 6750 | 0.2707 | 0.2104 |
| 1.0296 | 7.91 | 7000 | 0.2483 | 0.2119 |
| 1.0249 | 8.19 | 7250 | 0.2483 | 0.2047 |
| 1.013 | 8.47 | 7500 | 0.2487 | 0.2042 |
| 1.0064 | 8.76 | 7750 | 0.2456 | 0.2016 |
| 1.0668 | 9.04 | 8000 | 0.2397 | 0.1995 |
| 1.0129 | 9.32 | 8250 | 0.2374 | 0.1994 |
| 1.0164 | 9.6 | 8500 | 0.2206 | 0.1992 |
| 0.975 | 9.89 | 8750 | 0.2247 | 0.1973 |
| 0.9849 | 10.17 | 9000 | 0.2325 | 0.1953 |
| 0.9826 | 10.45 | 9250 | 0.2301 | 0.1934 |
| 0.9835 | 10.73 | 9500 | 0.2192 | 0.1942 |
| 0.9676 | 11.02 | 9750 | 0.2266 | 0.1913 |
| 0.9627 | 11.3 | 10000 | 0.2193 | 0.1921 |
| 0.976 | 11.58 | 10250 | 0.2309 | 0.1882 |
| 0.969 | 11.86 | 10500 | 0.2268 | 0.1886 |
| 0.9611 | 12.15 | 10750 | 0.2322 | 0.1863 |
| 0.9397 | 12.43 | 11000 | 0.2197 | 0.1844 |
| 0.9601 | 12.71 | 11250 | 0.2211 | 0.1871 |
| 0.9718 | 12.99 | 11500 | 0.2079 | 0.1898 |
| 0.9347 | 13.28 | 11750 | 0.2054 | 0.1843 |
| 0.9377 | 13.56 | 12000 | 0.2031 | 0.1842 |
| 0.934 | 13.84 | 12250 | 0.2059 | 0.1806 |
| 0.9295 | 14.12 | 12500 | 0.2122 | 0.1861 |
| 0.935 | 14.41 | 12750 | 0.2072 | 0.1787 |
| 0.9021 | 14.69 | 13000 | 0.2105 | 0.1781 |
| 0.9193 | 14.97 | 13250 | 0.2035 | 0.1786 |
| 0.9214 | 15.25 | 13500 | 0.2035 | 0.1766 |
| 0.9048 | 15.54 | 13750 | 0.1964 | 0.1758 |
| 0.9006 | 15.82 | 14000 | 0.1984 | 0.1757 |
| 0.9027 | 16.1 | 14250 | 0.2022 | 0.1743 |
| 0.9083 | 16.38 | 14500 | 0.1969 | 0.1744 |
| 0.9761 | 16.67 | 14750 | 0.1963 | 0.1728 |
| 0.9311 | 16.95 | 15000 | 0.1960 | 0.1737 |
| 0.886 | 17.23 | 15250 | 0.1929 | 0.1726 |
| 0.8969 | 17.51 | 15500 | 0.1928 | 0.1734 |
| 0.9084 | 17.8 | 15750 | 0.1937 | 0.1713 |
| 0.8795 | 18.08 | 16000 | 0.1978 | 0.1709 |
| 0.8883 | 18.36 | 16250 | 0.1956 | 0.1703 |
| 0.8901 | 18.64 | 16500 | 0.1933 | 0.1705 |
| 0.8922 | 18.93 | 16750 | 0.1962 | 0.1711 |
| 0.8765 | 19.21 | 17000 | 0.1962 | 0.1711 |
| 0.8992 | 19.49 | 17250 | 0.1965 | 0.1703 |
| 0.8778 | 19.77 | 17500 | 0.1957 | 0.1699 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
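No usage example is recorded in the card. A minimal inference sketch for a CTC checkpoint like this one, assuming a processor was uploaded alongside the model (the silent waveform is a stand-in for real audio):

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "Rolv-Arild/xls-r-300m-npsc-4"
processor = Wav2Vec2Processor.from_pretrained(repo)  # assumes the repo ships a processor
model = Wav2Vec2ForCTC.from_pretrained(repo)

# One second of silence at 16 kHz stands in for a real waveform.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```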
{"license": "apache-2.0", "tags": ["automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer"], "model-index": [{"name": "", "results": []}]}
Rolv-Arild/xls-r-300m-npsc-4
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "NbAiLab/NPSC", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

#

This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2965
- Wer: 0.3144

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.888 | 0.51 | 400 | 3.7320 | 0.9440 |
| 3.1636 | 1.02 | 800 | 2.9188 | 1.1916 |
| 2.773 | 1.53 | 1200 | 2.3347 | 1.0134 |
| 0.7198 | 2.04 | 1600 | 0.6678 | 0.4826 |
| 0.5255 | 2.55 | 2000 | 0.4605 | 0.4135 |
| 0.3961 | 3.06 | 2400 | 0.4266 | 0.3955 |
| 0.3424 | 3.57 | 2800 | 0.3786 | 0.3741 |
| 0.3858 | 4.08 | 3200 | 0.3161 | 0.3552 |
| 0.3218 | 4.59 | 3600 | 0.3029 | 0.3510 |
| 0.199 | 5.1 | 4000 | 0.2988 | 0.3418 |
| 0.2054 | 5.61 | 4400 | 0.2873 | 0.3434 |
| 0.1704 | 6.12 | 4800 | 0.3129 | 0.3432 |
| 0.1805 | 6.63 | 5200 | 0.2963 | 0.3413 |
| 0.2091 | 7.14 | 5600 | 0.2755 | 0.3329 |
| 0.1971 | 7.65 | 6000 | 0.2706 | 0.3309 |
| 0.1237 | 8.16 | 6400 | 0.2823 | 0.3270 |
| 0.123 | 8.67 | 6800 | 0.2754 | 0.3246 |
| 0.103 | 9.18 | 7200 | 0.2917 | 0.3272 |
| 0.1143 | 9.69 | 7600 | 0.2885 | 0.3305 |
| 0.156 | 10.2 | 8000 | 0.2810 | 0.3288 |
| 0.167 | 10.71 | 8400 | 0.2689 | 0.3232 |
| 0.0815 | 11.22 | 8800 | 0.2899 | 0.3236 |
| 0.0844 | 11.73 | 9200 | 0.2798 | 0.3225 |
| 0.0775 | 12.24 | 9600 | 0.2894 | 0.3224 |
| 0.0677 | 12.75 | 10000 | 0.2838 | 0.3204 |
| 0.1383 | 13.27 | 10400 | 0.2959 | 0.3211 |
| 0.1233 | 13.77 | 10800 | 0.2922 | 0.3213 |
| 0.0688 | 14.29 | 11200 | 0.2903 | 0.3209 |
| 0.0655 | 14.8 | 11600 | 0.2868 | 0.3182 |
| 0.0449 | 15.31 | 12000 | 0.2959 | 0.3172 |
| 0.0421 | 15.82 | 12400 | 0.2966 | 0.3180 |
| 0.0858 | 16.33 | 12800 | 0.2941 | 0.3164 |
| 0.0859 | 16.84 | 13200 | 0.2980 | 0.3165 |
| 0.0561 | 17.35 | 13600 | 0.2965 | 0.3165 |
| 0.0506 | 17.86 | 14000 | 0.2935 | 0.3148 |
| 0.0312 | 18.37 | 14400 | 0.2964 | 0.3154 |
| 0.0403 | 18.88 | 14800 | 0.2967 | 0.3160 |
| 0.0924 | 19.39 | 15200 | 0.2955 | 0.3147 |
| 0.0585 | 19.9 | 15600 | 0.2965 | 0.3144 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
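As above, the card records no usage example. A hypothetical inference sketch for a speech encoder-decoder checkpoint, assuming the repository also ships a feature extractor and tokenizer (the silent waveform is a stand-in for real audio):

```python
import numpy as np
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

repo = "Rolv-Arild/xls-r-300m-npsc-seq2seq"
model = SpeechEncoderDecoderModel.from_pretrained(repo)
feature_extractor = AutoFeatureExtractor.from_pretrained(repo)  # assumption: preprocessor config present
tokenizer = AutoTokenizer.from_pretrained(repo)                 # assumption: tokenizer present

# One second of silence at 16 kHz stands in for a real waveform.
speech = np.zeros(16_000, dtype=np.float32)
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
# Unlike the CTC model above, a seq2seq checkpoint decodes autoregressively.
generated_ids = model.generate(inputs.input_values)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```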
{"tags": ["generated_from_trainer"], "model-index": [{"name": "", "results": []}]}
Rolv-Arild/xls-r-300m-npsc-seq2seq
null
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Rolv-Arild/xls-r-300m-npsc
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Roman54/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ron12/OOP
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
RonnieTheCat/QG-System
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Rostlab/prot_albert
null
[ "transformers", "pytorch", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# ProtBert model

Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans). This model is trained on uppercase amino acids: it only works with capital-letter amino acids.

## Model description

ProtBert is based on the Bert model, pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those protein sequences.

One important difference between our Bert model and the original Bert version is the treatment of sequences as separate documents. This means the next-sentence-prediction objective is not used, as each sequence is treated as a complete document. The masking follows the original Bert training, randomly masking 15% of the amino acids in the input.

In the end, the features extracted from this model revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implies the model learned some of the grammar of the language of life as realized in protein sequences.

## Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks. We have noticed that on some tasks you can gain more accuracy by fine-tuning the model rather than using it as a feature extractor.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import BertForMaskedLM, BertTokenizer, pipeline
>>> tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
>>> model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T')

[{'score': 0.11088453233242035,
  'sequence': '[CLS] D L I P T S S K L V V L D T S L Q V K K A F F A L V T [SEP]',
  'token': 5,
  'token_str': 'L'},
 {'score': 0.08402521163225174,
  'sequence': '[CLS] D L I P T S S K L V V S D T S L Q V K K A F F A L V T [SEP]',
  'token': 10,
  'token_str': 'S'},
 {'score': 0.07328339666128159,
  'sequence': '[CLS] D L I P T S S K L V V V D T S L Q V K K A F F A L V T [SEP]',
  'token': 8,
  'token_str': 'V'},
 {'score': 0.06921856850385666,
  'sequence': '[CLS] D L I P T S S K L V V K D T S L Q V K K A F F A L V T [SEP]',
  'token': 12,
  'token_str': 'K'},
 {'score': 0.06382402777671814,
  'sequence': '[CLS] D L I P T S S K L V V I D T S L Q V K K A F F A L V T [SEP]',
  'token': 11,
  'token_str': 'I'}]
```

Here is how to use this model to get the features of a given protein sequence in PyTorch:

```python
from transformers import BertModel, BertTokenizer
import re

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")

sequence_Example = "A E T C Z A O"
sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example)
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
output = model(**encoded_input)
```

## Training data

The ProtBert model was pretrained on [Uniref100](https://www.uniprot.org/downloads), a dataset consisting of 217 million protein sequences.

## Training procedure

### Preprocessing

The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The rare amino acids "U, Z, O, B" were mapped to "X". The inputs of the model are then of the form:

```
[CLS] Protein Sequence A [SEP] Protein Sequence B [SEP]
```

Furthermore, each protein sequence was treated as a separate document. The preprocessing step was performed twice, once for a combined length (2 sequences) of less than 512 amino acids, and another time using a combined length (2 sequences) of less than 2048 amino acids.

The details of the masking procedure for each sequence followed the original Bert model, as follows:
- 15% of the amino acids are masked.
- In 80% of the cases, the masked amino acids are replaced by `[MASK]`.
- In 10% of the cases, the masked amino acids are replaced by a random amino acid (different from the one they replace).
- In the remaining 10% of cases, the masked amino acids are left as is.

This masking scheme is reproduced in the sketch at the end of this card.

### Pretraining

The model was trained on a single TPU Pod V3-512 for 400k steps in total: 300k steps using sequence length 512 (batch size 15k) and 100k steps using sequence length 2048 (batch size 2.5k). The optimizer used is Lamb with a learning rate of 0.002, a weight decay of 0.01, learning rate warmup for 40k steps and linear decay of the learning rate after.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Test results:

| Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | 75 | 63 | | |
| TS115 | 83 | 72 | | |
| CB513 | 81 | 66 | | |
| DeepLoc | | | 79 | 91 |

### BibTeX entry and citation info

```bibtex
@article{Elnaggar2020.07.12.199554,
	author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and Bhowmik, Debsindhu and Rost, Burkhard},
	title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
	elocation-id = {2020.07.12.199554},
	year = {2020},
	doi = {10.1101/2020.07.12.199554},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22 and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8-states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability: ProtTrans: https://github.com/agemagician/ProtTrans. Competing Interest Statement: The authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
	eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
	journal = {bioRxiv}
}
```

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
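As noted under "Preprocessing", the masking recipe is the standard BERT scheme; `transformers` implements it in `DataCollatorForLanguageModeling`, so it can be reproduced directly. A minimal sketch applied to a protein sequence:

```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)

# Selects 15% of tokens for masking; of those, 80% become [MASK], 10% become a
# random token, and 10% are left unchanged -- the recipe described above.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
batch = collator([tokenizer("D L I P T S S K L V V L D T S L Q V K K A F F A L V T")])
print(batch["input_ids"])
print(batch["labels"])  # -100 everywhere except the positions chosen for masking
```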
{"tags": ["protein language model", "protein"], "datasets": ["Uniref100"]}
Rostlab/prot_bert
null
[ "transformers", "pytorch", "fill-mask", "protein language model", "protein", "dataset:Uniref100", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00