Schema (one record per model; each row below lists the seven fields in this order):
- modelId: string (length 4 to 81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0 to 59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51 to 438k)
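As a rough illustration of how a table with this schema could be inspected programmatically, here is a minimal sketch using the `datasets` library; the dataset identifier `open-source-metadata/model-cards` is a hypothetical placeholder, not the actual source of this dump.

```python
from collections import Counter

from datasets import load_dataset

# Hypothetical dataset id -- replace with the actual source of this table.
ds = load_dataset("open-source-metadata/model-cards", split="train")

# Columns mirror the schema above: modelId, tags, pipeline_tag, config,
# downloads, first_commit, card.
print(ds.column_names)

# Example: count rows per pipeline_tag and find the most-downloaded model.
tag_counts = Counter(row["pipeline_tag"] for row in ds)
most_downloaded = max(ds, key=lambda row: row["downloads"])
print(tag_counts.most_common(5))
print(most_downloaded["modelId"], most_downloaded["downloads"])
```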
Cameron/BERT-eec-emotion
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
2022-07-28T17:27:40Z
--- language: en tags: - roberta-base - roberta-base-epoch_49 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 49 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_49. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
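The row above pairs a `text-classification` pipeline tag with a `BertForSequenceClassification` config. A minimal sketch of running such a checkpoint through the `pipeline` API follows; it assumes the `Cameron/BERT-eec-emotion` checkpoint is still available on the Hub, and its label set is not described in this table.

```python
from transformers import pipeline

# Text-classification checkpoint listed in the row above; availability and
# label names are assumptions, not facts taken from this table.
classifier = pipeline(
    "text-classification",
    model="Cameron/BERT-eec-emotion",
    device=-1,  # CPU
)

print(classifier("I am absolutely delighted with this result."))
# -> [{'label': ..., 'score': ...}]  (labels depend on the fine-tuning data)
```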
Cameron/BERT-jigsaw-identityhate
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
2022-07-28T17:28:26Z
--- language: en tags: - roberta-base - roberta-base-epoch_50 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 50 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_50. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Cameron/BERT-jigsaw-severetoxic
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-07-28T17:29:17Z
--- language: en tags: - roberta-base - roberta-base-epoch_51 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 51 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_51. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Cameron/BERT-mdgender-convai-ternary
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38
2022-07-28T17:30:47Z
--- language: en tags: - roberta-base - roberta-base-epoch_53 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 53 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_53. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Cameron/BERT-mdgender-wizard
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-07-28T17:31:39Z
--- language: en tags: - roberta-base - roberta-base-epoch_54 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 54 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_54. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Camzure/MaamiBot-test
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2022-07-28T17:33:30Z
--- language: en tags: - roberta-base - roberta-base-epoch_56 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 56 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_56. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Canadiancaleb/DialoGPT-small-jesse
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2022-07-28T17:35:06Z
--- language: en tags: - roberta-base - roberta-base-epoch_58 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 58 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_58. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
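Rows like `Canadiancaleb/DialoGPT-small-jesse` carry the `conversational` tag with a GPT-2 causal-LM config. Below is a minimal single-turn chat sketch using the common DialoGPT pattern of terminating each turn with the EOS token; the checkpoint's availability and reply quality are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT-style conversational checkpoint from the row above (availability assumed).
name = "Canadiancaleb/DialoGPT-small-jesse"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Encode one user turn, terminated by the EOS token as DialoGPT-style models expect.
user_input = "How are you doing today?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Generate the bot reply and decode only the newly generated tokens.
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```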
Canadiancaleb/DialoGPT-small-walter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2022-07-28T17:35:53Z
--- language: en tags: - roberta-base - roberta-base-epoch_59 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 59 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_59. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Canadiancaleb/jessebot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-28T17:36:36Z
--- language: en tags: - roberta-base - roberta-base-epoch_60 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 60 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_60. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-28T17:41:03Z
--- language: en tags: - roberta-base - roberta-base-epoch_62 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 62 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_62. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Capreolus/birch-bert-large-car_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-07-28T17:43:33Z
--- language: en tags: - roberta-base - roberta-base-epoch_64 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 64 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_64. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Capreolus/birch-bert-large-mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2022-07-28T17:45:42Z
--- language: en tags: - roberta-base - roberta-base-epoch_65 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 65 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_65. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Capreolus/birch-bert-large-msmarco_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2022-07-28T17:46:47Z
--- language: en tags: - roberta-base - roberta-base-epoch_66 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 66 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_66. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
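The Capreolus birch rows expose a `BertForNextSentencePrediction` head. A minimal sketch of scoring a sentence pair with that head is shown below; reading the score as query-passage relevance is how Birch-style rerankers are typically used, but that interpretation, like the checkpoint's availability, is an assumption here.

```python
import torch
from transformers import AutoTokenizer, BertForNextSentencePrediction

# Next-sentence-prediction checkpoint listed in the row above (assumptions noted in the lead-in).
name = "Capreolus/birch-bert-large-msmarco_mb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = BertForNextSentencePrediction.from_pretrained(name)

query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun."

inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2): [is_next, not_next]

score = torch.softmax(logits, dim=-1)[0, 0].item()
print(f"relevance-style score: {score:.3f}")
```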
Captain-1337/CrudeBERT
[ "pytorch", "bert", "text-classification", "arxiv:1908.10063", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
2022-07-28T17:48:39Z
--- language: en tags: - roberta-base - roberta-base-epoch_67 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 67 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_67. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Carlork314/Carlos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-28T17:50:57Z
--- language: en tags: - roberta-base - roberta-base-epoch_69 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 69 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_69. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-28T17:52:21Z
--- language: en tags: - roberta-base - roberta-base-epoch_70 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 70 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_70. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
CarlosTron/Yo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-28T17:53:20Z
--- language: en tags: - roberta-base - roberta-base-epoch_71 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 71 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_71. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
CasualHomie/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-07-28T17:55:51Z
--- language: en tags: - roberta-base - roberta-base-epoch_73 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 73 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_73. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Cat/Kitty
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-28T17:56:39Z
--- language: en tags: - roberta-base - roberta-base-epoch_74 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 74 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_74. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Cdial/hausa-asr
[ "wav2vec2", "automatic-speech-recognition", "ha", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-07-28T17:58:41Z
--- language: en tags: - roberta-base - roberta-base-epoch_76 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 76 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_76. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
Cedille/fr-boris
[ "pytorch", "gptj", "text-generation", "fr", "dataset:c4", "arxiv:2202.03371", "transformers", "causal-lm", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPTJForCausalLM" ], "model_type": "gptj", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
401
2022-07-28T17:59:57Z
--- language: en tags: - roberta-base - roberta-base-epoch_77 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 77 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_77. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
dccuchile/albert-base-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
2022-07-28T18:01:03Z
--- language: en tags: - roberta-base - roberta-base-epoch_78 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 78 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_78. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
dccuchile/albert-base-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2022-07-28T18:02:16Z
--- language: en tags: - roberta-base - roberta-base-epoch_79 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 79 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_79. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
dccuchile/albert-base-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2022-07-28T18:02:50Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
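The card above does not include inference code, so here is a minimal, illustrative sketch of tagging German text with this kind of fine-tuned checkpoint; the model identifier below is assumed to be the output directory (or Hub id) where the checkpoint from this card was saved, not a confirmed location.

```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint was saved or pushed under this name.
ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```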
dccuchile/albert-base-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-07-28T18:03:25Z
--- language: en tags: - roberta-base - roberta-base-epoch_80 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 80 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_80. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
dccuchile/albert-base-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-07-28T18:04:26Z
--- language: en tags: - roberta-base - roberta-base-epoch_81 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 81 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_81. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
dccuchile/albert-large-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2022-07-28T18:06:23Z
--- language: en tags: - roberta-base - roberta-base-epoch_83 license: mit datasets: - wikipedia - bookcorpus --- # RoBERTa, Intermediate Checkpoint - Epoch 83 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_83. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
dccuchile/albert-large-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-07-28T18:06:46Z
--- tags: - conversational --- # DialoGPT BaymaxBot
dccuchile/albert-large-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2022-07-28T18:07:20Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Heem/distilroberta-finetuned-wtner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Heem/distilroberta-finetuned-wtner This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0055 - Validation Loss: 0.4521 - Train Precision: 0.7410 - Train Recall: 0.8122 - Train F1: 0.775 - Train Accuracy: 0.9382 - Epoch: 69 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2030, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 1.3579 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.7744 | 0 | | 0.7332 | 0.6231 | 0.3526 | 0.2926 | 0.3198 | 0.8256 | 1 | | 0.5037 | 0.4471 | 0.3927 | 0.3755 | 0.3839 | 0.8575 | 2 | | 0.3675 | 0.3776 | 0.484 | 0.5284 | 0.5052 | 0.8855 | 3 | | 0.2890 | 0.3519 | 0.5149 | 0.6026 | 0.5553 | 0.9039 | 4 | | 0.2367 | 0.3317 | 0.5820 | 0.6507 | 0.6144 | 0.9150 | 5 | | 0.1942 | 0.2970 | 0.6220 | 0.6900 | 0.6542 | 0.9237 | 6 | | 0.1599 | 0.3040 | 0.6375 | 0.6681 | 0.6525 | 0.9217 | 7 | | 0.1281 | 0.3037 | 0.6774 | 0.7336 | 0.7044 | 0.9304 | 8 | | 0.1097 | 0.3127 | 0.708 | 0.7729 | 0.7390 | 0.9309 | 9 | | 0.0915 | 0.3114 | 0.6836 | 0.7642 | 0.7216 | 0.9290 | 10 | | 0.0765 | 0.3190 | 0.7072 | 0.8122 | 0.7561 | 0.9372 | 11 | | 0.0665 | 0.3169 | 0.7154 | 0.7904 | 0.7510 | 0.9353 | 12 | | 0.0543 | 0.3251 | 0.7059 | 0.7860 | 0.7438 | 0.9329 | 13 | | 0.0472 | 0.3307 | 0.7181 | 0.8122 | 0.7623 | 0.9357 | 14 | | 0.0427 | 0.3639 | 0.7148 | 0.7991 | 0.7546 | 0.9357 | 15 | | 0.0380 | 0.3373 | 0.7373 | 0.8210 | 0.7769 | 0.9377 | 16 | | 0.0380 | 0.3422 | 0.7449 | 0.8035 | 0.7731 | 0.9372 | 17 | | 0.0304 | 0.3455 | 0.7530 | 0.8122 | 0.7815 | 0.9386 | 18 | | 0.0271 | 0.3584 | 0.7294 | 0.8122 | 0.7686 | 0.9377 | 19 | | 0.0249 | 0.3661 | 0.7291 | 0.7991 | 0.7625 | 0.9377 | 20 | | 0.0205 | 0.3683 | 0.7352 | 0.8122 | 0.7718 | 0.9391 | 21 | | 0.0212 | 0.3855 | 0.7331 | 0.8035 | 0.7667 | 0.9382 | 22 | | 0.0188 | 0.3814 | 0.7419 | 0.8035 | 0.7715 | 0.9391 | 23 | | 0.0189 | 0.3889 | 0.7352 | 0.8122 | 0.7718 | 0.9357 | 24 | | 0.0161 | 0.3913 | 0.7379 | 0.7991 | 0.7673 | 0.9382 | 25 | | 0.0154 | 0.3872 | 0.7470 | 0.8122 | 0.7782 | 0.9406 | 26 | | 0.0144 | 0.3934 | 0.7326 | 0.8253 | 0.7762 | 0.9401 | 27 | | 0.0154 | 0.4167 | 0.7255 | 0.8079 | 0.7645 | 0.9343 | 28 | | 0.0135 | 0.3976 | 0.7341 | 0.8079 | 0.7692 | 0.9362 | 29 | | 0.0119 | 0.4118 | 0.7510 | 0.8297 | 0.7884 | 0.9382 | 30 | | 0.0103 | 0.4112 | 0.7323 | 0.8122 | 0.7702 | 0.9372 | 31 | | 0.0103 | 0.4172 | 0.7362 | 0.8166 | 0.7743 | 0.9382 | 32 | | 0.0111 | 0.4157 | 0.7283 | 0.8079 | 0.7660 | 0.9382 | 33 | | 0.0103 | 0.4152 | 0.7262 | 0.7991 | 0.7609 | 0.9372 | 34 | | 0.0117 | 0.4090 | 0.7188 | 0.8035 | 0.7588 | 0.9377 | 35 | | 0.0098 | 0.4268 | 0.7302 | 0.8035 | 0.7651 | 0.9367 | 36 | | 0.0082 | 0.4354 | 0.7233 | 0.7991 | 0.7593 | 0.9362 | 37 | | 0.0096 | 0.4298 | 0.7154 | 0.7904 | 0.7510 | 0.9357 | 38 | | 0.0093 | 0.4294 | 0.7273 | 0.8035 | 0.7635 | 0.9362 | 39 | | 0.0084 | 0.4266 | 0.7298 | 0.7904 | 0.7589 | 0.9348 | 40 | | 0.0076 | 0.4230 | 0.7251 | 0.7948 | 0.7583 | 0.9357 | 41 | | 0.0068 | 0.4243 | 0.7075 | 0.7817 | 0.7427 | 0.9329 | 42 | | 0.0080 | 0.4379 | 0.7137 | 0.7729 | 0.7421 | 0.9338 | 43 | | 0.0067 | 0.4361 | 0.7302 | 0.8035 | 0.7651 | 0.9362 | 44 | | 0.0066 | 0.4377 | 0.7341 | 0.8079 | 0.7692 | 0.9367 | 45 | | 0.0056 | 0.4357 | 0.7222 | 0.7948 | 0.7568 | 0.9362 | 46 | | 0.0060 | 0.4393 | 0.7205 | 0.7991 | 0.7578 | 0.9362 | 47 | | 0.0060 | 0.4429 | 0.7194 | 0.7948 | 0.7552 | 0.9357 | 48 | | 0.0054 | 0.4416 | 0.7312 | 0.8079 | 0.7676 | 0.9367 | 49 | | 0.0060 | 0.4413 | 0.7188 | 0.8035 | 0.7588 | 0.9362 | 50 | | 0.0058 | 0.4381 | 0.7344 | 0.8210 | 0.7753 | 0.9377 | 51 | | 0.0063 | 0.4388 | 0.7309 | 0.7948 | 0.7615 | 0.9377 | 52 | | 0.0057 | 0.4402 | 0.7412 | 0.8253 | 0.7810 | 0.9382 | 53 | | 0.0052 | 0.4381 | 0.7362 | 0.8166 | 0.7743 | 0.9377 | 54 | | 0.0049 | 0.4407 | 0.7362 | 0.8166 | 0.7743 | 0.9377 | 55 | | 0.0050 | 0.4394 | 0.7490 | 0.8210 | 0.7833 | 0.9386 | 56 | | 0.0047 | 0.4481 | 0.7460 | 0.8210 | 0.7817 | 0.9382 | 57 | | 0.0052 | 0.4544 | 0.748 | 0.8166 | 0.7808 | 0.9367 | 58 | | 0.0049 | 0.4501 | 0.7430 | 0.8079 | 0.7741 | 0.9362 | 59 | | 0.0050 | 0.4504 | 0.744 | 0.8122 | 0.7766 | 0.9367 | 60 | | 0.0047 | 0.4517 | 0.7312 | 0.8079 | 0.7676 | 0.9372 | 61 | | 0.0049 | 0.4526 | 0.7450 | 0.8166 | 0.7792 | 0.9382 | 62 | | 0.0049 | 0.4534 | 0.7490 | 0.8210 | 0.7833 | 0.9386 | 63 | | 0.0056 | 0.4543 | 0.748 | 0.8166 | 0.7808 | 0.9386 | 64 | | 0.0044 | 0.4522 | 0.7410 | 0.8122 | 0.775 | 0.9382 | 65 | | 0.0047 | 0.4522 | 0.7410 | 0.8122 | 0.775 | 0.9382 | 66 | | 0.0050 | 0.4521 | 0.7410 | 0.8122 | 0.775 | 0.9382 | 67 | | 0.0049 | 0.4521 | 0.7410 | 0.8122 | 0.775 | 0.9382 | 68 | | 0.0055 | 0.4521 | 0.7410 | 0.8122 | 0.775 | 0.9382 | 69 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.9.1 - Datasets 2.4.0 - Tokenizers 0.12.1
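Because this checkpoint was trained with Keras, a minimal loading sketch with the TensorFlow classes is shown below; it assumes the model and tokenizer were pushed to the Hub under the name given in the card, which is not confirmed.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

# Assumption: the checkpoint is available under this identifier.
tokenizer = AutoTokenizer.from_pretrained("Heem/distilroberta-finetuned-wtner")
model = TFAutoModelForTokenClassification.from_pretrained("Heem/distilroberta-finetuned-wtner")

inputs = tokenizer("Example sentence to tag.", return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.math.argmax(logits, axis=-1)[0].numpy()
print([model.config.id2label[int(i)] for i in pred_ids])
```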
dccuchile/albert-tiny-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-07-28T18:53:45Z
--- widget: - text: "Paytm’s Revenue Growth Trajectory To Remain Strong In Q1: Goldman Sachs" - text: "Nifty ends above 16,900, Sensex gains 1,041 pts led by IT, metal, realty" - text: "Amazon reports BLOWOUT earnings, beating revenue estimates and raising Q3 guidance" - text: "Company went through great loss due to lawsuit in Q1" --- ## What is Roberta-Earning-Call-Transcript-Classification Model? Roberta-Earning-Call-Transcript-Classification is a Multi-Label Classification Model trained with Annotated earning call transcript data. Roberta-base model was fine-tuned to train on earning call transcript data. This model could be very helpful in finding Negative, Positive, Litigious, Constraining and Uncertain thing in the sentence. This could be really helpful in analyzing Profit warning of a company. ## What is RoBERTa RoBERTa builds on BERT’s language masking strategy and modifies key hyperparameters in BERT, including removing BERT’s next-sentence pretraining objective, and training with much larger mini-batches and learning rates. RoBERTa was also trained on an order of magnitude more data than BERT, for a longer amount of time. This allows RoBERTa representations to generalize even better to downstream tasks compared to BERT. ## What is Earning Call Transcript? An earnings call is a teleconference, or webcast, in which a public company discusses the financial results of a reporting period. The name comes from earnings per share, the bottom line number in the income statement divided by the number of shares outstanding. Example of Earning call Transcipt: https://www.fool.com/earnings/call-transcripts/2022/04/29/apple-aapl-q2-2022-earnings-call-transcript Scraped 10 years of earning call transcript data for 10 companies like Apple, google, microsoft, Nvidia, Amazon, Intel, Cisco etc. Annotate the data in various categories of sentences like Negative, Positive, Litigious, Constraining and Uncertainty And then used Loughran-McDonald sentiment lexicon and Use FinancialPhraseBank [Malo, P., Sinha, A., Korhonen, P., Wallenius, J., & Takala, P. (2014). Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Science and Technology, 65(4), 782-796.] for data annotation. ## Hyperparameters | Parameter | | | ----------------- | :---: | | Learning rate | 1e-5 | | Epochs | 12 | | Max Seq Length | 240 | | Batch size | 128 | ## Results Best Result of `Micro F1` - 82.8% ## Usage ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("NLPScholars/Roberta-Earning-Call-Transcript-Classification") model = AutoModelForSequenceClassification.from_pretrained("NLPScholars/Roberta-Earning-Call-Transcript-Classification") ``` # Contributors * Sumit Ranjan- [email protected], * Aanchal Varma- [email protected], * Akshul Mittal- [email protected]
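The usage snippet above only loads the model and tokenizer; a rough sketch of multi-label inference with those same `tokenizer` and `model` objects might look like the following (the 0.5 decision threshold is an assumption, not a value reported by the authors).

```python
import torch

text = "Company went through great loss due to lawsuit in Q1"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=240)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: score each label independently with a sigmoid.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```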
dccuchile/albert-xxlarge-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-07-28T21:30:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: movieHunt3-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # movieHunt3-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 95 | 0.0462 | | No log | 2.0 | 190 | 0.0067 | | No log | 3.0 | 285 | 0.0028 | | No log | 4.0 | 380 | 0.0018 | | No log | 5.0 | 475 | 0.0014 | | 0.1098 | 6.0 | 570 | 0.0012 | | 0.1098 | 7.0 | 665 | 0.0011 | | 0.1098 | 8.0 | 760 | 0.0010 | | 0.1098 | 9.0 | 855 | 0.0010 | | 0.1098 | 10.0 | 950 | 0.0009 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
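As a rough illustration only, the hyperparameters listed above map onto `TrainingArguments` roughly as follows; the output directory is a placeholder, the dataset and label setup are omitted, and this is not the authors' actual training script.

```python
from transformers import TrainingArguments

# Sketch of the settings listed in the card, for orientation only.
args = TrainingArguments(
    output_dir="movieHunt3-ner",        # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",        # assumption: the card reports one validation loss per epoch
)
```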
dccuchile/albert-base-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
586
2022-07-28T22:27:55Z
# Tranception model This Hugging Face Hub repo contains the model checkpoint for the Tranception model as described in our paper ["Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval"](https://arxiv.org/abs/2205.13760). The official GitHub repository can be accessed [here](https://github.com/OATML-Markslab/Tranception). This project is a joint collaboration between the [Marks lab](https://www.deboramarkslab.com/) and the [OATML group](https://oatml.cs.ox.ac.uk/). ## Abstract The ability to accurately model the fitness landscape of protein sequences is critical to a wide range of applications, from quantifying the effects of human variants on disease likelihood, to predicting immune-escape mutations in viruses and designing novel biotherapeutic proteins. Deep generative models of protein sequences trained on multiple sequence alignments have been the most successful approaches so far to address these tasks. The performance of these methods is however contingent on the availability of sufficiently deep and diverse alignments for reliable training. Their potential scope is thus limited by the fact many protein families are hard, if not impossible, to align. Large language models trained on massive quantities of non-aligned protein sequences from diverse families address these problems and show potential to eventually bridge the performance gap. We introduce Tranception, a novel transformer architecture leveraging autoregressive predictions and retrieval of homologous sequences at inference to achieve state-of-the-art fitness prediction performance. Given its markedly higher performance on multiple mutants, robustness to shallow alignments and ability to score indels, our approach offers significant gain of scope over existing approaches. To enable more rigorous model testing across a broader range of protein families, we develop ProteinGym -- an extensive set of multiplexed assays of variant effects, substantially increasing both the number and diversity of assays compared to existing benchmarks. ## License This project is available under the MIT license. ## Reference If you use Tranception or other files provided through our GitHub repository, please cite the following paper: ``` Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML. ``` ## Links Pre-print: https://arxiv.org/abs/2205.13760 GitHub: https://github.com/OATML-Markslab/Tranception
dccuchile/albert-xlarge-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
91
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - metrics: - type: mean_reward value: 13.50 +/- 7.43 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_pong type: atari_pong --- A(n) **APPO** model trained on the **atari_pong** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
dccuchile/albert-xxlarge-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
2022-07-28T23:08:32Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - metrics: - type: mean_reward value: 3848.00 +/- 308.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_beamrider type: atari_beamrider --- A(n) **APPO** model trained on the **atari_beamrider** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
dccuchile/bert-base-spanish-wwm-cased-finetuned-mldoc
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2022-07-28T23:10:36Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - metrics: - type: mean_reward value: 30.20 +/- 23.45 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_breakout type: atari_breakout --- A(n) **APPO** model trained on the **atari_breakout** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
dccuchile/distilbert-base-spanish-uncased-finetuned-pawsx
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: roberta-base-finetuned-jigsaw-toxic results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-jigsaw-toxic This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0859 - Accuracy: 0.9747 - F1: 0.9746 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1179 | 1.0 | 2116 | 0.0982 | 0.9694 | 0.9694 | | 0.0748 | 2.0 | 4232 | 0.0859 | 0.9747 | 0.9746 | | 0.0582 | 3.0 | 6348 | 0.0916 | 0.9750 | 0.9750 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
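A minimal inference sketch for a toxicity classifier of this kind is shown below; the model identifier is assumed to be where this fine-tuned checkpoint was saved, not a confirmed Hub id.

```python
from transformers import pipeline

# Assumption: the checkpoint from this card was saved or pushed under this name.
toxic_clf = pipeline("text-classification", model="roberta-base-finetuned-jigsaw-toxic")
print(toxic_clf("You are a wonderful person."))
print(toxic_clf("I will find you and hurt you."))
```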
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta_large-chunking_0728_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_large-chunking_0728_v2 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5270 - Precision: 0.6228 - Recall: 0.6467 - F1: 0.6345 - Accuracy: 0.8153 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 125 | 0.5667 | 0.4931 | 0.5415 | 0.5162 | 0.7397 | | No log | 2.0 | 250 | 0.4839 | 0.5484 | 0.5894 | 0.5682 | 0.7874 | | No log | 3.0 | 375 | 0.4822 | 0.5997 | 0.6341 | 0.6164 | 0.8085 | | 0.4673 | 4.0 | 500 | 0.5117 | 0.6023 | 0.6373 | 0.6193 | 0.8120 | | 0.4673 | 5.0 | 625 | 0.5270 | 0.6228 | 0.6467 | 0.6345 | 0.8153 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
dccuchile/distilbert-base-spanish-uncased
[ "pytorch", "distilbert", "fill-mask", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
670
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - skript metrics: - precision - recall - f1 - accuracy model-index: - name: wikineural-multilingual-ner-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: skript type: skript config: myscript split: train args: myscript metrics: - name: Precision type: precision value: 0.9007335298553506 - name: Recall type: recall value: 0.9301946902654867 - name: F1 type: f1 value: 0.9152270827528559 - name: Accuracy type: accuracy value: 0.9653644982020269 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wikineural-multilingual-ner-finetuned-ner This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on the skript dataset. It achieves the following results on the evaluation set: - Loss: 0.1243 - Precision: 0.9007 - Recall: 0.9302 - F1: 0.9152 - Accuracy: 0.9654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 298 | 0.1179 | 0.8975 | 0.8981 | 0.8978 | 0.9592 | | 0.104 | 2.0 | 596 | 0.1161 | 0.9051 | 0.9201 | 0.9126 | 0.9648 | | 0.104 | 3.0 | 894 | 0.1243 | 0.9007 | 0.9302 | 0.9152 | 0.9654 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-07-29T04:34:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 244.25 +/- 15.32 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
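Since the usage section above is still a TODO, here is a hedged sketch of how such a checkpoint is typically loaded and evaluated with `huggingface_sb3`; the repo id and filename are assumptions, not values taken from this card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed repo id and filename; replace with the actual location of this checkpoint.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```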
CennetOguz/distilbert-base-uncased-finetuned-recipe
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5366 - Wer: 0.3452 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5499 | 2.01 | 500 | 1.9780 | 0.9933 | | 0.7517 | 4.02 | 1000 | 0.4654 | 0.4720 | | 0.2953 | 6.02 | 1500 | 0.4202 | 0.4049 | | 0.1809 | 8.03 | 2000 | 0.4276 | 0.3759 | | 0.1335 | 10.04 | 2500 | 0.4458 | 0.3774 | | 0.107 | 12.05 | 3000 | 0.4559 | 0.3707 | | 0.0923 | 14.06 | 3500 | 0.4607 | 0.3659 | | 0.0753 | 16.06 | 4000 | 0.4699 | 0.3531 | | 0.0658 | 18.07 | 4500 | 0.4507 | 0.3588 | | 0.0569 | 20.08 | 5000 | 0.5089 | 0.3532 | | 0.0493 | 22.09 | 5500 | 0.5481 | 0.3515 | | 0.043 | 24.1 | 6000 | 0.5066 | 0.3528 | | 0.0388 | 26.1 | 6500 | 0.5418 | 0.3534 | | 0.034 | 28.11 | 7000 | 0.5566 | 0.3524 | | 0.03 | 30.12 | 7500 | 0.4994 | 0.3437 | | 0.0274 | 32.13 | 8000 | 0.5588 | 0.3520 | | 0.0239 | 34.14 | 8500 | 0.5328 | 0.3458 | | 0.0212 | 36.14 | 9000 | 0.5221 | 0.3467 | | 0.0186 | 38.15 | 9500 | 0.5366 | 0.3452 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
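A short, illustrative transcription sketch for this kind of fine-tuned ASR checkpoint; the model identifier and audio path are placeholders, assumed to point at the output of the run described above.

```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint lives under this local path or Hub id.
asr = pipeline("automatic-speech-recognition", model="wav2vec2-base-timit-demo-google-colab")
# The underlying wav2vec2-base feature extractor works at 16 kHz; the pipeline resamples input audio.
print(asr("path/to/audio.wav"))
```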
Certified-Zoomer/DialoGPT-small-rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T04:42:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.7914171656686627 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5603 - Accuracy: 0.7914 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.67 | 0.99 | 70 | 0.7920 | 0.7265 | | 0.5856 | 1.99 | 140 | 0.6192 | 0.7804 | | 0.5612 | 2.99 | 210 | 0.5603 | 0.7914 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Chaddmckay/Cdm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
This model can be used in the Kaggle competition - https://www.kaggle.com/competitions/feedback-prize-effectivenes Data used to train the MLM model - https://www.kaggle.com/competitions/feedback-prize-2021
Chae/botman
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-07-29T05:07:17Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 674.59 +/- 89.58 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
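As with the PPO entries, the usage block above is a TODO. A sketch for the PyBullet case differs mainly in that the environment must be registered by importing pybullet_envs, and many PyBullet agents are trained behind a VecNormalize wrapper whose statistics (often shipped as vec_normalize.pkl) must also be loaded to reproduce the reported reward. The repo id and filename below are placeholders.

```python
import gym
import pybullet_envs  # noqa: F401 - registers AntBulletEnv-v0 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the trained checkpoint from the Hub (repo id and filename are assumptions).
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Evaluate the loaded policy; without the original VecNormalize statistics the score may be lower.
env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```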
Chaewon/mnmt_decoder_en
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-07-29T05:41:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilBERT_bio_pv_superset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBERT_bio_pv_superset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2328 - Precision: 0.5462 - Recall: 0.5325 - F1: 0.5393 - Accuracy: 0.9495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0964 | 1.0 | 5467 | 0.1593 | 0.4625 | 0.3682 | 0.4100 | 0.9416 | | 0.1918 | 2.0 | 10934 | 0.1541 | 0.4796 | 0.4658 | 0.4726 | 0.9436 | | 0.0394 | 3.0 | 16401 | 0.1508 | 0.5349 | 0.4744 | 0.5028 | 0.9482 | | 0.1207 | 4.0 | 21868 | 0.1615 | 0.5422 | 0.4953 | 0.5177 | 0.9490 | | 0.0221 | 5.0 | 27335 | 0.1827 | 0.5377 | 0.5018 | 0.5191 | 0.9487 | | 0.0629 | 6.0 | 32802 | 0.1874 | 0.5479 | 0.5130 | 0.5299 | 0.9493 | | 0.0173 | 7.0 | 38269 | 0.2025 | 0.5388 | 0.5323 | 0.5356 | 0.9488 | | 0.2603 | 8.0 | 43736 | 0.2148 | 0.5437 | 0.5397 | 0.5417 | 0.9493 | | 0.0378 | 9.0 | 49203 | 0.2323 | 0.5430 | 0.5194 | 0.5310 | 0.9489 | | 0.031 | 10.0 | 54670 | 0.2328 | 0.5462 | 0.5325 | 0.5393 | 0.9495 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Chaewon/mnmt_decoder_en_gpt2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T05:42:38Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 249.89 +/- 15.90 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ChaitanyaU/FineTuneLM
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T06:23:18Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_2 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9974489808082581 --- # pond_image_classification_2 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
Chakita/KROBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-07-29T06:50:48Z
# ELECTRA discriminator small - pretrained with large Korean corpus datasets (30GB) - 13.7M model parameters (follows the google/electra-small-discriminator config) - 32,000 vocab size - trained for 1,000,000 steps - built with the [lassl](https://github.com/lassl/lassl) framework pretrain-data ┣ korean_corpus.txt ┣ kowiki_latest.txt ┣ modu_dialogue_v1.2.txt ┣ modu_news_v1.1.txt ┣ modu_news_v2.0.txt ┣ modu_np_2021_v1.0.txt ┣ modu_np_v1.1.txt ┣ modu_spoken_v1.2.txt ┗ modu_written_v1.0.txt
Chakita/KannadaBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-07-29T06:52:47Z
--- library_name: stable-baselines3 tags: - Walker2DBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 21.00 +/- 3.61 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Walker2DBulletEnv-v0 type: Walker2DBulletEnv-v0 --- # **A2C** Agent playing **Walker2DBulletEnv-v0** This is a trained model of an **A2C** agent playing **Walker2DBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Chalponkey/DialoGPT-small-Barry
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-07-29T07:02:53Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_3 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9974489808082581 --- # pond_image_classification_3 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
Chan/distilgpt2-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_4 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9783163070678711 --- # pond_image_classification_4 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
Chandanbhat/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - BramVanroy/hebban-reviews language: - nl license: mit metrics: - accuracy - f1 - precision - qwk - recall model-index: - name: bert-base-dutch-cased-hebban-reviews5 results: - dataset: config: filtered_rating name: BramVanroy/hebban-reviews - filtered_rating - 2.0.0 revision: 2.0.0 split: test type: BramVanroy/hebban-reviews metrics: - name: Test accuracy type: accuracy value: 0.6071005917159763 - name: Test f1 type: f1 value: 0.6050857981600024 - name: Test precision type: precision value: 0.6167698094913165 - name: Test qwk type: qwk value: 0.7455315835020534 - name: Test recall type: recall value: 0.6071005917159763 task: name: sentiment analysis type: text-classification tags: - sentiment-analysis - dutch - text widget: - text: Wauw, wat een leuk boek! Ik heb me er er goed mee vermaakt. - text: Nee, deze vond ik niet goed. De auteur doet zijn best om je als lezer mee te trekken in het verhaal maar mij overtuigt het alleszins niet. - text: Ik vind het niet slecht maar de schrijfstijl trekt me ook niet echt aan. Het wordt een beetje saai vanaf het vijfde hoofdstuk --- # bert-base-dutch-cased-hebban-reviews5 # Dataset - dataset_name: BramVanroy/hebban-reviews - dataset_config: filtered_rating - dataset_revision: 2.0.0 - labelcolumn: review_rating0 - textcolumn: review_text_without_quotes # Training - optim: adamw_hf - learning_rate: 5e-05 - per_device_train_batch_size: 64 - per_device_eval_batch_size: 64 - gradient_accumulation_steps: 1 - max_steps: 5001 - save_steps: 500 - metric_for_best_model: qwk # Best checkpoint based on validation - best_metric: 0.736704788874575 - best_model_checkpoint: trained/hebban-reviews5/bert-base-dutch-cased/checkpoint-2000 # Test results of best checkpoint - accuracy: 0.6071005917159763 - f1: 0.6050857981600024 - precision: 0.6167698094913165 - qwk: 0.7455315835020534 - recall: 0.6071005917159763 ## Confusion matrix ![cfm](fig/test_confusion_matrix.png) ## Normalized confusion matrix ![norm cfm](fig/test_confusion_matrix_norm.png) # Environment - cuda_capabilities: 8.0; 8.0 - cuda_device_count: 2 - cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB - finetuner_commit: 8159b4c1d5e66b36f68dd263299927ffb8670ebd - platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28 - python_version: 3.9.5 - torch_version: 1.10.0 - transformers_version: 4.21.0
Cheatham/xlm-roberta-large-finetuned3
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
22
2022-07-29T08:00:24Z
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png language: - en license: cc-by-4.0 tags: - conversational - transformers datasets: - AfriWOZ metrics: - perplexity widget: - text: "How I fit chop for here?" --- ## DialoGPT_AfriWOZ (Pidgin) This is a fine-tuned model of DialoGPT (small) on the AfriWOZ dataset. It is intended to be used as a conversational system in the Nigerian Pidgin English language. The dataset it's trained on is limited in scope, as it covers only certain domains such as restaurants, hotel, taxi, and booking. The perplexity achieved on the validation set is 38.52. * Generation example from an interactive environment: |Role | Response | |------|------------| |User | I hear say restaurant dey here. | |Bot | I dey find restaurant. | |User | Abeg you fit tell me which kind chop dey? | |Bot | I no dey sure right now. | |User | You do well. Thank you. | |Bot | I dey find restaurant. | Please find the information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT). The paper for this work can be found on arXiv: [https://arxiv.org/pdf/2204.08083.pdf](https://arxiv.org/pdf/2204.08083.pdf) ### How to use Now we are ready to try out how the model works as a chatting partner! ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_afriwoz_pidgin") model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_afriwoz_pidgin") # Let's chat for 5 lines for step in range(5): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) # pretty print last output tokens from bot print("DialoGPT_pidgin_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
Check/vaw2tmp
[ "tensorboard" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T08:15:04Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/selfie2anime metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-ema-anime-256 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/selfie2anime` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08 - lr_scheduler: cosine - lr_warmup_steps: 500 - ema_inv_gamma: 1.0 - ema_power: 0.75 - ema_max_decay: 0.9999 - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/mrm8488/ddpm-ema-anime-256/tensorboard?#scalars) > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Q Blocks](https://www.qblocks.cloud/)
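The "How to use" block in the card above is still a TODO; a minimal sketch with the 🤗 Diffusers DDPMPipeline might look like the following. The repo id is inferred from the TensorBoard link in the card, and the attribute names assume a reasonably recent diffusers release.

```python
import torch
from diffusers import DDPMPipeline

# Load the unconditional DDPM pipeline (repo id inferred from the card's TensorBoard link).
pipeline = DDPMPipeline.from_pretrained("mrm8488/ddpm-ema-anime-256")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Sample one 256x256 anime-style image and save it to disk.
result = pipeline(batch_size=1, num_inference_steps=1000)
result.images[0].save("ddpm_anime_sample.png")
```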
CheonggyeMountain-Sherpa/kogpt-trinity-poem
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
Access to model dquisi/storySpanish is restricted and you are not in the authorized list. Visit https://huggingface.co/dquisi/storySpanish to ask for access.
CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper
[ "ko", "gpt2", "license:cc-by-nc-sa-4.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T08:19:36Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_6 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9948979616165161 --- # pond_image_classification_6 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
Chertilasus/main
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T08:32:27Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_7 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9936224222183228 --- # pond_image_classification_7 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
Chester/traffic-rec
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T08:37:24Z
--- license: mit tags: - generated_from_trainer model-index: - name: vgdunkey-vgdunkeybot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vgdunkey-vgdunkeybot This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001372 - train_batch_size: 1 - eval_batch_size: 8 - seed: 2843356107 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.9.1+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
Chikita1/www_stash_stock
[ "license:bsd-3-clause-clear" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T08:39:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8666666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3236 - Accuracy: 0.8667 - F1: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
ChristopherA08/IndoELECTRA
[ "pytorch", "electra", "pretraining", "id", "dataset:oscar", "transformers" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-07-29T10:16:59Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4772 - Wer: 0.2821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.6949 | 0.87 | 500 | 2.4599 | 0.9999 | | 0.9858 | 1.73 | 1000 | 0.5249 | 0.4674 | | 0.4645 | 2.6 | 1500 | 0.4604 | 0.3900 | | 0.3273 | 3.46 | 2000 | 0.3939 | 0.3612 | | 0.2474 | 4.33 | 2500 | 0.4150 | 0.3560 | | 0.2191 | 5.19 | 3000 | 0.3855 | 0.3344 | | 0.1662 | 6.06 | 3500 | 0.3779 | 0.3258 | | 0.1669 | 6.92 | 4000 | 0.4841 | 0.3286 | | 0.151 | 7.79 | 4500 | 0.4182 | 0.3219 | | 0.1175 | 8.65 | 5000 | 0.4194 | 0.3107 | | 0.1103 | 9.52 | 5500 | 0.4256 | 0.3129 | | 0.1 | 10.38 | 6000 | 0.4352 | 0.3089 | | 0.0949 | 11.25 | 6500 | 0.4649 | 0.3160 | | 0.0899 | 12.11 | 7000 | 0.4472 | 0.3065 | | 0.0787 | 12.98 | 7500 | 0.4763 | 0.3128 | | 0.0742 | 13.84 | 8000 | 0.4321 | 0.3034 | | 0.067 | 14.71 | 8500 | 0.4562 | 0.3076 | | 0.063 | 15.57 | 9000 | 0.4541 | 0.3102 | | 0.0624 | 16.44 | 9500 | 0.5113 | 0.3040 | | 0.0519 | 17.3 | 10000 | 0.4925 | 0.3008 | | 0.0525 | 18.17 | 10500 | 0.4710 | 0.2987 | | 0.046 | 19.03 | 11000 | 0.4781 | 0.2977 | | 0.0455 | 19.9 | 11500 | 0.4572 | 0.2969 | | 0.0394 | 20.76 | 12000 | 0.5256 | 0.2966 | | 0.0373 | 21.63 | 12500 | 0.4723 | 0.2921 | | 0.0375 | 22.49 | 13000 | 0.4640 | 0.2847 | | 0.0334 | 23.36 | 13500 | 0.4740 | 0.2917 | | 0.0304 | 24.22 | 14000 | 0.4817 | 0.2874 | | 0.0291 | 25.09 | 14500 | 0.4722 | 0.2896 | | 0.0247 | 25.95 | 15000 | 0.4765 | 0.2870 | | 0.0223 | 26.82 | 15500 | 0.4728 | 0.2821 | | 0.0223 | 27.68 | 16000 | 0.4690 | 0.2834 | | 0.0207 | 28.55 | 16500 | 0.4706 | 0.2825 | | 0.0186 | 29.41 | 17000 | 0.4772 | 0.2821 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
Chungu424/DATA
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T12:17:21Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 widget: - text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.","Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."] example_title: Not Equivalent - text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."] example_title: Equivalent model-index: - name: platzi-distilroberta-base-mrpc-glue-omar-espejel results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: train args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8431372549019608 - name: F1 type: f1 value: 0.8861209964412811 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-distilroberta-base-mrpc-glue-omar-espejel This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.6332 - Accuracy: 0.8431 - F1: 0.8861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5076 | 1.09 | 500 | 0.7464 | 0.8137 | 0.8671 | | 0.3443 | 2.18 | 1000 | 0.6332 | 0.8431 | 0.8861 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
CoShin/XLM-roberta-large_ko_en_nil_sts
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: silviacamplani/distilbert-uncase-direct-finetuning-ai-ner_3labels results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # silviacamplani/distilbert-uncase-direct-finetuning-ai-ner_3labels This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6593 - Validation Loss: 0.6130 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9721 | 1.8113 | 0 | | 1.6564 | 1.5052 | 1 | | 1.3640 | 1.2332 | 2 | | 1.1078 | 0.9996 | 3 | | 0.9158 | 0.8249 | 4 | | 0.7850 | 0.7188 | 5 | | 0.7135 | 0.6595 | 6 | | 0.6822 | 0.6310 | 7 | | 0.6394 | 0.6171 | 8 | | 0.6593 | 0.6130 | 9 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
CoderEFE/DialoGPT-medium-marx
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - information retrieval - reranking license: apache-2.0 --- # Model Card for NQ Reranker in Re2G # Model Details > The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output. > >It has been previously established that results from initial retrieval can be greatly improved through the use of a reranker. Therefore we hypothesized that natural language generation systems incorporating retrieval can benefit from reranking. > >In addition to improving the ranking of passages returned from DPR, a reranker can be used after merging the results of multiple retrieval methods with incomparable scores. For example, the scores returned by BM25 are not comparable to the inner products from DPR. Using the scores from a reranker, we can find the top-k documents from the union of DPR and BM25 results. The figure below illustrates our extension of RAG with a reranker. We call our system Re2G (*Re*trieve, *Re*rank, *G*enerate). <img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%"> ## Training, Evaluation and Inference The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g). ## Usage The best way to use the model is by adapting the [reranker_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/reranker/reranker_apply.py) ## Citation ``` @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. 
We make our code available as open source.", } ``` ## Model Description The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf): > As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source. - **Developed by:** IBM - **Shared by [Optional]:** IBM - **Model type:** Query/Passage Reranker - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Parent Model:** [BERT-base trained on MSMARCO](https://huggingface.co/nboost/pt-bert-base-uncased-msmarco) - **Resources for more information:** - [GitHub Repo](https://github.com/IBM/kgi-slot-filling) - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf) # Uses ## Direct Use This model can be used for the task of reranking passage results for a question. # Citation **BibTeX:** ```bibtex @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. 
To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ```
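The card above defers usage to reranker_apply.py in the Re2G repository. As a rough, unofficial sketch, the reranker can be driven like a standard BERT cross-encoder that scores (question, passage) pairs; the repo id below and the assumption that the checkpoint loads as a sequence-classification head are mine, not confirmed by the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id for the NQ reranker; replace with the actual checkpoint you are using.
model_name = "ibm/re2g-reranker-nq"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

question = "who wrote the declaration of independence"
passages = [
    "The Declaration of Independence was drafted primarily by Thomas Jefferson.",
    "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
]

# Score every (question, passage) pair in one batch and rank passages by score.
inputs = tokenizer([question] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Which logit column corresponds to "relevant" depends on the head; using the last column is an assumption.
scores = logits[:, -1]
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}\t{passage}")
```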
CoffeeAddict93/gpt2-medium-modest-proposal
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - information retrieval - reranking license: apache-2.0 --- # Model Card for NQ Context Encoder in Re2G # Model Details > The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output. <img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%"> ## Training, Evaluation and Inference The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g). ## Usage The best way to use the model is by adapting the [dpr_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/dpr/dpr_apply.py) ## Citation ``` @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ``` ## Model Description The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf): > As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. 
Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source. - **Developed by:** IBM - **Shared by [Optional]:** IBM - **Model type:** Query/Passage Reranker - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Parent Model:** [dpr-question_encoder-multiset-base](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - **Resources for more information:** - [GitHub Repo](https://github.com/IBM/kgi-slot-filling) - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf) # Uses ## Direct Use This model can be used for the task of encoding a passage to a vector, this passage or context vector should then be indexed into an Approximate Nearest Neighbors index. It must be used in combination with a query or question encoder that encodes a question to a query vector to search the index. # Citation **BibTeX:** ```bibtex @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ```
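Usage here is likewise delegated to dpr_apply.py. A rough sketch of encoding passages with the standard DPR context-encoder classes from transformers is shown below; the repo id and the assumption that the checkpoint loads with these classes are mine, and in a real setup the resulting vectors would be written to an approximate-nearest-neighbor index (e.g. FAISS) rather than inspected directly.

```python
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

# Assumed repo id for the NQ context encoder; replace with the actual checkpoint name.
model_name = "ibm/re2g-ctx-encoder-nq"
tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(model_name)
encoder = DPRContextEncoder.from_pretrained(model_name)
encoder.eval()

passages = [
    "Thomas Jefferson was the principal author of the Declaration of Independence.",
    "The Great Barrier Reef is the world's largest coral reef system.",
]

# Encode each passage into a dense vector; these vectors would normally be added to a FAISS index.
inputs = tokenizer(passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**inputs).pooler_output
print(embeddings.shape)  # (num_passages, hidden_size), e.g. (2, 768)
```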
CohleM/bert-nepali-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-korean-demo-colab_epoch15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-korean-demo-colab_epoch15 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4133 - Wer: 0.3801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 16.9017 | 0.8 | 400 | 4.6273 | 1.0 | | 4.4633 | 1.6 | 800 | 4.4419 | 1.0 | | 4.2262 | 2.4 | 1200 | 3.8477 | 0.9994 | | 2.4402 | 3.21 | 1600 | 1.3564 | 0.8111 | | 1.3499 | 4.01 | 2000 | 0.9070 | 0.6664 | | 0.9922 | 4.81 | 2400 | 0.7496 | 0.6131 | | 0.8271 | 5.61 | 2800 | 0.6240 | 0.5408 | | 0.6918 | 6.41 | 3200 | 0.5506 | 0.5026 | | 0.6015 | 7.21 | 3600 | 0.5303 | 0.4935 | | 0.5435 | 8.02 | 4000 | 0.4951 | 0.4696 | | 0.4584 | 8.82 | 4400 | 0.4677 | 0.4432 | | 0.4258 | 9.62 | 4800 | 0.4602 | 0.4307 | | 0.3906 | 10.42 | 5200 | 0.4456 | 0.4195 | | 0.3481 | 11.22 | 5600 | 0.4265 | 0.4062 | | 0.3216 | 12.02 | 6000 | 0.4241 | 0.4046 | | 0.2908 | 12.83 | 6400 | 0.4106 | 0.3941 | | 0.2747 | 13.63 | 6800 | 0.4146 | 0.3855 | | 0.2633 | 14.43 | 7200 | 0.4133 | 0.3801 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
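For readers who want to see how the hyperparameters listed above map onto the Hugging Face Trainer, here is an illustrative sketch; it is not the author's actual training script, and the output directory name is arbitrary.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments;
# not the author's actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-korean-demo-colab_epoch15",
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective total train batch size of 8
    warmup_steps=500,
    num_train_epochs=15,
    seed=42,
    fp16=True,                      # "Native AMP" mixed-precision training
)
```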
ComCom/gpt2-medium
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - generated_from_trainer model-index: - name: ViT-BERT-Chess-V4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT-BERT-Chess-V4 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.705 | 1.0 | 3895 | 3.5686 | | 3.5139 | 2.0 | 7790 | 3.4288 | | 3.4156 | 3.0 | 11685 | 3.3663 | | 3.3661 | 4.0 | 15580 | 3.3331 | | 3.3352 | 5.0 | 19475 | 3.3213 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu116 - Datasets 2.3.2 - Tokenizers 0.12.1
ComCom/gpt2
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="andres-hsn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
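The snippet above relies on helper functions from the course repository (load_from_hub, evaluate_agent). As a hedged, helper-free sketch, the function below shows the greedy-policy rollout that such an evaluation performs, assuming model["qtable"] is a (n_states, n_actions) array and the classic gym API.

```python
# Hedged sketch of the greedy evaluation behind evaluate_agent(), assuming
# model["qtable"] is a (n_states, n_actions) array and the classic gym API
# (env.reset() returns the state, env.step() returns a 4-tuple).
import numpy as np

def greedy_rollout(env, qtable, max_steps=99):
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = int(np.argmax(qtable[state]))  # always exploit the learned Q-values
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```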
ComCom-Dev/gpt2-bible-test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.54 +/- 2.72 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="andres-hsn/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
cometrain/neurotitle-rugpt3-small
[ "pytorch", "gpt2", "text-generation", "ru", "en", "dataset:All-NeurIPS-Papers-Scraper", "transformers", "Cometrain AutoCode", "Cometrain AlphaML", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: apache-2.0 tags: - summarisation - generated_from_trainer metrics: - rouge model-index: - name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3228 - Rouge1: 56.5706 - Rouge2: 43.0906 - Rougel: 47.9957 - Rougelsum: 53.417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.3226 | 1.0 | 223 | 0.3225 | 55.7639 | 41.9414 | 46.9804 | 52.5639 | | 0.262 | 2.0 | 446 | 0.3198 | 55.7522 | 42.0929 | 46.8388 | 52.6659 | | 0.2153 | 3.0 | 669 | 0.3195 | 55.7091 | 42.2111 | 47.2641 | 52.5765 | | 0.1805 | 4.0 | 892 | 0.3164 | 55.8115 | 42.5536 | 47.3529 | 52.7672 | | 0.1527 | 5.0 | 1115 | 0.3203 | 56.8658 | 43.4238 | 48.2268 | 53.8136 | | 0.14 | 6.0 | 1338 | 0.3234 | 55.7138 | 41.8562 | 46.8362 | 52.5201 | | 0.1252 | 7.0 | 1561 | 0.3228 | 56.5706 | 43.0906 | 47.9957 | 53.417 | | 0.1229 | 8.0 | 1784 | 0.3228 | 56.5706 | 43.0906 | 47.9957 | 53.417 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
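A hedged usage sketch: running the fine-tuned encoder-decoder through the summarization pipeline. The repo id below is a placeholder for wherever this checkpoint is actually hosted.

```python
# Hedged usage sketch via the summarization pipeline; the repo id is a placeholder
# for wherever this fine-tuned checkpoint is hosted.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-namespace/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-extracted-sumy",
)
article = "Replace this with the text of a BBC news article..."
print(summarizer(article, max_length=128, min_length=30)[0]["summary_text"])
```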
CouchCat/ma_ner_v6_distil
[ "pytorch", "distilbert", "token-classification", "en", "transformers", "ner", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - information retrieval - reranking license: apache-2.0 --- # Model Card for T-REx Context Encoder in Re2G # Model Details > The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output. <img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%"> ## Training, Evaluation and Inference The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g). ## Usage The best way to use the model is by adapting the [dpr_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/dpr/dpr_apply.py) ## Citation ``` @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ``` ## Model Description The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf): > As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. 
Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source. - **Developed by:** IBM - **Shared by [Optional]:** IBM - **Model type:** Query/Passage Reranker - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Parent Model:** [dpr-question_encoder-multiset-base](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - **Resources for more information:** - [GitHub Repo](https://github.com/IBM/kgi-slot-filling) - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf) # Uses ## Direct Use This model can be used for the task of encoding a passage to a vector, this passage or context vector should then be indexed into an Approximate Nearest Neighbors index. It must be used in combination with a query or question encoder that encodes a question to a query vector to search the index. # Citation **BibTeX:** ```bibtex @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ```
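As a hedged illustration of the indexing step mentioned under Direct Use, the sketch below builds an inner-product FAISS index over passage vectors; FAISS is only one possible ANN library and is not prescribed by this card, and the random arrays stand in for real encoder output.

```python
# Hedged sketch of the indexing step under "Direct Use": store passage vectors in an
# inner-product FAISS index (FAISS is one possible ANN library, not prescribed here).
import numpy as np
import faiss

dim = 768                                   # hidden size of a BERT-base DPR encoder
index = faiss.IndexFlatIP(dim)              # inner product matches DPR's dot-product scoring
passage_vectors = np.random.rand(1000, dim).astype("float32")  # stand-in for encoder output
index.add(passage_vectors)

query_vector = np.random.rand(1, dim).astype("float32")        # from the question encoder
scores, ids = index.search(query_vector, 5)                    # top-5 passages
print(ids[0], scores[0])
```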
CoveJH/ConBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-29T18:21:58Z
--- tags: - information retrieval - reranking license: apache-2.0 --- # Model Card for TriviaQA Reranker in Re2G # Model Details > The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output. > >It has been previously established that results from initial retrieval can be greatly improved through the use of a reranker. Therefore we hypothesized that natural language generation systems incorporating retrieval can benefit from reranking. > >In addition to improving the ranking of passages returned from DPR, a reranker can be used after merging the results of multiple retrieval methods with incomparable scores. For example, the scores returned by BM25 are not comparable to the inner products from DPR. Using the scores from a reranker, we can find the top-k documents from the union of DPR and BM25 results. The figure below illustrates our extension of RAG with a reranker. We call our system Re2G (*Re*trieve, *Re*rank, *G*enerate). <img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%"> ## Training, Evaluation and Inference The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g). ## Usage The best way to use the model is by adapting the [reranker_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/reranker/reranker_apply.py) ## Citation ``` @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. 
We make our code available as open source.", } ``` ## Model Description The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf): > As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source. - **Developed by:** IBM - **Shared by [Optional]:** IBM - **Model type:** Query/Passage Reranker - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Parent Model:** [BERT-base trained on MSMARCO](https://huggingface.co/nboost/pt-bert-base-uncased-msmarco) - **Resources for more information:** - [GitHub Repo](https://github.com/IBM/kgi-slot-filling) - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf) # Uses ## Direct Use This model can be used for the task of reranking passage results for a question. # Citation **BibTeX:** ```bibtex @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. 
To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ```
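A hedged sketch of the reranking step: scoring (question, passage) pairs with a cross-encoder. The repo id and the "relevant" logit index are assumptions, and the checkpoint is assumed to load as a BERT sequence-classification model like its parent; the supported path remains adapting reranker_apply.py.

```python
# Hedged reranking sketch: score (question, passage) pairs with the cross-encoder.
# Assumptions: the checkpoint loads as a BERT sequence-classification model (like its
# parent nboost/pt-bert-base-uncased-msmarco), the repo id is a placeholder, and
# logit index 1 is treated as "relevant".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ibm/re2g-reranker-trivia"  # placeholder, not verified
tokenizer = AutoTokenizer.from_pretrained(model_id)
reranker = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Who wrote the novel Dracula?"
passages = [
    "Dracula is an 1897 Gothic horror novel by Bram Stoker.",
    "Vlad the Impaler ruled Wallachia in the 15th century.",
]
inputs = tokenizer([question] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = reranker(**inputs).logits[:, 1]
ranked = [passages[i] for i in scores.argsort(descending=True)]
print(ranked)
```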
Coverage/sakurajimamai
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - information retrieval - reranking license: apache-2.0 --- # Model Card for TriviaQA Question Encoder in Re2G # Model Details > The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output. <img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%"> ## Training, Evaluation and Inference The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g). ## Usage The best way to use the model is by adapting the [dpr_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/dpr/dpr_apply.py) ## Citation ``` @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ``` ## Model Description The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf): > As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. 
Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source. - **Developed by:** IBM - **Shared by [Optional]:** IBM - **Model type:** Query/Passage Reranker - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Parent Model:** [dpr-question_encoder-multiset-base](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - **Resources for more information:** - [GitHub Repo](https://github.com/IBM/kgi-slot-filling) - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf) # Uses ## Direct Use This model can be used for the task of encoding a question to a vector to be used as a query into an Approximate Nearest Neighbors index. It must be used in combination with a context encoder that encodes passages to a vector and indexes them. # Citation **BibTeX:** ```bibtex @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ```
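As a hedged illustration of the Direct Use above (encoding a question into a query vector), the sketch below assumes the checkpoint loads as a standard DPR question encoder; the repo id is a placeholder, and the supported path remains adapting dpr_apply.py.

```python
# Hedged sketch of the Direct Use above: encode a question into a query vector.
# Assumes the checkpoint loads as a standard DPR question encoder; the repo id is a
# placeholder -- the supported path is adapting dpr_apply.py from the Re2G repo.
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizerFast

model_id = "ibm/re2g-question-encoder-trivia"  # placeholder, not verified
tokenizer = DPRQuestionEncoderTokenizerFast.from_pretrained(model_id)
encoder = DPRQuestionEncoder.from_pretrained(model_id)

inputs = tokenizer("Who wrote the novel Dracula?", return_tensors="pt")
query_vector = encoder(**inputs).pooler_output   # search this against the passage index
print(query_vector.shape)
```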
Coyotl/DialoGPT-test2-arthurmorgan
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - text-classification - generated_from_trainer metrics: - f1 - precision - recall model-index: - name: deberta-v3-large-finetuned-dagpap22-only results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-finetuned-dagpap22-only This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0037 - F1: 0.9995 - Precision: 0.9992 - Recall: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:| | 0.1804 | 1.0 | 669 | 0.0222 | 0.9971 | 0.9975 | 0.9967 | | 0.0402 | 2.0 | 1338 | 0.0069 | 0.9990 | 0.9992 | 0.9989 | | 0.0046 | 3.0 | 2007 | 0.0037 | 0.9995 | 0.9992 | 0.9997 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
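A hedged usage sketch for the fine-tuned classifier via the text-classification pipeline; the repo id below is a placeholder for wherever this checkpoint is actually hosted.

```python
# Hedged usage sketch via the text-classification pipeline; the repo id is a
# placeholder for wherever this fine-tuned checkpoint is hosted.
from transformers import pipeline

clf = pipeline("text-classification",
               model="your-namespace/deberta-v3-large-finetuned-dagpap22-only")
print(clf("We propose a novel transformer-based approach to ...", truncation=True))
```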
Culmenus/XLMR-ENIS-finetuned-ner
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:mim_gold_ner", "transformers", "generated_from_trainer", "license:agpl-3.0", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9048086359175662 - name: Recall type: recall value: 0.9309996634129922 - name: F1 type: f1 value: 0.9177173191771731 - name: Accuracy type: accuracy value: 0.9816918820274327 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0712 - Precision: 0.9048 - Recall: 0.9310 - F1: 0.9177 - Accuracy: 0.9817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0849 | 1.0 | 1756 | 0.0712 | 0.9048 | 0.9310 | 0.9177 | 0.9817 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
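A hedged usage sketch for the fine-tuned NER model via the token-classification pipeline; the repo id below is a placeholder for wherever this checkpoint is actually hosted.

```python
# Hedged usage sketch via the token-classification pipeline; the repo id is a
# placeholder for wherever this fine-tuned checkpoint is hosted.
from transformers import pipeline

ner = pipeline("token-classification",
               model="your-namespace/finetuned-ner",
               aggregation_strategy="simple")   # merge word pieces into entity spans
print(ner("Hugging Face was founded in New York City."))
```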
Culmenus/opus-mt-de-is-finetuned-de-to-is
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - information retrieval - reranking license: apache-2.0 --- # Model Card for Wizard of Wikipedia Reranker in Re2G # Model Details > The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output. > >It has been previously established that results from initial retrieval can be greatly improved through the use of a reranker. Therefore we hypothesized that natural language generation systems incorporating retrieval can benefit from reranking. > >In addition to improving the ranking of passages returned from DPR, a reranker can be used after merging the results of multiple retrieval methods with incomparable scores. For example, the scores returned by BM25 are not comparable to the inner products from DPR. Using the scores from a reranker, we can find the top-k documents from the union of DPR and BM25 results. The figure below illustrates our extension of RAG with a reranker. We call our system Re2G (*Re*trieve, *Re*rank, *G*enerate). <img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%"> ## Training, Evaluation and Inference The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g). ## Usage The best way to use the model is by adapting the [reranker_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/reranker/reranker_apply.py) ## Citation ``` @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. 
We make our code available as open source.", } ``` ## Model Description The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf): > As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source. - **Developed by:** IBM - **Shared by [Optional]:** IBM - **Model type:** Query/Passage Reranker - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Parent Model:** [BERT-base trained on MSMARCO](https://huggingface.co/nboost/pt-bert-base-uncased-msmarco) - **Resources for more information:** - [GitHub Repo](https://github.com/IBM/kgi-slot-filling) - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf) # Uses ## Direct Use This model can be used for the task of reranking passage results for a question. # Citation **BibTeX:** ```bibtex @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. 
To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ```
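The Model Details above explain that reranker scores make BM25 and DPR results comparable, so the top-k can be taken from their union. The hedged sketch below shows that merge-then-rerank step; `rerank_score` stands in for a call to this reranker on a (query, passage) pair and is a hypothetical helper, not part of the released code.

```python
# Hedged sketch of the merge-then-rerank idea described above: take the union of BM25
# and DPR candidates and sort them by cross-encoder score. `rerank_score` stands in for
# a call to this reranker on a (query, passage) pair.
def merge_and_rerank(query, bm25_passages, dpr_passages, rerank_score, k=5):
    candidates = list(dict.fromkeys(bm25_passages + dpr_passages))  # union, order-preserving
    ranked = sorted(candidates, key=lambda p: rerank_score(query, p), reverse=True)
    return ranked[:k]
```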
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - information retrieval - reranking license: apache-2.0 --- # Model Card for Wizard of Wikipedia Question Encoder in Re2G # Model Details > The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output. <img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%"> ## Training, Evaluation and Inference The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g). ## Usage The best way to use the model is by adapting the [dpr_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/dpr/dpr_apply.py) ## Citation ``` @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ``` ## Model Description The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf): > As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. 
Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source. - **Developed by:** IBM - **Shared by [Optional]:** IBM - **Model type:** Query/Passage Reranker - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Parent Model:** [dpr-question_encoder-multiset-base](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - **Resources for more information:** - [GitHub Repo](https://github.com/IBM/kgi-slot-filling) - [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf) # Uses ## Direct Use This model can be used for the task of encoding a question to a vector to be used as a query into an Approximate Nearest Neighbors index. It must be used in combination with a context encoder that encodes passages to a vector and indexes them. # Citation **BibTeX:** ```bibtex @inproceedings{glass-etal-2022-re2g, title = "{R}e2{G}: Retrieve, Rerank, Generate", author = "Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.194", doi = "10.18653/v1/2022.naacl-main.194", pages = "2701--2715", abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.", } ```
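As a hedged illustration of how a query vector from this encoder is used, the sketch below does brute-force inner-product search against passage vectors from the matching context encoder; the random arrays are placeholders for real encoder output, and a real system would use an ANN index instead.

```python
# Hedged sketch: inner-product search of a query vector against passage vectors
# (brute force here, standing in for an ANN index; random arrays are placeholders).
import numpy as np

def top_k_passages(query_vector, passage_vectors, k=3):
    scores = passage_vectors @ query_vector      # DPR uses inner-product similarity
    order = np.argsort(-scores)[:k]
    return order, scores[order]

passage_vectors = np.random.rand(100, 768)       # stand-in for context-encoder vectors
query_vector = np.random.rand(768)               # stand-in for this encoder's output
ids, scores = top_k_passages(query_vector, passage_vectors)
print(ids, scores)
```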
DARKVIP3R/DialoGPT-medium-Anakin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: apache-2.0 datasets: - nlpaueb/finer-139 tags: - generated_from_keras_callback model-index: - name: muhtasham/bert-tiny-finetuned-finer-tf results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # muhtasham/bert-tiny-finetuned-finer-tf This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0372 - Validation Loss: 0.0296 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 168822, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1188 | 0.0420 | 0 | | 0.0438 | 0.0313 | 1 | | 0.0372 | 0.0296 | 2 | ### Framework versions - Transformers 4.21.0 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
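A hedged usage sketch in TensorFlow, matching the Keras training above; the repo id is taken from the model-index name in this card but is not otherwise verified.

```python
# Hedged usage sketch in TensorFlow; the repo id is taken from the model-index name
# in this card but is not otherwise verified.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "muhtasham/bert-tiny-finetuned-finer-tf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Revenue increased to $ 5.4 million in 2021 .", return_tensors="tf")
pred_ids = tf.math.argmax(model(**inputs).logits, axis=-1)[0].numpy()
print([model.config.id2label[int(i)] for i in pred_ids])
```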
DCU-NLP/bert-base-irish-cased-v1
[ "pytorch", "tf", "bert", "fill-mask", "transformers", "generated_from_keras_callback", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,244
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-1 results: - metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="mrm8488/q-Taxi-v3-1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
DCU-NLP/electra-base-irish-cased-discriminator-v1
[ "pytorch", "electra", "pretraining", "ga", "transformers", "irish", "license:apache-2.0" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: cc-by-sa-4.0 language: - id --- This KenLM model is trained on the https://huggingface.co/datasets/indonesian-nlp/id_newspapers_2018 dataset. It is a **4-gram** model and it has been pruned. Command used: ```bash ../kenlm/build/bin/lmplz -T tmp -o 4 --prune 0 1 1 < "texts.txt" > "4gram.arpa" ```
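Once the ARPA file has been built with the command above, it can be loaded and used for scoring with the `kenlm` Python bindings. This usage sketch is an addition, not part of the original card; the filename simply matches the output of the command shown.

```python
import kenlm

# Load the pruned 4-gram model produced by lmplz above.
model = kenlm.Model("4gram.arpa")

sentence = "saya suka membaca koran setiap pagi"  # example Indonesian sentence
print(model.score(sentence))       # total log10 probability of the sentence
print(model.perplexity(sentence))  # per-word perplexity
```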
DTAI-KULeuven/mbert-corona-tweets-belgium-topics
[ "pytorch", "jax", "bert", "text-classification", "multilingual", "nl", "fr", "en", "arxiv:2104.09947", "transformers", "Dutch", "French", "English", "Tweets", "Topic classification" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
167
null
--- datasets: - relbert/conceptnet_high_confidence model-index: - name: relbert/roberta-large-conceptnet-average-prompt-c-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7826388888888889 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5454545454545454 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5489614243323442 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.792106725958866 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.93 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6096491228070176 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6134259259259259 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9091456983576918 - name: F1 (macro) type: f1_macro value: 0.9025708311029935 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8744131455399061 - name: F1 (macro) type: f1_macro value: 0.7154495605637783 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6738894907908992 - name: F1 (macro) type: f1_macro value: 0.6505462224375916 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9601446755234054 - name: F1 (macro) type: f1_macro value: 0.8892142921251124 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9031651519899718 - name: F1 (macro) type: f1_macro value: 0.9011299997530173 --- # relbert/roberta-large-conceptnet-average-prompt-c-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/conceptnet_high_confidence](https://huggingface.co/datasets/relbert/conceptnet_high_confidence). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-c-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5454545454545454 - Accuracy on SAT: 0.5489614243323442 - Accuracy on BATS: 0.792106725958866 - Accuracy on U2: 0.6096491228070176 - Accuracy on U4: 0.6134259259259259 - Accuracy on Google: 0.93 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-c-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9091456983576918 - Micro F1 score on CogALexV: 0.8744131455399061 - Micro F1 score on EVALution: 0.6738894907908992 - Micro F1 score on K&H+N: 0.9601446755234054 - Micro F1 score on ROOT09: 0.9031651519899718 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-c-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7826388888888889 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-conceptnet-average-prompt-c-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/conceptnet_high_confidence - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 112 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-c-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
DTAI-KULeuven/robbertje-1-gb-merged
[ "pytorch", "roberta", "fill-mask", "nl", "dataset:oscar", "dataset:oscar (NL)", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2101.05716", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: mit tags: - generated_from_trainer model-index: - name: DeepDunk results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeepDunk This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001372 - train_batch_size: 1 - eval_batch_size: 8 - seed: 1360794382 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
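Since the card gives no inference example, here is a minimal sketch. It assumes the checkpoint keeps the causal-LM head of its gpt2-medium base; the namespace in the repo id is a placeholder, as the card does not state the full id.

```python
from transformers import pipeline

# "<namespace>/DeepDunk" is a placeholder repo id.
generator = pipeline("text-generation", model="<namespace>/DeepDunk")
print(generator("The crowd rose to its feet as", max_new_tokens=40)[0]["generated_text"])
```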
alexandrainst/da-hatespeech-classification-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
866
null
--- language: en thumbnail: http://www.huggingtweets.com/dags/1659144733206/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/722815128501026817/IMWCRzEn_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">DAGs</div> <div style="text-align: center; font-size: 14px;">@dags</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from DAGs. | Data | DAGs | | --- | --- | | Tweets downloaded | 3003 | | Retweets | 31 | | Short tweets | 158 | | Tweets kept | 2814 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qyk6uzo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dags's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dags') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
alexandrainst/da-ner-base
[ "pytorch", "tf", "bert", "token-classification", "da", "dataset:dane", "transformers", "license:cc-by-sa-4.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
78
null
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln59Paraphrase") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln59Paraphrase") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - nebraska - unicamerical legislature - different from federal house and senate text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate. *** - penny has practically no value - should be taken out of circulation - just as other coins have been in us history - lost use - value not enough - to make environmental consequences worthy text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ``` ``` original: had trouble deciding. translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation. *** original: ``` ``` input: not loyal 1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ). *** input: ``` ``` first: ( was complicit in / was involved in ). antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ). *** first: ( have no qualms about / see no issue with ). antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ). *** first: ( do not see eye to eye / disagree often ). antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ). *** first: ``` ``` stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground. *** languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo. *** dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia. *** embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons. ```
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
[ "pytorch", "bert", "arxiv:1908.10084", "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,517
null
--- tags: - generated_from_keras_callback model-index: - name: t5-tiny-finetuned-noisy-en-ms results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # t5-tiny-finetuned-noisy-en-ms This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.21.0.dev0 - TensorFlow 2.6.0 - Datasets 2.1.0 - Tokenizers 0.12.1
Davlan/m2m100_418M-eng-yor-mt
[ "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "M2M100ForConditionalGeneration" ], "model_type": "m2m_100", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1098.81 +/- 321.12 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
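Filling in the TODO above as a sketch only: the repo id, filename, and whether the repo ships VecNormalize statistics are assumptions, since the card does not state them.

```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id and filename -- replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="<namespace>/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# If the repo also contains VecNormalize statistics, load and apply them as well,
# otherwise the reported mean reward may not be reproduced.

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```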
Davlan/mbart50-large-eng-yor-mt
[ "pytorch", "mbart", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rust_image_classification_3 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9645569324493408 --- # rust_image_classification_3 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### nonrust ![nonrust](images/nonrust.png) #### rust ![rust](images/rust.png)
Davlan/mt5_base_eng_yor_mt
[ "pytorch", "mt5", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rust_image_classification_6 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9645569324493408 --- # rust_image_classification_6 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### nonrust ![nonrust](images/nonrust.png) #### rust ![rust](images/rust.png)
Davlan/naija-twitter-sentiment-afriberta-large
[ "pytorch", "tf", "xlm-roberta", "text-classification", "arxiv:2201.08277", "transformers", "has_space" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
61
null
--- license: cc-by-nc-4.0 --- ## COGMEN; Official Pytorch Implementation [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/cogmen-contextualized-gnn-based-multimodal/multimodal-emotion-recognition-on-iemocap)](https://paperswithcode.com/sota/multimodal-emotion-recognition-on-iemocap?p=cogmen-contextualized-gnn-based-multimodal) **CO**ntextualized **G**NN based **M**ultimodal **E**motion recognitio**N** ![Teaser image](./COGMEN_architecture.png) **Picture:** *COGMEN Model Architecture* This repository contains the official Pytorch implementation of the following paper: > **COGMEN: COntextualized GNN based Multimodal Emotion recognitioN**<br> > **Paper:** https://arxiv.org/abs/2205.02455 > **Authors:** Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Vikram Singh, Ashutosh Modi<br> > > **Abstract:** *Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person’s emotions are influenced by the other speaker’s utterances and their own emotional state over the utterances. In this paper, we propose COntextualized Graph Neural Network based Multimodal Emotion recognitioN (COGMEN) system that leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels* ## Requirements - We use PyG (PyTorch Geometric) for the GNN component in our architecture. [RGCNConv](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.RGCNConv) and [TransformerConv](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.TransformerConv) - We use [comet](https://comet.ml) for logging all our experiments and its Bayesian optimizer for hyperparameter tuning. - For textual features we use [SBERT](https://www.sbert.net/).
### Installations - [Install PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html) - [Install Comet.ml](https://www.comet.ml/docs/python-sdk/advanced/) - [Install SBERT](https://www.sbert.net/) ## Preparing datasets for training python preprocess.py --dataset="iemocap_4" ## Training networks python train.py --dataset="iemocap_4" --modalities="atv" --from_begin --epochs=55 ## Run Evaluation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1biIvonBdJWo2TiYyTiQkxZ_V88JEXa_d?usp=sharing) python eval.py --dataset="iemocap_4" --modalities="atv" Please cite the paper using the following citation: ## Citation @inproceedings{joshi-etal-2022-cogmen, title = "{COGMEN}: {CO}ntextualized {GNN} based Multimodal Emotion recognitio{N}", author = "Joshi, Abhinav and Bhat, Ashwani and Jain, Ayush and Singh, Atin and Modi, Ashutosh", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.306", pages = "4148--4164", abstract = "Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person{'}s emotions are influenced by the other speaker{'}s utterances and their own emotional state over the utterances. In this paper, we propose COntextualized Graph Neural Network based Multimodal Emotion recognitioN (COGMEN) system that leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels.",} ## Acknowledgments The structure of our code is inspired by [pytorch-DialogueGCN-mianzhang](https://github.com/mianzhang/dialogue_gcn).
Davlan/xlm-roberta-base-finetuned-amharic
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
401
null
--- language: - pt thumbnail: "Portugues BERT for the Legal Domain" pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - transformers datasets: - assin - assin2 - rufimelo/PortugueseLegalSentences-v0 widget: - source_sentence: "O advogado apresentou as provas ao juíz." sentences: - "O juíz leu as provas." - "O juíz leu o recurso." - "O juíz atirou uma pedra." example_title: "Example 1" model-index: - name: BERTimbau results: - task: name: STS type: STS metrics: - name: Pearson Correlation - assin Dataset type: Pearson Correlation value: 0.71457 - name: Pearson Correlation - assin2 Dataset type: Pearson Correlation value: 0.73545 - name: Pearson Correlation - stsb_multi_mt pt Dataset type: Pearson Correlation value: 0.72383 --- # rufimelo/Legal-BERTimbau-sts-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. rufimelo/Legal-BERTimbau-sts-base is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) base. It is adapted to the Portuguese legal domain and trained for STS on portuguese datasets. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Isto é um exemplo", "Isto é um outro exemplo"] model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-sts-base') model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-sts-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS | Model| Assin | Assin2|stsb_multi_mt pt| avg| | ---------------------------------------- | ---------- | ---------- |---------- |---------- | | Legal-BERTimbau-sts-base| 0.71457| 0.73545 | 0.72383|0.72462| | Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |0.78886| | Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|0.79307| | Legal-BERTimbau-base-TSDAE-sts|0.78814 |0.81380 |0.75777|0.78657| | Legal-BERTimbau-sts-large| 0.76629| 0.82357 | 0.79120|0.79369| | Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |0.79715| | Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|0.80142| | Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261| 0.81863| | Legal-BERTimbau-sts-large-ma-v3| 0.7749| **0.8470**| 0.8364| **0.81943**| | Legal-BERTimbau-large-v2-sts| 0.71665| 0.80106| 0.73724| 0.75165| | Legal-BERTimbau-large-TSDAE-sts| 0.72376| 0.79261| 0.73635| 0.75090| | Legal-BERTimbau-large-TSDAE-sts-v2| 0.81326| 0.83130| 0.786314| 0.81029| | Legal-BERTimbau-large-TSDAE-sts-v3|0.80703 |0.82270 |0.77638 |0.80204 | | ---------------------------------------- | ---------- |---------- |---------- |---------- | | BERTimbau base Fine-tuned for STS|**0.78455** | 0.80626|0.82841|0.80640| | BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|0.81245| | ---------------------------------------- | ---------- |---------- |---------- |---------- | | paraphrase-multilingual-mpnet-base-v2| 0.71457| 0.79831 |0.83999 |0.78429| | paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |**0.84575**|0.80682| ## Training rufimelo/Legal-BERTimbau-sts-base is based on Legal-BERTimbau-large, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) base. It was trained for Semantic Textual Similarity and was submitted to a fine-tuning stage with the [assin](https://huggingface.co/datasets/assin) and [assin2](https://huggingface.co/datasets/assin2) datasets.
## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors If you use this work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } @inproceedings{fonseca2016assin, title={ASSIN: Avaliacao de similaridade semantica e inferencia textual}, author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S}, booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal}, pages={13--15}, year={2016} } @inproceedings{real2020assin, title={The assin 2 shared task: a quick overview}, author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo}, booktitle={International Conference on Computational Processing of the Portuguese Language}, pages={406--412}, year={2020}, organization={Springer} } @InProceedings{huggingface:dataset:stsb_multi_mt, title = {Machine translated multilingual STS benchmark dataset.}, author={Philip May}, year={2021}, url={https://github.com/PhilipMay/stsb-multi-mt} } ```
Davlan/xlm-roberta-base-finetuned-somali
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3864 - Wer: 0.3077 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8894 | 3.67 | 400 | 0.7272 | 0.7232 | | 0.4265 | 7.34 | 800 | 0.4567 | 0.5033 | | 0.1963 | 11.01 | 1200 | 0.4435 | 0.4511 | | 0.1288 | 14.68 | 1600 | 0.3897 | 0.3773 | | 0.0976 | 18.35 | 2000 | 0.4021 | 0.3502 | | 0.079 | 22.02 | 2400 | 0.4140 | 0.3473 | | 0.0646 | 25.69 | 2800 | 0.3993 | 0.3255 | | 0.0502 | 29.36 | 3200 | 0.3864 | 0.3077 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
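The card documents training but not inference. A minimal transcription sketch follows, assuming the checkpoint works with the standard `automatic-speech-recognition` pipeline (it is a CTC fine-tune of wav2vec2-xls-r-300m); the namespace in the repo id is a placeholder, as the card does not state the full id.

```python
from transformers import pipeline

# "<namespace>/wav2vec2-large-xls-r-300m-turkish-colab" is a placeholder repo id.
asr = pipeline("automatic-speech-recognition",
               model="<namespace>/wav2vec2-large-xls-r-300m-turkish-colab")
print(asr("turkish_sample.wav")["text"])
```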
Davlan/xlm-roberta-base-finetuned-zulu
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: opus-mt-ru-en-finetuned-ru-to-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ru-en-finetuned-ru-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
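The card covers training only; here is a minimal inference sketch, assuming the checkpoint remains a standard seq2seq translation model like its Helsinki-NLP/opus-mt-ru-en base. The namespace in the repo id is a placeholder, as the card does not state the full id.

```python
from transformers import pipeline

# "<namespace>/opus-mt-ru-en-finetuned-ru-to-en" is a placeholder repo id.
translator = pipeline("translation", model="<namespace>/opus-mt-ru-en-finetuned-ru-to-en")
print(translator("Это тестовое предложение.")[0]["translation_text"])
```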
Davlan/xlm-roberta-base-sadilar-ner
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: comodoro/testpyramidsrnd 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Dazai/Ok
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1503855535749312517/Guuii_I-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Timothy Dalrymple</div> <div style="text-align: center; font-size: 14px;">@timdalrymple_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Timothy Dalrymple. | Data | Timothy Dalrymple | | --- | --- | | Tweets downloaded | 384 | | Retweets | 83 | | Short tweets | 12 | | Tweets kept | 289 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/cxrysgie/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timdalrymple_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/122k36su) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/122k36su/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/timdalrymple_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Dbluciferm3737/U
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/oooo_honey/1659198603893/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1442126088944062469/p-BikvvS_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Rock'n'Pomp</div> <div style="text-align: center; font-size: 14px;">@oooo_honey</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Rock'n'Pomp. | Data | Rock'n'Pomp | | --- | --- | | Tweets downloaded | 510 | | Retweets | 100 | | Short tweets | 48 | | Tweets kept | 362 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28blz6k6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oooo_honey's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35awxfoc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35awxfoc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/oooo_honey') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Declan/CNN_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-07-30T17:28:17Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
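The card above stops short of an inference example, so here is a minimal, hypothetical sketch of how a PAN-X.de token-classification fine-tune like this one is typically queried through the `transformers` pipeline API. The repository id and the German example sentence are assumptions, not values taken from the card.

```python
# Hypothetical usage sketch for a PAN-X.de (German NER) fine-tune of xlm-roberta-base.
# The model id below is an assumption; substitute the actual checkpoint path.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-panx-de",  # assumed repo id
    aggregation_strategy="simple",               # merge sub-word pieces into entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
# Expected: a list of dicts with entity_group, score, word, start, end
```

`aggregation_strategy="simple"` merges word-piece predictions back into whole-entity spans, which is usually what you want when reporting entities rather than sub-tokens.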
Declan/FoxNews_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- datasets: - relbert/conceptnet_high_confidence model-index: - name: relbert/roberta-large-conceptnet-average-prompt-e-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8862103174603174 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.49258160237388726 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7443023902167871 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.886 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5526315789473685 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5439814814814815 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9085430164230828 - name: F1 (macro) type: f1_macro value: 0.9007282568605484 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8380281690140845 - name: F1 (macro) type: f1_macro value: 0.656362704638303 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6657638136511376 - name: F1 (macro) type: f1_macro value: 0.6498144246049421 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9565277874382695 - name: F1 (macro) type: f1_macro value: 0.8746667490411619 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8896897524287057 - name: F1 (macro) type: f1_macro value: 0.8862724322889753 --- # relbert/roberta-large-conceptnet-average-prompt-e-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/conceptnet_high_confidence](https://huggingface.co/datasets/relbert/conceptnet_high_confidence). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-e-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5 - Accuracy on SAT: 0.49258160237388726 - Accuracy on BATS: 0.7443023902167871 - Accuracy on U2: 0.5526315789473685 - Accuracy on U4: 0.5439814814814815 - Accuracy on Google: 0.886 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-e-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9085430164230828 - Micro F1 score on CogALexV: 0.8380281690140845 - Micro F1 score on EVALution: 0.6657638136511376 - Micro F1 score on K&H+N: 0.9565277874382695 - Micro F1 score on ROOT09: 0.8896897524287057 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-e-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8862103174603174 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-conceptnet-average-prompt-e-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/conceptnet_high_confidence - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 85 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-conceptnet-average-prompt-e-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Declan/FoxNews_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
Training and evaluation logs are available at https://wandb.ai/yepster/long-t5-tglobal-small/runs/2wiy76y6?workspace=user-yepster
Declan/NewYorkPost_model_v1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5_large_headline_generator_testing_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_large_headline_generator_testing_1 This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0183 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1969 | 0.77 | 500 | 1.0183 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
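Since the headline-generator card gives training hyperparameters but no inference snippet, the following sketch shows one plausible way to query a T5-based headline model via the `text2text-generation` pipeline. The checkpoint id and the input article are placeholders/assumptions, not values confirmed by the card.

```python
# Minimal inference sketch for a T5 headline-generation fine-tune (model id is a placeholder).
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5_large_headline_generator_testing_1")

article = (
    "Researchers released a new open-source library for training large "
    "language models more efficiently on commodity GPUs."
)
# Beam search with a short max_length tends to produce headline-like outputs.
print(generator(article, max_length=32, num_beams=4)[0]["generated_text"])
```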
DeepChem/ChemBERTa-77M-MTR
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "RobertaForRegression" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7,169
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: imagefolder metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-afhq-cats-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/samwit/ddpm-afhq-cats-128/tensorboard?#scalars)
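The diffusion card above leaves its usage snippet as a TODO; the sketch below shows how an unconditional DDPM checkpoint is generally sampled with 🤗 Diffusers. The repository id is inferred from the TensorBoard link in the card and should be treated as an assumption.

```python
# Hedged sketch of sampling from an unconditional DDPM checkpoint with 🤗 Diffusers.
# "samwit/ddpm-afhq-cats-128" comes from the TensorBoard link above; treat it as an assumption.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("samwit/ddpm-afhq-cats-128")
pipe.to("cuda")  # optional; drop this line to run on CPU (much slower)

image = pipe(num_inference_steps=1000).images[0]  # one 128x128 sample
image.save("cat_sample.png")
```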
DeepPavlov/rubert-base-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1908.10084", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46,991
2022-07-31T03:39:16Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: data-augmentation-whitenoise-timit-2310 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # data-augmentation-whitenoise-timit-2310 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5916 - Wer: 0.3408 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.6731 | 0.67 | 500 | 2.7553 | 1.0 | | 1.0656 | 1.34 | 1000 | 0.5963 | 0.5297 | | 0.5065 | 2.01 | 1500 | 0.4898 | 0.4654 | | 0.3212 | 2.68 | 2000 | 0.4265 | 0.4331 | | 0.2492 | 3.35 | 2500 | 0.4020 | 0.4073 | | 0.2116 | 4.02 | 3000 | 0.4152 | 0.3935 | | 0.1719 | 4.69 | 3500 | 0.4258 | 0.3858 | | 0.1544 | 5.36 | 4000 | 0.4542 | 0.3818 | | 0.1474 | 6.03 | 4500 | 0.4612 | 0.3821 | | 0.1248 | 6.7 | 5000 | 0.4813 | 0.3749 | | 0.1148 | 7.37 | 5500 | 0.5131 | 0.3772 | | 0.1145 | 8.04 | 6000 | 0.5383 | 0.3714 | | 0.0986 | 8.71 | 6500 | 0.5288 | 0.3777 | | 0.091 | 9.38 | 7000 | 0.5071 | 0.3869 | | 0.0789 | 10.05 | 7500 | 0.5256 | 0.3819 | | 0.0747 | 10.72 | 8000 | 0.5287 | 0.3711 | | 0.0687 | 11.39 | 8500 | 0.5179 | 0.3754 | | 0.072 | 12.06 | 9000 | 0.7438 | 0.3702 | | 0.0646 | 12.73 | 9500 | 0.5293 | 0.3777 | | 0.0621 | 13.4 | 10000 | 0.5536 | 0.3692 | | 0.0587 | 14.08 | 10500 | 0.5214 | 0.3712 | | 0.0538 | 14.75 | 11000 | 0.4853 | 0.3694 | | 0.0614 | 15.42 | 11500 | 0.5439 | 0.3637 | | 0.0493 | 16.09 | 12000 | 0.5087 | 0.3649 | | 0.0441 | 16.76 | 12500 | 0.5736 | 0.3621 | | 0.038 | 17.43 | 13000 | 0.7295 | 0.3650 | | 0.0397 | 18.1 | 13500 | 0.5722 | 0.3586 | | 0.0357 | 18.77 | 14000 | 0.5701 | 0.3616 | | 0.0349 | 19.44 | 14500 | 0.5661 | 0.3599 | | 0.0318 | 20.11 | 15000 | 0.5346 | 0.3572 | | 0.0288 | 20.78 | 15500 | 0.6972 | 0.3597 | | 0.0331 | 21.45 | 16000 | 0.5288 | 0.3576 | | 0.0304 | 22.12 | 16500 | 0.5813 | 0.3551 | | 0.0268 | 22.79 | 17000 | 0.5439 | 0.3557 | | 0.0255 | 23.46 | 17500 | 0.5790 | 0.3531 | | 0.0244 | 24.13 | 18000 | 0.5794 | 0.3493 | | 0.0335 | 24.8 | 18500 | 0.5943 | 0.3515 | | 0.026 | 25.47 | 19000 | 0.5737 | 0.3462 | | 0.0199 | 26.14 | 19500 | 0.5794 | 0.3469 | | 0.0213 | 26.81 | 20000 | 0.5955 | 0.3448 | | 0.0199 | 27.48 | 20500 | 0.5927 | 0.3407 | | 0.0143 | 28.15 | 21000 | 0.5975 | 0.3415 | | 0.0167 | 28.82 | 21500 | 0.5835 | 0.3411 | | 0.0141 | 29.49 | 22000 | 0.5916 | 0.3408 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
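The ASR card above reports WER but no inference code; here is a rough sketch of running a wav2vec2 CTC fine-tune through the `automatic-speech-recognition` pipeline. The model id and the audio file name are assumptions for illustration only.

```python
# Rough usage sketch for a wav2vec2-base CTC fine-tune such as the one described above.
# The model id and the audio file are placeholders; the pipeline expects 16 kHz mono audio.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="data-augmentation-whitenoise-timit-2310")

print(asr("sample.wav")["text"])  # returns the decoded transcript
```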
Deniskin/essays_small_2000
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-31T06:14:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - precision - f1 model-index: - name: distilbert-base-uncased_fine_tuned_title results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_fine_tuned_title This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2615 - Accuracy: {'accuracy': 0.877634820695319} - Recall: {'recall': 0.8474786132372805} - Precision: {'precision': 0.8953502200023784} - F1: {'f1': 0.8707569536806801} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|:------------------------------:|:---------------------------------:|:--------------------------:| | 0.3093 | 1.0 | 2284 | 0.3021 | {'accuracy': 0.8779085683000274} | {'recall': 0.8560333183250788} | {'precision': 0.8888499298737728} | {'f1': 0.8721330275229358} | | 0.2459 | 2.0 | 4568 | 0.2909 | {'accuracy': 0.8894059676977827} | {'recall': 0.8513057181449797} | {'precision': 0.9153957879448076} | {'f1': 0.8821882654846612} | | 0.1696 | 3.0 | 6852 | 0.3259 | {'accuracy': 0.8808102929099371} | {'recall': 0.8595227375056281} | {'precision': 0.8915353181552831} | {'f1': 0.875236403232277} | | 0.1179 | 4.0 | 9136 | 0.4946 | {'accuracy': 0.8729811114152751} | {'recall': 0.8610986042323278} | {'precision': 0.8756868131868132} | {'f1': 0.8683314415437005} | | 0.0775 | 5.0 | 11420 | 0.6547 | {'accuracy': 0.8708458800985491} | {'recall': 0.8041422782530392} | {'precision': 0.9202627850057967} | {'f1': 0.8582927854868745} | | 0.0522 | 6.0 | 13704 | 0.6699 | {'accuracy': 0.8768683274021353} | {'recall': 0.8325078793336335} | {'precision': 0.9067058967757754} | {'f1': 0.8680241769849187} | | 0.0406 | 7.0 | 15988 | 0.8149 | {'accuracy': 0.8739118532712838} | {'recall': 0.8330706888788834} | {'precision': 0.9002554433767181} | {'f1': 0.8653610055539316} | | 0.0298 | 8.0 | 18272 | 0.8906 | {'accuracy': 0.8753353408157679} | {'recall': 0.8421882035119316} | {'precision': 0.8952973555103506} | {'f1': 0.8679310944840787} | | 0.0217 | 9.0 | 20556 | 1.0192 | {'accuracy': 0.8754448398576512} | {'recall': 0.8624493471409275} | {'precision': 0.8791738382099827} | {'f1': 0.8707312915506562} | | 0.017 | 10.0 | 22840 | 1.0550 | {'accuracy': 0.8758828360251848} | {'recall': 0.8556956325979289} | {'precision': 0.8852917200419238} | {'f1': 0.8702421155056951} | | 0.0139 | 11.0 | 25124 | 1.0873 | {'accuracy': 0.8728716123733917} | {'recall': 0.8582845565060784} | {'precision': 0.8776473296500921} | {'f1': 0.8678579558388345} | | 0.0114 | 12.0 | 27408 | 1.1506 | {'accuracy': 0.8716123733917328} | {'recall': 0.8628995947771274} | {'precision': 0.8718298646650745} | {'f1': 0.8673417435085139} 
| | 0.0061 | 13.0 | 29692 | 1.2574 | {'accuracy': 0.8696961401587736} | {'recall': 0.874943719045475} | {'precision': 0.8596549435965495} | {'f1': 0.8672319535869686} | | 0.0035 | 14.0 | 31976 | 1.2490 | {'accuracy': 0.8784560635094443} | {'recall': 0.85006753714543} | {'precision': 0.8947867298578199} | {'f1': 0.8718540752713001} | | 0.0028 | 15.0 | 34260 | 1.2615 | {'accuracy': 0.877634820695319} | {'recall': 0.8474786132372805} | {'precision': 0.8953502200023784} | {'f1': 0.8707569536806801} | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
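The title-classifier card likewise omits a usage example; the sketch below assumes the checkpoint can be loaded as an ordinary `text-classification` pipeline. The model id, the example title, and the label names in the comment are all assumptions.

```python
# Illustrative sketch only: checkpoint id, example title, and label names are assumptions.
from transformers import pipeline

classifier = pipeline("text-classification", model="distilbert-base-uncased_fine_tuned_title")

print(classifier("10 things you won't believe about transformer models"))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- label names depend on how the head was configured
```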