| Column | Type | Values |
|:-------------|:----------------------|:---------------------|
| modelId | string | lengths 4 to 81 |
| tags | list | |
| pipeline_tag | string | 17 distinct classes |
| config | dict | |
| downloads | int64 | 0 to 59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | lengths 51 to 438k |
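The rows that follow conform to this schema, one cell value per line. As a quick orientation, here is a minimal sketch of how a dump with these columns could be loaded and inspected with the Hugging Face `datasets` library; the dataset id in the snippet is a placeholder assumption, not the actual source of this table.

```python
from collections import Counter

from datasets import load_dataset

# Minimal sketch: load a metadata dump with the columns described above.
# NOTE: "user/model-metadata-dump" is a placeholder dataset id, not the
# actual source of this table.
ds = load_dataset("user/model-metadata-dump", split="train")

# Expected columns: modelId, tags, pipeline_tag, config, downloads,
# first_commit, card.
print(ds.column_names)

# Distribution of pipeline tags across the dump.
print(Counter(ds["pipeline_tag"]).most_common(5))

# A quick look at a single row.
row = ds[0]
print(row["modelId"], row["downloads"], row["first_commit"])
print(row["card"][:200])  # first 200 characters of the model card text
```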
Declan/CNN_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/CNN_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - generated_from_keras_callback model-index: - name: dung1308/RM_system_not_mixed__NLP_model_90_10_CPU_2_epochs results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dung1308/RM_system_not_mixed__NLP_model_90_10_CPU_2_epochs This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.2989 - Validation Loss: 4.2424 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -275, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.1315 | 4.5299 | 0 | | 4.2989 | 4.2424 | 1 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.7.0 - Tokenizers 0.11.0
Declan/CNN_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/CNN_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/ChicagoTribune_model_v7
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/ChicagoTribune_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/FoxNews_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/FoxNews_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/HuffPost_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/HuffPost_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/NPR_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-12-06T10:03:14Z
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/NewYorkTimes_model_v1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-06T10:04:19Z
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Declan/NewYorkTimes_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-12-06T10:04:46Z
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeepBasak/Slack
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-06T10:13:31Z
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeepChem/ChemBERTa-10M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
90
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeepChem/ChemBERTa-5M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
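The DeepChem/ChemBERTa-5M-MLM record above lists `fill-mask` as its pipeline tag and `RobertaForMaskedLM` as its architecture. A minimal, hedged sketch of querying it through the generic `transformers` fill-mask pipeline might look like the following; the SMILES-style input is an assumption about ChemBERTa's chemistry domain, not information carried in the record:

```python
from transformers import pipeline

# Load the masked-language model from the record through the generic fill-mask pipeline.
unmasker = pipeline("fill-mask", model="DeepChem/ChemBERTa-5M-MLM")

# ChemBERTa-style checkpoints are usually trained on SMILES strings; this input is
# illustrative only and is not taken from the record itself.
smiles = f"CC(=O)O{unmasker.tokenizer.mask_token}"
for prediction in unmasker(smiles, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 4))
```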
DeepPavlov/distilrubert-tiny-cased-conversational-v1
[ "pytorch", "distilbert", "ru", "arxiv:2205.02340", "transformers" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9141
null
DeepPavlov/rubert-base-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1908.10084", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46991
null
DeepPavlov/rubert-base-cased
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
148127
null
DeepPavlov/xlm-roberta-large-en-ru-mnli
[ "pytorch", "xlm-roberta", "text-classification", "en", "ru", "dataset:glue", "dataset:mnli", "transformers", "xlm-roberta-large", "xlm-roberta-large-en-ru", "xlm-roberta-large-en-ru-mnli", "has_space" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
null
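The DeepPavlov/xlm-roberta-large-en-ru-mnli record above is tagged `text-classification` with an `XLMRobertaForSequenceClassification` head and MNLI among its datasets. A small illustrative sketch of scoring a premise/hypothesis pair could look like this; the example sentences are invented, and the label names are read from the checkpoint's config rather than assumed:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "DeepPavlov/xlm-roberta-large-en-ru-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI-style input: a premise/hypothesis pair encoded as a single sequence pair.
premise = "The weather report said it would rain all day."
hypothesis = "It is going to be sunny."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Label order (entailment/neutral/contradiction or similar) comes from the checkpoint
# config, so id2label is looked up instead of hard-coded.
probs = logits.softmax(dim=-1).squeeze()
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```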
DeepPavlov/xlm-roberta-large-en-ru
[ "pytorch", "xlm-roberta", "feature-extraction", "en", "ru", "transformers" ]
feature-extraction
{ "architectures": [ "XLMRobertaModel" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
190
null
DeividasM/wav2vec2-large-xlsr-53-lithuanian
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "lt", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
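The DeividasM/wav2vec2-large-xlsr-53-lithuanian record above is the only `automatic-speech-recognition` entry in this block, with a `Wav2Vec2ForCTC` architecture. A hedged sketch of transcribing audio with it might look like the following; the zero-filled array is only a stand-in for real 16 kHz mono audio, which is not part of the record:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "DeividasM/wav2vec2-large-xlsr-53-lithuanian"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Stand-in for one second of real 16 kHz mono audio (e.g. loaded with torchaudio);
# the clip and its sampling rate are assumptions, not data from the record.
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```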
DeltaHub/adapter_t5-3b_cola
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
DeltaHub/adapter_t5-3b_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
DeltaHub/adapter_t5-3b_qnli
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
DeltaHub/lora_t5-base_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
DemangeJeremy/4-sentiments-with-flaubert
[ "pytorch", "flaubert", "text-classification", "fr", "transformers", "sentiments", "french", "flaubert-large" ]
text-classification
{ "architectures": [ "FlaubertForSequenceClassification" ], "model_type": "flaubert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
226
null
Denilson/gbert-base-germaner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Deniskin/essays_small_2000i
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Deniskin/gpt3_medium
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
52
null
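The Deniskin/gpt3_medium record above is tagged `text-generation` over a `GPT2LMHeadModel`. A minimal sketch using the generic text-generation pipeline could look like this; the prompt text and language are assumptions, since the record does not document either:

```python
from transformers import pipeline

# Generic causal text-generation pipeline over the GPT-2-style checkpoint above.
generator = pipeline("text-generation", model="Deniskin/gpt3_medium")

# Prompt is illustrative only; the record says nothing about expected language or domain.
outputs = generator("Однажды вечером", max_new_tokens=40, do_sample=True)
print(outputs[0]["generated_text"])
```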
Denny29/DialoGPT-medium-asunayuuki
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMixFT_en-fil
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMixFT_en-hi
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMixFT_en-id
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMixFT_en-ja
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMixFT_en-my
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 268.09 +/- 15.44 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
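The stable-baselines3 card in the row above leaves its usage snippet as a TODO with an elided import. A possible completion is sketched below; the `repo_id` and `filename` passed to `load_from_hub` are hypothetical placeholders, since the card does not state where the trained agent is published, and `gymnasium` (with the Box2D extra) is assumed as the environment backend.

```python
# Hedged sketch of the TODO'd usage snippet; repo_id and filename are hypothetical.
import gymnasium as gym  # newer gymnasium releases may register the env as LunarLander-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical Hub location of the trained agent (not given in the card).
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```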
DeskDown/MarianMixFT_en-th
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMixFT_en-vi
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMix_en-ja-10
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMix_en-zh-10
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DeskDown/MarianMix_en-zh_to_vi-ms-hi-ja
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Despin89/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Dev-DGT/food-dbert-multiling
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Devid/DialoGPT-small-Miku
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Devmapall/paraphrase-quora
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
3
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
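The T5 config in the record above is the only one in this block whose `task_specific_params` are actually populated (a "summarize: " prefix, beam count, and length limits). As a rough illustration of how those recorded per-task defaults would be read and passed to generation, here is a sketch against the listed checkpoint; the input sentence is made up, and since the model is a paraphrase fine-tune, the point is only to show how the config values flow into `generate`, not to produce a good summary.

```python
# Hedged sketch: reading the per-task defaults recorded in the config above
# and forwarding them to generate(). The example sentence is invented.
from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Devmapall/paraphrase-quora"          # modelId from the record above
config = AutoConfig.from_pretrained(model_id)
params = config.task_specific_params["summarization"]  # prefix, beams, length limits

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = params["prefix"] + "The quick brown fox jumped over the lazy dog near the river bank."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=params["num_beams"],
    max_length=params["max_length"],
    min_length=params["min_length"],
    length_penalty=params["length_penalty"],
    no_repeat_ngram_size=params["no_repeat_ngram_size"],
    early_stopping=params["early_stopping"],
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```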
Devrim/prism-default
[ "license:mit" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DevsIA/Devs_IA
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DevsIA/imagenes
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DewiBrynJones/wav2vec2-large-xlsr-welsh
[ "cy", "dataset:common_voice", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
DheerajPranav/Dialo-GPT-Rick-bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - exbert license: mit --- # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, ColD Fusion-based model outperforms RoBERTa by 2.45 points in average without any changes to the architecture. ### How to use Best way to use is to finetune on your own task, but you can also extract features directly. To get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = RobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion') model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Evaluation results See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html) When fine-tuned on downstream tasks, this model achieves the following results: ### BibTeX entry and citation info ```bibtex @article{ColDFusion, author = {Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem ChoshenYinhan Liu and}, title = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning}, journal = {CoRR}, volume = {abs/2212.01378}, year = {2022}, url = {https://arxiv.org/abs/2212.01378}, archivePrefix = {arXiv}, eprint = {2212.01378}, } ``` <a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
Dhito/am
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **play directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: sun1638650145/ML-Agents-Pyramids 3. Step 2: Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
DicoTiar/wisdomfiy
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - zh license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small zh - howl results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: zh-CN split: test args: 'config: zh, split: test' metrics: - name: Wer type: wer value: 75.2976752976753 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small zh - howl This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3644 - Wer: 75.2977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2309 | 1.51 | 1000 | 0.3694 | 76.4411 | | 0.1069 | 3.02 | 2000 | 0.3644 | 75.2977 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
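The card above stops at the training recipe and does not show inference code. Below is a minimal sketch of transcription with the `transformers` pipeline; the checkpoint path is a placeholder, since the card does not state where this fine-tune is published, and the sample file name is hypothetical.

```python
from transformers import pipeline

# Placeholder path: point this at the fine-tuned checkpoint described above
# (a local output directory or its Hub repo id).
asr = pipeline(
    "automatic-speech-recognition",
    model="./whisper-small-zh",
    chunk_length_s=30,  # Whisper operates on 30-second windows
)

# Transcribe a local audio file (16 kHz mono is what Whisper expects).
print(asr("sample.wav")["text"])
```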
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2203 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8038 | 1.0 | 250 | 0.3028 | 0.913 | 0.9115 | | 0.246 | 2.0 | 500 | 0.2203 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.5.1 - Tokenizers 0.11.6
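No usage example is included above, so here is a minimal sketch of running the fine-tuned classifier with the `transformers` pipeline. The model path and example sentence are placeholders; the card does not state the published repo id.

```python
from transformers import pipeline

# Placeholder: replace with the actual checkpoint directory or Hub repo id.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion label, not just the top one
)

print(classifier("I just got the job I always wanted!"))
```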
DimaOrekhov/cubert-method-name
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: en license: mit tags: - vision model_name: microsoft/git-base-vqav2 inference: false pipeline_tag: visual-question-answering --- # GIT (GenerativeImage2Text), base-sized, fine-tuned on VQAv2 GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on VQAv2. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a large number of (image, text) pairs. The goal for the model is simply to predict the next text token, given the image tokens and the previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for visual question answering (VQA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example-2). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). Note, however, that this describes the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", a smaller variant of GIT trained on 10 million image-text pairs. Next, the model was fine-tuned on VQAv2. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
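The "How to use" section above defers to the documentation. For convenience, here is a minimal sketch following the GIT visual question answering pattern from the `transformers` docs; the sample image URL and question are illustrative placeholders, not part of the original card.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-vqav2")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vqav2")

# Illustrative sample image (COCO validation photo of two cats).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# The question is prepended with the CLS token; generation then continues with the answer.
question = "how many cats are there?"
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = torch.tensor([processor.tokenizer.cls_token_id] + input_ids).unsqueeze(0)

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```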
DimaOrekhov/transformer-method-name
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-12-06T11:08:00Z
--- language: - hi license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Hindi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 hi type: mozilla-foundation/common_voice_11_0 config: hi split: test args: hi metrics: - name: Wer type: wer value: 22.429210134128166 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hindi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 hi dataset. It achieves the following results on the evaluation set: - Loss: 0.6260 - Wer: 22.4292 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-06 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0176 | 7.01 | 500 | 0.4165 | 22.5066 | | 0.0015 | 14.01 | 1000 | 0.5186 | 22.2573 | | 0.0004 | 21.02 | 1500 | 0.5741 | 22.2401 | | 0.0002 | 28.02 | 2000 | 0.6025 | 22.3834 | | 0.0002 | 36.01 | 2500 | 0.6197 | 22.3977 | | 0.0002 | 43.01 | 3000 | 0.6260 | 22.4292 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
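The card reports word error rate (WER) but does not show how the metric is computed. The snippet below is a generic sketch using the `evaluate` library; the prediction and reference strings are made up for illustration and are not taken from the Common Voice test set.

```python
import evaluate

# The same metric that produced the Wer figures reported above.
wer_metric = evaluate.load("wer")

# Hypothetical model outputs and ground-truth transcripts, for illustration only.
predictions = ["नमस्ते आप कैसे हैं", "आज मौसम अच्छा है"]
references = ["नमस्ते आप कैसे हैं", "आज का मौसम अच्छा है"]

# The card reports WER as a percentage, hence the factor of 100.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```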
DivyanshuSheth/T5-Seq2Seq-Final
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: mit tags: - vision model_name: microsoft/git-base-textvqa inference: false pipeline_tag: visual-question-answering --- # GIT (GenerativeImage2Text), base-sized, fine-tuned on TextVQA GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextVQA. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a large number of (image, text) pairs. The goal for the model is simply to predict the next text token, given the image tokens and the previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for visual question answering (VQA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). Note, however, that this describes the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", a smaller variant of GIT trained on 10 million image-text pairs. Next, the model was fine-tuned on TextVQA. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
Dmitriiserg/Pxd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: et license: cc-by-4.0 datasets: - ERRnews --- # mBART ERRnews A pretrained mbart-large-cc25 model finetuned on the ERRnews Estonian news story dataset. ## How to use Here is how to use this model to get a summary of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("TalTechNLP/mBART-ERRnews") model = AutoModelForSeq2SeqLM.from_pretrained("TalTechNLP/mBART-ERRnews") text = "Riigikogu rahanduskomisjon võttis esmaspäeval maha riigieelarvesse esitatud investeeringuettepanekutest siseministeeriumi investeeringud koolidele ja lasteaedadele, sest komisjoni hinnangul ei peaks siseministeerium tegelema investeeringutega väljaspoole oma vastutusala. Komisjoni esimees Aivar Kokk ütles, et komisjon lähtus otsuse tegemisel riigikontrolör Janar Holmi soovitusest ja seadustest." inputs = tokenizer(text, return_tensors='pt', max_length=1024) summary_ids = model.generate(inputs['input_ids']) summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids] ``` ## Training data The mBART model was finetuned on [ERRnews](https://huggingface.co/datasets/TalTechNLP/ERRnews), a dataset consisting of 10 420 Estonian news story transcripts and summaries. ### Training The model was trained on 2 cloud GPUs with a batch size of 16 for 16 epochs. The optimizer used was Adam with a learning rate of 5e-05 and betas of 0.9 and 0.999. ## Evaluation results This model achieves the following results: | Dataset | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-L-SUM | |:-------:|:-------:|:-------:|:-------:|:-----------:| | ERRnews | 19.2 | 6.7 | 16.1 | 17.4 | ### BibTeX entry and citation info ```bibtex @article{henryabstractive, title={Abstractive Summarization of Broadcast News Stories for {Estonian}}, author={Henry, H{\"a}rm and Tanel, Alum{\"a}e}, journal={Baltic J. Modern Computing}, volume={10}, number={3}, pages={511-524}, year={2022} } ```
Doiman/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- language: - sv license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: my_tuned_whisper_cn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_tuned_whisper_cn This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5297 - eval_wer: 80.2457 - eval_runtime: 457.7207 - eval_samples_per_second: 2.311 - eval_steps_per_second: 0.291 - epoch: 2.02 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
DongHyoungLee/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- language: - it license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Tiny It 3 - Gianluca Ruberto results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: it split: test[:10%] args: 'config: hi, split: test' metrics: - name: Wer type: wer value: 43.233499722684414 --- # Whisper Tiny It 3 - Gianluca Ruberto This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.711673 - Wer: 43.233500 ## Model description This model is the OpenAI Whisper tiny transformer adapted for Italian audio-to-text transcription. Weight decay is set to 0.1 to cope with overfitting. ## Intended uses & limitations The model is available through its [HuggingFace web app](https://huggingface.co/spaces/GIanlucaRub/whisper-it). ## Training and evaluation data The training data is the initial 10% of the train and validation splits of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from the Mozilla Foundation. The evaluation data is the initial 10% of the test split of Italian Common Voice. Weight decay also gave slightly better results on the evaluation dataset. ## Training procedure After loading the pre-trained model, it was trained on the dataset. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP - weight_decay: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5837 | 0.95 | 1000 | 0.790374 | 50.2981 | | 0.4183 | 1.91 | 2000 | 0.730100 | 45.4174 | | 0.3147 | 2.86 | 3000 | 0.713152 | 44.3150 | | 0.2670 | 3.82 | 4000 | 0.711673 | 43.2335 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
Doogie/Waynehills-KE-T5-doogie
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('fluorine/sd-class-butterflies-32') image = pipeline().images[0] image ```
Waynehillsdev/Waynehills_summary_tensorflow
[ "tf", "t5", "text2text-generation", "transformers", "generated_from_keras_callback", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - generated_from_trainer model-index: - name: recipe-nlg-gpt2-ingredient-to-recipe-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-nlg-gpt2-ingredient-to-recipe-model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0 - Datasets 2.7.1 - Tokenizers 0.13.2
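The card gives no inference example. Below is a minimal sketch of generating a recipe with the `transformers` text-generation pipeline; the model path is a placeholder and the prompt format is an assumption, since the card does not document how ingredients were encoded during fine-tuning.

```python
from transformers import pipeline

# Placeholder: substitute the actual checkpoint directory or Hub repo id.
generator = pipeline(
    "text-generation",
    model="recipe-nlg-gpt2-ingredient-to-recipe-model",
)

# Assumed prompt format; the real fine-tuning template may differ.
prompt = "Ingredients: chicken, garlic, lemon, olive oil.\nRecipe:"
output = generator(prompt, max_new_tokens=120, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```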
Doquey/DialoGPT-small-Luisbot1
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -204.64 +/- 88.46 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo.py' 'gym_id': 'LunarLander-v2' 'seed': 1 'learning_rate': 0.00025 'total_timesteps': 25000 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'lsaulier/ppo-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
Doxophobia/DialoGPT-medium-celeste
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: roberta-large-finetuned-mnli-batch_size_4_100000_samples results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mnli split: train args: mnli metrics: - name: Accuracy type: accuracy value: 0.3544574630667346 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-finetuned-mnli-batch_size_4_100000_samples This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0980 - Accuracy: 0.3545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.1026 | 1.0 | 25000 | 1.0980 | 0.3545 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
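As a usage illustration, the sketch below classifies a premise/hypothesis pair with this kind of MNLI fine-tune. The checkpoint path is a placeholder, and the mapping from logit index to entailment/neutral/contradiction is not stated in the card, so both are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder: replace with the actual checkpoint directory or Hub repo id.
ckpt = "roberta-large-finetuned-mnli-batch_size_4_100000_samples"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# MNLI examples are encoded as a sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Which column is entailment/neutral/contradiction depends on the training script.
print(probs)
```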
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: creativeml-openrail-m --- Science Fiction/Horror monster textual embedding for Stable Diffusion 2.0. This embedding is trained initially on 49 images from Tod Ryan's Artstation (https://www.artstation.com/todryan), then further tuned with an expanded dataset that includes 119 additional images generated with the initial embedding alongside specific prompting tailored to improving the quality. These generated training images were color graded collectively to mimic the visual aesthetic of modern horror media. I have also included the initial version of the embedding that circulated on the Stable Diffusion discord. It is excellent for disgusting (but repetitive) monster/grossness. Example generations: ![04855-2889499143-Macro Terror.png](https://s3.amazonaws.com/moonup/production/uploads/1670330732386-632799fd3476801d8f27a0b9.png) _Prompt: Macro Terror, Steps: 15, Sampler: DPM++ SDE Karras, CFG scale: 3.5, Seed: 2889499141, Size: 768x768, Model hash: 2c02b20a_ ![04865-2324809867-Macro Terror.png](https://s3.amazonaws.com/moonup/production/uploads/1670330911955-632799fd3476801d8f27a0b9.png) _Prompt: Macro Terror, Steps: 15, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2324809867, Size: 768x768, Model hash: 2c02b20a_ ![04869-2276531391-Macro Terror.png](https://s3.amazonaws.com/moonup/production/uploads/1670331119173-632799fd3476801d8f27a0b9.png) _Prompt: Macro Terror, Steps: 15, Sampler: DPM++ 2S a, CFG scale: 5, Seed: 2276531391, Size: 768x768, Model hash: 2c02b20a_
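The card shows prompts and sampler settings but no loading code. A possible way to use a Stable Diffusion 2.0 textual-inversion embedding with `diffusers` is sketched below; the embedding file name and trigger token are assumptions (the example prompts suggest the phrase "Macro Terror"), so adjust them to the actual files in this repository.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Assumed file name and token for the embedding distributed with this card.
pipe.load_textual_inversion("macro-terror.pt", token="<macro-terror>")

image = pipe(
    "<macro-terror>, grotesque science-fiction horror creature",
    num_inference_steps=15,
    guidance_scale=3.5,
).images[0]
image.save("macro_terror.png")
```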
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.96 +/- 25.61 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
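The usage section above is left as a TODO template. The sketch below fills it in with the typical `huggingface_sb3` loading pattern; the repo id and zip file name are placeholders, since the card does not state them.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename: substitute the actual values for this model.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```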
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: mit tags: - generated_from_trainer model-index: - name: camembert-base-squad-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-base-squad-fr This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7504 | 1.0 | 3581 | 1.6470 | | 1.4776 | 2.0 | 7162 | 1.5182 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
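For reference, a hedged sketch of querying a checkpoint like this with the `transformers` pipeline API; the model id is a placeholder, since the repository's namespace is not stated in this card.

```python
# Hedged sketch: extractive question answering in French with a fine-tuned
# CamemBERT checkpoint. The model id is a placeholder.
from transformers import pipeline

qa = pipeline("question-answering", model="<namespace>/camembert-base-squad-fr")

result = qa(
    question="Où se trouve la tour Eiffel ?",
    context="La tour Eiffel est située à Paris, sur le Champ-de-Mars.",
)
print(result["answer"], result["score"])
```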
DoyyingFace/bert-asian-hate-tweets-concat-clean
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-historic-multilingual-cased-squad-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-historic-multilingual-cased-squad-fr This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9769 | 1.0 | 3660 | 1.8046 | | 1.6309 | 2.0 | 7320 | 1.7001 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
albert-base-v1
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38,156
2022-12-06T13:03:24Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-french-europeana-cased-squad-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-french-europeana-cased-squad-fr This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7031 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9069 | 1.0 | 3539 | 1.7853 | | 1.6263 | 2.0 | 7078 | 1.7031 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
albert-xlarge-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
341
2022-12-06T13:09:03Z
--- license: wtfpl --- Cat picture embedding for Stable Diffusion 2.0. Trained on high-quality Unsplash images, so it tends to prefer photorealism. Warning: the weights are quite strong, but when tamed it works great with stylistic embeddings, as in the last couple of images! Trained for 1500 steps; a 1000-step version is also included, which works decently and is a bit less strong. ![03171-4193474301-cute [kittyhelper] wearing a tuxedo, sharp focus, photohelper, fur, extremely detailed.png](https://s3.amazonaws.com/moonup/production/uploads/1670332296701-6312579fc7577b68d90a7646.png) ![03164-4193474294-cute [kittyhelper] wearing a tuxedo, sharp focus, photohelper, fur, extremely detailed.png](https://s3.amazonaws.com/moonup/production/uploads/1670332307002-6312579fc7577b68d90a7646.png) ![03121-2828626995-kittyhelper, sharp focus, fur, extremely detailed,kipaki.png](https://s3.amazonaws.com/moonup/production/uploads/1670332315259-6312579fc7577b68d90a7646.png) ![03131-3693647088-kittyhelper wearing a tuxedo, sharp focus, fur, extremely detailed.png](https://s3.amazonaws.com/moonup/production/uploads/1670332324414-6312579fc7577b68d90a7646.png) ![03188-2279795372-knollingcase, cute kittypic, sharp focus, fur, extremely detailed,.png](https://s3.amazonaws.com/moonup/production/uploads/1670332259284-6312579fc7577b68d90a7646.png) ![03190-1679759101-knollingcase, kipaki, cute kittypic, sharp focus, fur, extremely detailed,.png](https://s3.amazonaws.com/moonup/production/uploads/1670332271419-6312579fc7577b68d90a7646.png) ![03199-318487485-kipaki, (((kittypic))), sharp focus, fur, extremely detailed,.png](https://s3.amazonaws.com/moonup/production/uploads/1670332274191-6312579fc7577b68d90a7646.png)
bert-base-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,621,271
2022-12-06T13:28:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: Proximal Policy Optimisation (PPO) results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 290.88 +/- 17.28 name: mean_reward verified: false --- # **Proximal Policy Optimisation (PPO)** Agent playing **LunarLander-v2** This is a trained model of a **Proximal Policy Optimisation (PPO)** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
bert-base-chinese
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "zh", "arxiv:1810.04805", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,377,486
2022-12-06T13:30:14Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Gorenzelg/bert-finetuned-squad11 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Gorenzelg/bert-finetuned-squad11 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0664 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 55450, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.0664 | 0 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.10.1 - Datasets 2.6.1 - Tokenizers 0.11.0
bert-base-german-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "exbert", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
175,983
2022-12-06T13:32:44Z
--- language: ru datasets: - bond005/sberdevices_golos_10h_crowd - bond005/sberdevices_golos_100h_farfield - common_voice - bond005/sova_rudevices - bond005/rulibrispeech metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - common_voice - SberDevices/Golos - sova_rudevices - rulibrispeech license: apache-2.0 widget: - example_title: test sound with Russian speech src: https://huggingface.co/bond005/wav2vec2-mbart50-ru/resolve/main/test_sound.wav model-index: - name: Wav2Vec2-mBART-50 for speech-to-text in Russian by Ivan Bondarenko results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Sberdevices Golos (crowd) type: SberDevices/Golos args: ru metrics: - name: Test WER type: wer value: 13.204 - name: Test CER type: cer value: 4.157 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Sberdevices Golos (farfield) type: SberDevices/Golos args: ru metrics: - name: Test WER type: wer value: 17.681 - name: Test CER type: cer value: 6.773 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ru type: common_voice args: ru metrics: - name: Test WER type: wer value: 14.693 - name: Test CER type: cer value: 5.765 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Sova RuDevices type: sova_rudevices args: ru metrics: - name: Test WER type: wer value: 22.727 - name: Test CER type: cer value: 9.183 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Russian Librispeech type: rulibrispeech args: ru metrics: - name: Test WER type: wer value: 32.540 - name: Test CER type: cer value: 10.369 --- # Wav2Vec2-mBART-50-Ru Wav2Vec2-mBART-50-Ru is a speech-sequence-to-text-sequence model that converts input audio containing Russian speech into text with punctuation, capitalization, and so on. Wav2Vec2-mBART-50-Ru is a [SpeechEncoderDecoderModel](https://huggingface.co/docs/transformers/model_doc/speech-encoder-decoder), which was initialized with [Wav2Vec2-Large-Ru-Golos](https://huggingface.co/bond005/wav2vec2-large-ru-golos) as the encoder and [mBART-large-50](https://huggingface.co/facebook/mbart-large-50) as the decoder. After initialization, the model was fine-tuned using the training parts of several annotated speech corpora: - [the 10 hours crowd subset of SberDevices Golos](https://huggingface.co/datasets/bond005/sberdevices_golos_10h_crowd) - [the 100 hours farfield subset of SberDevices Golos](https://huggingface.co/datasets/bond005/sberdevices_golos_100h_farfield) - [the Russian subset of Common Voice 6.0](https://huggingface.co/datasets/common_voice) - [Sova RuDevices](https://huggingface.co/datasets/bond005/sova_rudevices) - a 15% part of the training subset of [Russian Librispeech](https://huggingface.co/datasets/bond005/rulibrispeech) CommonVoice 6.0 contains "rich" text annotations with punctuation and capitalization, but the other speech corpora include plain text only. Therefore, text annotations of these corpora were enriched automatically using the [Silero text enhancement model](https://github.com/snakers4/silero-models#text-enhancement). ## Usage When using this model, make sure that your speech input is sampled at 16kHz. 
You can use this model by writing your own inference script: ```python import os import warnings import torch from datasets import load_dataset from datasets.features import Audio from transformers import SpeechEncoderDecoderModel, Wav2Vec2Processor LANG_ID = "ru" MODEL_ID = "bond005/wav2vec2-mbart50-ru" SAMPLES = 30 num_processes = max(1, os.cpu_count()) processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = SpeechEncoderDecoderModel.from_pretrained(MODEL_ID) test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") if test_dataset.features['audio'].sampling_rate != 16_000: test_dataset = test_dataset.cast_column( 'audio', Audio(sampling_rate=16_000) ) audio_data = [test_dataset[i]['audio']['array'] for i in range(SAMPLES)] processed = processor(audio_data, sampling_rate=16_000, return_tensors="pt", padding='longest') with torch.no_grad(): predicted_ids = model.generate(**processed) predicted_sentences = processor.batch_decode( predicted_ids, num_processes=num_processes, skip_special_tokens=True ) with warnings.catch_warnings(): warnings.simplefilter("ignore") for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference: ", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` ```text ---------------------------------------------------------------------------------------------------- Reference: Я беру маленький кусочек бумажки. Prediction: Я беру маленькие кусочек бумажки. ---------------------------------------------------------------------------------------------------- Reference: О потерях пока не сообщается. Prediction: А потеря их пока не сообщается. ---------------------------------------------------------------------------------------------------- Reference: Ваша воля. Prediction: Ваша воля. ---------------------------------------------------------------------------------------------------- Reference: Мы высоко ценим ее роль в этом отношении. Prediction: Мы высоко ценим ее роль в этом отношении. ---------------------------------------------------------------------------------------------------- Reference: Вот это вызывало у нас жуткое отторжение. Prediction: Вот это вызвало у нас жуткое отвержение. ---------------------------------------------------------------------------------------------------- Reference: Он положил ей букет на книгу. Prediction: Он положил ее букет на книгу. ---------------------------------------------------------------------------------------------------- Reference: Ну и положу, – обиделась Женя. Prediction: – Ну и положи, – обиделась Женя. ---------------------------------------------------------------------------------------------------- Reference: Благодарю представителя Австралии за ее заявление. Prediction: Благодарю представителя Австралии за ее заявление. ---------------------------------------------------------------------------------------------------- Reference: Для меня это не было неожиданностью. Prediction: Для меня это не было неожиданностью. ---------------------------------------------------------------------------------------------------- Reference: Поздняя ночь. Prediction: Поздняя ночь. ---------------------------------------------------------------------------------------------------- Reference: Тем не менее нужно вновь вычленить некоторые элементы наших политических установок. Prediction: Тем не менее нужно назвать нищие нынешние элементы наших политических устоков. 
---------------------------------------------------------------------------------------------------- Reference: Мы не можем позволить себе упустить эту возможность. Prediction: Мы не можем позволить себе упустить эту возможность. ---------------------------------------------------------------------------------------------------- Reference: В предстоящие месяцы Суд примет решение по ордеру на арест министра обороны Хусейна. Prediction: В предстоящие месяцы Суд примет решение по оратору на орифлейм министра иностранных дел Кубы. ---------------------------------------------------------------------------------------------------- Reference: Валерия живет в старом панельном доме советских времён. Prediction: Валерия живет в старом Баньяном, да не советских временах. ---------------------------------------------------------------------------------------------------- Reference: Я вернусь скоро. Prediction: Я вернусь скоро... ---------------------------------------------------------------------------------------------------- Reference: Слово предоставляется Его Превосходительству принцу Зайду. Prediction: Слово предоставляется Его Превосходительству Пан Ги Муну. ---------------------------------------------------------------------------------------------------- Reference: Ну конечно, тебе бы этого хотелось. Prediction: Ну, конечно, тебе бы этого хотелось. ---------------------------------------------------------------------------------------------------- Reference: Общественные объединения равны перед законом. Prediction: Общественные объединения равны перед законом. ---------------------------------------------------------------------------------------------------- Reference: Ну, что же, нету этики, эстетики. Prediction: Ну что же, не туда зайти? Не туда зайти? ---------------------------------------------------------------------------------------------------- Reference: Сразу же она легла в постель. Prediction: Сразу же она легла в постель. ---------------------------------------------------------------------------------------------------- Reference: Сейчас я сделаю заявление в своем национальном качестве. Prediction: Сейчас я сделаю заявление в своем национальном качестве. ---------------------------------------------------------------------------------------------------- Reference: Что там сейчас происходит в Твиттере? Prediction: Что там сейчас происходит в Твиттере? ---------------------------------------------------------------------------------------------------- Reference: Ну хорошо, что револьвер был заряжен холостыми. Prediction: Ну хорошо, что Револьвер был заряжен холостыми. ---------------------------------------------------------------------------------------------------- Reference: А потом дальше может проходить работа такая. Prediction: А потом дальше может проходить работа такая. ---------------------------------------------------------------------------------------------------- Reference: Из Microsoft написали что на текущий момент у них нет открытых вакансий. Prediction: Из моих красотов написали, что на текущий момент у них нет открытых вакансий. ---------------------------------------------------------------------------------------------------- Reference: Мы добились многого, но сейчас не время терять набранную динамику. Prediction: Мы добились многого, но сейчас не время терять набранную динамику. 
---------------------------------------------------------------------------------------------------- Reference: Мы внимательно проанализировали документ и содержащиеся в нем выводы и рекомендации. Prediction: Мы внимательно проанализировали документ, содержащийся в нем, выводы рекомендаций. ---------------------------------------------------------------------------------------------------- Reference: А сейчас слово имеет представитель Соединенных Штатов Америки. Prediction: А сейчас слово имеет представитель Соединенных Штатов Америки. ---------------------------------------------------------------------------------------------------- Reference: Обстоятельства изменились, и мы должны учитывать это. Prediction: Обстоятельно изменились и мы должны учитывать это. ---------------------------------------------------------------------------------------------------- Reference: На этом принципе основывается и наша позиция по Фолклендским островам. Prediction: На этом принципе основывается и наша позиция по Фолклендским островам. ``` The Google Colab version of [this script](https://colab.research.google.com/drive/1VlTrsc9d9wyzLPAWagpXLzoDLn2PRvZA?usp=sharing) is available too. ## Evaluation This model was evaluated on the test subsets of [SberDevices Golos](https://huggingface.co/datasets/SberDevices/Golos), [Common Voice 6.0](https://huggingface.co/datasets/common_voice) (Russian part), and [Sova RuDevices](https://huggingface.co/datasets/bond005/sova_rudevices). The evaluation script [wav2vec2_mbart50_ru_eval](https://www.kaggle.com/code/bond005/wav2vec2-mbart50-ru-eval) is available for checking and reproducibility. ## Citation If you want to cite this model you can use this: ```bibtex @misc{bondarenko2023-wav2vec2-mbart50-ru, title={Wav2Vec2-mBART-50 for speech-to-text in Russian by Ivan Bondarenko}, author={Bondarenko, Ivan}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/bond005/wav2vec2-mbart50-ru}}, year={2023} } ```
bert-base-german-dbmdz-cased
[ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,814
2022-12-06T13:36:21Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion widget: - text: "a photo of dpkbjwn and mnlrvr at a christmas market" --- ### Deepika and Manuel Simulator classifiers: "dpkbjwn" for Deepika "mnlrvr" for Manuel Example prompt: a photo of dpkbjwn and mnlrvr at a christmas market Sample pictures: ![0](https://huggingface.co/ManuelRivoir/dpkbjwn-and-mnlrvr/resolve/main/sample_images/DeepikaAndManuel28.png) ![1](https://huggingface.co/ManuelRivoir/dpkbjwn-and-mnlrvr/resolve/main/sample_images/DeepikaAndManuel16.jpg) ![2](https://huggingface.co/ManuelRivoir/dpkbjwn-and-mnlrvr/resolve/main/sample_images/DeepikaAndManuel44.png) ![3](https://huggingface.co/ManuelRivoir/dpkbjwn-and-mnlrvr/resolve/main/sample_images/DeepikaAndManuel45.png) ![4](https://huggingface.co/ManuelRivoir/dpkbjwn-and-mnlrvr/resolve/main/sample_images/DeepikaAndManuel31.png) ![5](https://huggingface.co/ManuelRivoir/dpkbjwn-and-mnlrvr/resolve/main/sample_images/DeepikaAndManuel33.png) ![6](https://huggingface.co/ManuelRivoir/dpkbjwn-and-mnlrvr/resolve/main/sample_images/DeepikaAndManuel12.png)
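A hedged generation sketch follows. It assumes the repository referenced by the sample-image URLs (`ManuelRivoir/dpkbjwn-and-mnlrvr`) ships standard diffusers-format weights; the sampler settings are illustrative defaults rather than settings documented by the author.

```python
# Hedged sketch: generate with the DreamBooth concept via diffusers.
# Assumes diffusers-format weights; steps/guidance are illustrative defaults.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ManuelRivoir/dpkbjwn-and-mnlrvr", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of dpkbjwn and mnlrvr at a christmas market"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dpkbjwn_mnlrvr.png")
```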
bert-base-multilingual-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4,749,504
2022-12-06T13:37:38Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned-im-rahmen-der-rechtlichen-und-ethischen-bestimmungen-arbeiten results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-im-rahmen-der-rechtlichen-und-ethischen-bestimmungen-arbeiten This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2919 - Accuracy: 0.8970 - F1: 0.8843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2946 | 1.0 | 1365 | 0.2791 | 0.8992 | 0.8829 | | 0.2204 | 2.0 | 2730 | 0.2919 | 0.8970 | 0.8843 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.13.2
bert-base-multilingual-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
328,585
2022-12-06T13:42:40Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-Test results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.902 - name: F1 type: f1 value: 0.9037328094302554 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-Test This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2793 - Accuracy: 0.902 - F1: 0.9037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
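A hedged usage sketch; the model id is a placeholder for wherever this checkpoint is actually hosted.

```python
# Hedged sketch: score movie reviews with the IMDB sentiment fine-tune.
# The model id is a placeholder for the real repository.
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="<namespace>/finetuning-sentiment-model-Test")

reviews = [
    "A moving film with terrific performances.",
    "Two hours of my life I will never get back.",
]
print(classifier(reviews))
```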
bert-base-uncased
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
59,663,489
2022-12-06T13:42:46Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - top_v2 model-index: - name: t5-base-pointer-top_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-pointer-top_v2 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the top_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.0256 - Exact Match: 0.8517 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 128 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | |:-------------:|:-----:|:----:|:---------------:|:-----------:| | 1.4545 | 0.82 | 200 | 0.2542 | 0.1294 | | 0.1878 | 1.65 | 400 | 0.0668 | 0.2128 | | 0.0796 | 2.47 | 600 | 0.0466 | 0.2276 | | 0.0536 | 3.29 | 800 | 0.0356 | 0.2309 | | 0.0424 | 4.12 | 1000 | 0.0317 | 0.2328 | | 0.0356 | 4.94 | 1200 | 0.0295 | 0.2340 | | 0.0306 | 5.76 | 1400 | 0.0288 | 0.2357 | | 0.0277 | 6.58 | 1600 | 0.0271 | 0.2351 | | 0.0243 | 7.41 | 1800 | 0.0272 | 0.2351 | | 0.0225 | 8.23 | 2000 | 0.0272 | 0.2353 | | 0.0206 | 9.05 | 2200 | 0.0267 | 0.2368 | | 0.0187 | 9.88 | 2400 | 0.0260 | 0.2367 | | 0.0173 | 10.7 | 2600 | 0.0256 | 0.2383 | | 0.0161 | 11.52 | 2800 | 0.0260 | 0.2383 | | 0.0153 | 12.35 | 3000 | 0.0257 | 0.2377 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
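A hedged inference sketch, assuming the checkpoint loads with the standard `Auto` seq2seq classes; the model id and the example utterance are placeholders, and the printed parse is whatever the model generates.

```python
# Hedged sketch: generate a TOPv2-style parse with the fine-tuned seq2seq model.
# Model id and utterance are placeholders; assumes standard Auto-class loading.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "<namespace>/t5-base-pointer-top_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("wake me up at 8 am tomorrow", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```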
bert-large-uncased-whole-word-masking
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
76,685
2022-12-06T13:49:42Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: Bert-test-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert-test-model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.3708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 1.7369 | | 2.2639 | 2.0 | 500 | 1.3940 | | 2.2639 | 3.0 | 750 | 1.3708 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
bert-large-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,058,496
2022-12-06T13:51:46Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: idrak_wav2vec_timit_subsample results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idrak_wav2vec_timit_subsample This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
camembert-base
[ "pytorch", "tf", "safetensors", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1911.03894", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "CamembertForMaskedLM" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,440,898
2022-12-06T13:59:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
distilbert-base-cased-distilled-squad
[ "pytorch", "tf", "rust", "safetensors", "openvino", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "model-index", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
257,745
null
Access to model syndikatet/kaia is restricted and you are not in the authorized list. Visit https://huggingface.co/syndikatet/kaia to ask for access.
distilbert-base-multilingual-cased
[ "pytorch", "tf", "onnx", "safetensors", "distilbert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,339,633
2022-12-06T14:05:41Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - cstop_artificial model-index: - name: t5-base-pointer-cstop_artificial results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-pointer-cstop_artificial This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the cstop_artificial dataset. It achieves the following results on the evaluation set: - Loss: 0.0776 - Exact Match: 0.7746 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | |:-------------:|:------:|:----:|:---------------:|:-----------:| | 1.7482 | 28.5 | 200 | 0.2505 | 0.1020 | | 0.1366 | 57.13 | 400 | 0.0776 | 0.3238 | | 0.0275 | 85.63 | 600 | 0.0881 | 0.3381 | | 0.0114 | 114.25 | 800 | 0.0990 | 0.3399 | | 0.0064 | 142.75 | 1000 | 0.1120 | 0.3417 | | 0.0045 | 171.38 | 1200 | 0.1081 | 0.3435 | | 0.0036 | 199.88 | 1400 | 0.1230 | 0.3435 | | 0.0025 | 228.5 | 1600 | 0.1211 | 0.3399 | | 0.002 | 257.13 | 1800 | 0.1367 | 0.3399 | | 0.0016 | 285.63 | 2000 | 0.1324 | 0.3435 | | 0.0013 | 314.25 | 2200 | 0.1340 | 0.3470 | | 0.001 | 342.75 | 2400 | 0.1374 | 0.3435 | | 0.0009 | 371.38 | 2600 | 0.1384 | 0.3417 | | 0.0007 | 399.88 | 2800 | 0.1422 | 0.3435 | | 0.0006 | 428.5 | 3000 | 0.1452 | 0.3417 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
Akshay-Vs/AI
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model rybread01/email-ds-bert is restricted and you are not in the authorized list. Visit https://huggingface.co/rybread01/email-ds-bert to ask for access.
ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
93
2022-12-06T19:14:49Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - paws-x metrics: - accuracy model-index: - name: paws_x_m_bert_only_ko results: - task: name: Text Classification type: text-classification dataset: name: paws-x type: paws-x config: ko split: train args: ko metrics: - name: Accuracy type: accuracy value: 0.8215 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paws_x_m_bert_only_ko This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the paws-x dataset. It achieves the following results on the evaluation set: - Loss: 0.7649 - Accuracy: 0.8215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5446 | 1.0 | 386 | 0.4837 | 0.768 | | 0.3443 | 2.0 | 772 | 0.4530 | 0.8125 | | 0.258 | 3.0 | 1158 | 0.4496 | 0.8145 | | 0.2023 | 4.0 | 1544 | 0.4944 | 0.81 | | 0.1581 | 5.0 | 1930 | 0.5040 | 0.814 | | 0.1263 | 6.0 | 2316 | 0.5937 | 0.8145 | | 0.1041 | 7.0 | 2702 | 0.6578 | 0.8115 | | 0.0828 | 8.0 | 3088 | 0.6841 | 0.8215 | | 0.0697 | 9.0 | 3474 | 0.7239 | 0.82 | | 0.0596 | 10.0 | 3860 | 0.7649 | 0.8215 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.1
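A hedged sketch of scoring a Korean sentence pair for paraphrase; the model id is a placeholder, and the assumption that label index 1 means "paraphrase" follows the usual PAWS-X convention rather than anything stated in this card.

```python
# Hedged sketch: paraphrase scoring on a Korean sentence pair.
# Model id is a placeholder; label order assumed to follow PAWS-X (1 = paraphrase).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "<namespace>/paws_x_m_bert_only_ko"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

enc = tokenizer(
    "그는 어제 서울에서 회의에 참석했다.",
    "어제 그는 서울에서 열린 회의에 참석했다.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
print(probs)
```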
ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
39
null
--- language: - mn license: apache-2.0 tags: - whisper-event - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 - google/fleurs - bayartsogt/ulaanbal-v0 metrics: - wer model-index: - name: whisper-medium-mn-5 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: mn split: test metrics: - name: Wer type: wer value: 24.7268953462967 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-mn-4 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3396 - Wer: 24.7268 - Cer: 8.6712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 12000 - mixed_precision_training: Native AMP ### Training results ``` {'eval_loss': 0.3396347761154175, 'eval_wer': 24.7268953462967, 'eval_cer': 8.671234994074913, 'eval_runtime': 2202.1539, 'eval_samples_per_second': 0.856, 'eval_steps_per_second': 0.027, 'epoch': 7.3} ``` ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
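A hedged transcription sketch; the model id is a placeholder (the exact repository is not stated here), and any 16 kHz mono audio file can be substituted.

```python
# Hedged sketch: Mongolian speech-to-text with the fine-tuned Whisper checkpoint.
# Model id and audio path are placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<namespace>/whisper-medium-mn-5",
    chunk_length_s=30,
)
print(asr("sample_16khz.wav")["text"])
```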
Aeroxas/Botroxas-small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.25 +/- 21.61 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AetherIT/DialoGPT-small-Hal
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-teacher-base-student-en-asr-timit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-teacher-base-student-en-asr-timit This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 73.5882 - Wer: 0.3422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 920.6083 | 3.17 | 200 | 1256.0675 | 1.0 | | 660.5993 | 6.35 | 400 | 717.6098 | 0.9238 | | 336.5288 | 9.52 | 600 | 202.0025 | 0.5306 | | 131.3178 | 12.7 | 800 | 108.0701 | 0.4335 | | 73.4232 | 15.87 | 1000 | 90.2797 | 0.3728 | | 54.9439 | 19.05 | 1200 | 76.9043 | 0.3636 | | 44.6595 | 22.22 | 1400 | 79.2443 | 0.3550 | | 38.6381 | 25.4 | 1600 | 73.6277 | 0.3493 | | 35.074 | 28.57 | 1800 | 73.5882 | 0.3422 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 1.18.3 - Tokenizers 0.13.2
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_10
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: krirk-finetuned-Helsinki-NLP_opus-mt-ar-en
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# krirk-finetuned-Helsinki-NLP_opus-mt-ar-en

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3665
- Bleu: 35.0219

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.4469        | 1.0   | 32   | 1.3744          | 34.9616 |
| 1.2938        | 2.0   | 64   | 1.3674          | 34.9145 |
| 1.2582        | 3.0   | 96   | 1.3665          | 35.0219 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
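No usage example is included; the sketch below shows the standard Transformers pattern for a Marian-based Arabic-to-English checkpoint such as this one. The repository id is a placeholder, since the card does not state where the fine-tuned weights were pushed; the base model Helsinki-NLP/opus-mt-ar-en can be substituted to try the same code.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder repo id -- substitute the actual fine-tuned repository.
model_id = "<namespace>/krirk-finetuned-Helsinki-NLP_opus-mt-ar-en"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

arabic_text = "مرحبا بالعالم"  # "Hello, world"
inputs = tokenizer(arabic_text, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```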
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_30-epoch_30
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-butterflies-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.

## Intended uses & limitations

#### How to use

```python
from diffusers import DDPMPipeline

model_id = "hjjeon/ddpm-butterflies-128"

# load model and scheduler
pipeline = DDPMPipeline.from_pretrained(model_id)

# run pipeline in inference
image = pipeline()["sample"]

# save image
image[0].save("butterfly.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/hjjeon/ddpm-butterflies-128/tensorboard?#scalars)
Ahmedahmed/Wewe
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language:
- vi
---

## Introduction

This model was initialized from [vinai/bartpho-word-base](https://huggingface.co/vinai/bartpho-word-base) and converted to [Allenai's Longformer Encoder-Decoder (LED)](https://github.com/allenai/longformer#longformer) based on [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf).

To be able to process 16K tokens, *bartpho-word-base*'s position embedding matrix was simply copied 16 times (see the illustrative sketch at the end of this card). This model is especially interesting for long-range summarization and question answering.

## Fine-tuning for downstream tasks

[This notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) shows how the LED model can effectively be fine-tuned on a downstream task.
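For illustration, here is a minimal sketch of the position-embedding tiling described above. It shows only that step, not the full LED conversion (which also swaps in Longformer attention). The attribute path `model.encoder.embed_positions` is an assumption based on the MBart-style architecture BARTpho uses, so treat this as a sketch rather than the exact conversion script.

```python
import torch
from transformers import AutoModel

# Load the short-context base model (~1K positions).
model = AutoModel.from_pretrained("vinai/bartpho-word-base")

# Assumed attribute path for the encoder's learned position embeddings
# (MBart-style models expose them as `embed_positions`).
old_pos = model.encoder.embed_positions.weight.data  # shape: (max_positions + offset, hidden)

# Tile the matrix 16 times along the position axis so the model can
# address roughly 16x longer sequences, as described in the card.
new_pos = old_pos.repeat(16, 1)
print(old_pos.shape, "->", new_pos.shape)

# In a real conversion, `new_pos` would be copied into a resized embedding
# layer and `max_position_embeddings` updated in the model config.
```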
Akash7897/test-clm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model DocPIXL/DOCPICL is restricted and you are not in the authorized list. Visit https://huggingface.co/DocPIXL/DOCPICL to ask for access.
Akashpb13/Hausa_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ha", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index", "has_space" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- tags: - mteb model-index: - name: e5-small results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.22388059701493 - type: ap value: 40.27466219523129 - type: f1 value: 70.60533006025108 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 87.525775 - type: ap value: 83.51063993897611 - type: f1 value: 87.49342736805572 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 42.611999999999995 - type: f1 value: 42.05088045932892 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 23.826 - type: map_at_10 value: 38.269 - type: map_at_100 value: 39.322 - type: map_at_1000 value: 39.344 - type: map_at_3 value: 33.428000000000004 - type: map_at_5 value: 36.063 - type: mrr_at_1 value: 24.253 - type: mrr_at_10 value: 38.425 - type: mrr_at_100 value: 39.478 - type: mrr_at_1000 value: 39.5 - type: mrr_at_3 value: 33.606 - type: mrr_at_5 value: 36.195 - type: ndcg_at_1 value: 23.826 - type: ndcg_at_10 value: 46.693 - type: ndcg_at_100 value: 51.469 - type: ndcg_at_1000 value: 52.002 - type: ndcg_at_3 value: 36.603 - type: ndcg_at_5 value: 41.365 - type: precision_at_1 value: 23.826 - type: precision_at_10 value: 7.383000000000001 - type: precision_at_100 value: 0.9530000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 15.268 - type: precision_at_5 value: 11.479000000000001 - type: recall_at_1 value: 23.826 - type: recall_at_10 value: 73.82600000000001 - type: recall_at_100 value: 95.306 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 45.804 - type: recall_at_5 value: 57.397 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.13995374767436 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 37.13950072624313 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.35843292105327 - type: mrr value: 73.72312359846987 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.55140418324174 - type: cos_sim_spearman value: 84.21637675860022 - type: euclidean_pearson value: 81.26069614610006 - type: euclidean_spearman value: 83.25069210421785 - type: manhattan_pearson value: 80.17441422581014 - type: manhattan_spearman value: 81.87596198487877 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.87337662337661 
- type: f1 value: 81.76647866926402 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.80600542614507 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 31.86321613256603 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.054 - type: map_at_10 value: 40.699999999999996 - type: map_at_100 value: 41.818 - type: map_at_1000 value: 41.959999999999994 - type: map_at_3 value: 37.742 - type: map_at_5 value: 39.427 - type: mrr_at_1 value: 38.769999999999996 - type: mrr_at_10 value: 46.150000000000006 - type: mrr_at_100 value: 46.865 - type: mrr_at_1000 value: 46.925 - type: mrr_at_3 value: 43.705 - type: mrr_at_5 value: 45.214999999999996 - type: ndcg_at_1 value: 38.769999999999996 - type: ndcg_at_10 value: 45.778 - type: ndcg_at_100 value: 50.38 - type: ndcg_at_1000 value: 52.922999999999995 - type: ndcg_at_3 value: 41.597 - type: ndcg_at_5 value: 43.631 - type: precision_at_1 value: 38.769999999999996 - type: precision_at_10 value: 8.269 - type: precision_at_100 value: 1.278 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 19.266 - type: precision_at_5 value: 13.705 - type: recall_at_1 value: 32.054 - type: recall_at_10 value: 54.947 - type: recall_at_100 value: 74.79599999999999 - type: recall_at_1000 value: 91.40899999999999 - type: recall_at_3 value: 42.431000000000004 - type: recall_at_5 value: 48.519 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.035 - type: map_at_10 value: 38.007000000000005 - type: map_at_100 value: 39.125 - type: map_at_1000 value: 39.251999999999995 - type: map_at_3 value: 35.77 - type: map_at_5 value: 37.057 - type: mrr_at_1 value: 36.497 - type: mrr_at_10 value: 44.077 - type: mrr_at_100 value: 44.743 - type: mrr_at_1000 value: 44.79 - type: mrr_at_3 value: 42.123 - type: mrr_at_5 value: 43.308 - type: ndcg_at_1 value: 36.497 - type: ndcg_at_10 value: 42.986000000000004 - type: ndcg_at_100 value: 47.323 - type: ndcg_at_1000 value: 49.624 - type: ndcg_at_3 value: 39.805 - type: ndcg_at_5 value: 41.286 - type: precision_at_1 value: 36.497 - type: precision_at_10 value: 7.8340000000000005 - type: precision_at_100 value: 1.269 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 19.023 - type: precision_at_5 value: 13.248 - type: recall_at_1 value: 29.035 - type: recall_at_10 value: 51.06 - type: recall_at_100 value: 69.64099999999999 - type: recall_at_1000 value: 84.49 - type: recall_at_3 value: 41.333999999999996 - type: recall_at_5 value: 45.663 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 37.239 - type: map_at_10 value: 47.873 - type: map_at_100 value: 48.842999999999996 - type: map_at_1000 value: 48.913000000000004 - type: map_at_3 value: 45.050000000000004 - type: map_at_5 value: 46.498 - type: mrr_at_1 value: 42.508 - type: mrr_at_10 value: 51.44 - type: mrr_at_100 value: 52.087 - type: mrr_at_1000 value: 52.129999999999995 - type: mrr_at_3 
value: 49.164 - type: mrr_at_5 value: 50.343 - type: ndcg_at_1 value: 42.508 - type: ndcg_at_10 value: 53.31399999999999 - type: ndcg_at_100 value: 57.245000000000005 - type: ndcg_at_1000 value: 58.794000000000004 - type: ndcg_at_3 value: 48.295 - type: ndcg_at_5 value: 50.415 - type: precision_at_1 value: 42.508 - type: precision_at_10 value: 8.458 - type: precision_at_100 value: 1.133 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 21.191 - type: precision_at_5 value: 14.307 - type: recall_at_1 value: 37.239 - type: recall_at_10 value: 65.99000000000001 - type: recall_at_100 value: 82.99499999999999 - type: recall_at_1000 value: 94.128 - type: recall_at_3 value: 52.382 - type: recall_at_5 value: 57.648999999999994 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.039 - type: map_at_10 value: 29.694 - type: map_at_100 value: 30.587999999999997 - type: map_at_1000 value: 30.692999999999998 - type: map_at_3 value: 27.708 - type: map_at_5 value: 28.774 - type: mrr_at_1 value: 24.633 - type: mrr_at_10 value: 31.478 - type: mrr_at_100 value: 32.299 - type: mrr_at_1000 value: 32.381 - type: mrr_at_3 value: 29.435 - type: mrr_at_5 value: 30.446 - type: ndcg_at_1 value: 24.633 - type: ndcg_at_10 value: 33.697 - type: ndcg_at_100 value: 38.080000000000005 - type: ndcg_at_1000 value: 40.812 - type: ndcg_at_3 value: 29.654000000000003 - type: ndcg_at_5 value: 31.474000000000004 - type: precision_at_1 value: 24.633 - type: precision_at_10 value: 5.0729999999999995 - type: precision_at_100 value: 0.753 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 12.279 - type: precision_at_5 value: 8.452 - type: recall_at_1 value: 23.039 - type: recall_at_10 value: 44.275999999999996 - type: recall_at_100 value: 64.4 - type: recall_at_1000 value: 85.135 - type: recall_at_3 value: 33.394 - type: recall_at_5 value: 37.687 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 13.594999999999999 - type: map_at_10 value: 19.933999999999997 - type: map_at_100 value: 20.966 - type: map_at_1000 value: 21.087 - type: map_at_3 value: 17.749000000000002 - type: map_at_5 value: 19.156000000000002 - type: mrr_at_1 value: 17.662 - type: mrr_at_10 value: 24.407 - type: mrr_at_100 value: 25.385 - type: mrr_at_1000 value: 25.465 - type: mrr_at_3 value: 22.056 - type: mrr_at_5 value: 23.630000000000003 - type: ndcg_at_1 value: 17.662 - type: ndcg_at_10 value: 24.391 - type: ndcg_at_100 value: 29.681 - type: ndcg_at_1000 value: 32.923 - type: ndcg_at_3 value: 20.271 - type: ndcg_at_5 value: 22.621 - type: precision_at_1 value: 17.662 - type: precision_at_10 value: 4.44 - type: precision_at_100 value: 0.8200000000000001 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 9.577 - type: precision_at_5 value: 7.313 - type: recall_at_1 value: 13.594999999999999 - type: recall_at_10 value: 33.976 - type: recall_at_100 value: 57.43000000000001 - type: recall_at_1000 value: 80.958 - type: recall_at_3 value: 22.897000000000002 - type: recall_at_5 value: 28.714000000000002 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.683 - type: map_at_10 value: 35.068 - type: map_at_100 value: 36.311 - type: map_at_1000 
value: 36.436 - type: map_at_3 value: 32.371 - type: map_at_5 value: 33.761 - type: mrr_at_1 value: 32.435 - type: mrr_at_10 value: 40.721000000000004 - type: mrr_at_100 value: 41.535 - type: mrr_at_1000 value: 41.593 - type: mrr_at_3 value: 38.401999999999994 - type: mrr_at_5 value: 39.567 - type: ndcg_at_1 value: 32.435 - type: ndcg_at_10 value: 40.538000000000004 - type: ndcg_at_100 value: 45.963 - type: ndcg_at_1000 value: 48.400999999999996 - type: ndcg_at_3 value: 36.048 - type: ndcg_at_5 value: 37.899 - type: precision_at_1 value: 32.435 - type: precision_at_10 value: 7.1129999999999995 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 16.683 - type: precision_at_5 value: 11.684 - type: recall_at_1 value: 26.683 - type: recall_at_10 value: 51.517 - type: recall_at_100 value: 74.553 - type: recall_at_1000 value: 90.649 - type: recall_at_3 value: 38.495000000000005 - type: recall_at_5 value: 43.495 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.186 - type: map_at_10 value: 31.972 - type: map_at_100 value: 33.117000000000004 - type: map_at_1000 value: 33.243 - type: map_at_3 value: 29.423 - type: map_at_5 value: 30.847 - type: mrr_at_1 value: 29.794999999999998 - type: mrr_at_10 value: 36.767 - type: mrr_at_100 value: 37.645 - type: mrr_at_1000 value: 37.716 - type: mrr_at_3 value: 34.513 - type: mrr_at_5 value: 35.791000000000004 - type: ndcg_at_1 value: 29.794999999999998 - type: ndcg_at_10 value: 36.786 - type: ndcg_at_100 value: 41.94 - type: ndcg_at_1000 value: 44.830999999999996 - type: ndcg_at_3 value: 32.504 - type: ndcg_at_5 value: 34.404 - type: precision_at_1 value: 29.794999999999998 - type: precision_at_10 value: 6.518 - type: precision_at_100 value: 1.0659999999999998 - type: precision_at_1000 value: 0.149 - type: precision_at_3 value: 15.296999999999999 - type: precision_at_5 value: 10.731 - type: recall_at_1 value: 24.186 - type: recall_at_10 value: 46.617 - type: recall_at_100 value: 68.75 - type: recall_at_1000 value: 88.864 - type: recall_at_3 value: 34.199 - type: recall_at_5 value: 39.462 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.22083333333333 - type: map_at_10 value: 31.606666666666662 - type: map_at_100 value: 32.6195 - type: map_at_1000 value: 32.739999999999995 - type: map_at_3 value: 29.37825 - type: map_at_5 value: 30.596083333333336 - type: mrr_at_1 value: 28.607916666666668 - type: mrr_at_10 value: 35.54591666666666 - type: mrr_at_100 value: 36.33683333333333 - type: mrr_at_1000 value: 36.40624999999999 - type: mrr_at_3 value: 33.526250000000005 - type: mrr_at_5 value: 34.6605 - type: ndcg_at_1 value: 28.607916666666668 - type: ndcg_at_10 value: 36.07966666666667 - type: ndcg_at_100 value: 40.73308333333333 - type: ndcg_at_1000 value: 43.40666666666666 - type: ndcg_at_3 value: 32.23525 - type: ndcg_at_5 value: 33.97083333333333 - type: precision_at_1 value: 28.607916666666668 - type: precision_at_10 value: 6.120333333333335 - type: precision_at_100 value: 0.9921666666666668 - type: precision_at_1000 value: 0.14091666666666666 - type: precision_at_3 value: 14.54975 - type: precision_at_5 value: 10.153166666666667 - type: recall_at_1 value: 24.22083333333333 - type: recall_at_10 value: 45.49183333333334 - type: recall_at_100 value: 66.28133333333332 - 
type: recall_at_1000 value: 85.16541666666667 - type: recall_at_3 value: 34.6485 - type: recall_at_5 value: 39.229749999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.842 - type: map_at_10 value: 27.573999999999998 - type: map_at_100 value: 28.410999999999998 - type: map_at_1000 value: 28.502 - type: map_at_3 value: 25.921 - type: map_at_5 value: 26.888 - type: mrr_at_1 value: 24.08 - type: mrr_at_10 value: 29.915999999999997 - type: mrr_at_100 value: 30.669 - type: mrr_at_1000 value: 30.746000000000002 - type: mrr_at_3 value: 28.349000000000004 - type: mrr_at_5 value: 29.246 - type: ndcg_at_1 value: 24.08 - type: ndcg_at_10 value: 30.898999999999997 - type: ndcg_at_100 value: 35.272999999999996 - type: ndcg_at_1000 value: 37.679 - type: ndcg_at_3 value: 27.881 - type: ndcg_at_5 value: 29.432000000000002 - type: precision_at_1 value: 24.08 - type: precision_at_10 value: 4.678 - type: precision_at_100 value: 0.744 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 11.860999999999999 - type: precision_at_5 value: 8.16 - type: recall_at_1 value: 21.842 - type: recall_at_10 value: 38.66 - type: recall_at_100 value: 59.169000000000004 - type: recall_at_1000 value: 76.887 - type: recall_at_3 value: 30.532999999999998 - type: recall_at_5 value: 34.354 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.145 - type: map_at_10 value: 22.729 - type: map_at_100 value: 23.574 - type: map_at_1000 value: 23.695 - type: map_at_3 value: 21.044 - type: map_at_5 value: 21.981 - type: mrr_at_1 value: 20.888 - type: mrr_at_10 value: 26.529000000000003 - type: mrr_at_100 value: 27.308 - type: mrr_at_1000 value: 27.389000000000003 - type: mrr_at_3 value: 24.868000000000002 - type: mrr_at_5 value: 25.825 - type: ndcg_at_1 value: 20.888 - type: ndcg_at_10 value: 26.457000000000004 - type: ndcg_at_100 value: 30.764000000000003 - type: ndcg_at_1000 value: 33.825 - type: ndcg_at_3 value: 23.483999999999998 - type: ndcg_at_5 value: 24.836 - type: precision_at_1 value: 20.888 - type: precision_at_10 value: 4.58 - type: precision_at_100 value: 0.784 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 10.874 - type: precision_at_5 value: 7.639 - type: recall_at_1 value: 17.145 - type: recall_at_10 value: 33.938 - type: recall_at_100 value: 53.672 - type: recall_at_1000 value: 76.023 - type: recall_at_3 value: 25.363000000000003 - type: recall_at_5 value: 29.023 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.275 - type: map_at_10 value: 30.438 - type: map_at_100 value: 31.489 - type: map_at_1000 value: 31.601000000000003 - type: map_at_3 value: 28.647 - type: map_at_5 value: 29.660999999999998 - type: mrr_at_1 value: 28.077999999999996 - type: mrr_at_10 value: 34.098 - type: mrr_at_100 value: 35.025 - type: mrr_at_1000 value: 35.109 - type: mrr_at_3 value: 32.4 - type: mrr_at_5 value: 33.379999999999995 - type: ndcg_at_1 value: 28.077999999999996 - type: ndcg_at_10 value: 34.271 - type: ndcg_at_100 value: 39.352 - type: ndcg_at_1000 value: 42.199 - type: ndcg_at_3 value: 30.978 - type: ndcg_at_5 value: 32.498 - type: precision_at_1 value: 28.077999999999996 - type: precision_at_10 value: 5.345 - type: 
precision_at_100 value: 0.897 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 13.526 - type: precision_at_5 value: 9.16 - type: recall_at_1 value: 24.275 - type: recall_at_10 value: 42.362 - type: recall_at_100 value: 64.461 - type: recall_at_1000 value: 84.981 - type: recall_at_3 value: 33.249 - type: recall_at_5 value: 37.214999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.358 - type: map_at_10 value: 30.062 - type: map_at_100 value: 31.189 - type: map_at_1000 value: 31.386999999999997 - type: map_at_3 value: 27.672 - type: map_at_5 value: 28.76 - type: mrr_at_1 value: 26.877000000000002 - type: mrr_at_10 value: 33.948 - type: mrr_at_100 value: 34.746 - type: mrr_at_1000 value: 34.816 - type: mrr_at_3 value: 31.884 - type: mrr_at_5 value: 33.001000000000005 - type: ndcg_at_1 value: 26.877000000000002 - type: ndcg_at_10 value: 34.977000000000004 - type: ndcg_at_100 value: 39.753 - type: ndcg_at_1000 value: 42.866 - type: ndcg_at_3 value: 30.956 - type: ndcg_at_5 value: 32.381 - type: precision_at_1 value: 26.877000000000002 - type: precision_at_10 value: 6.7 - type: precision_at_100 value: 1.287 - type: precision_at_1000 value: 0.215 - type: precision_at_3 value: 14.360999999999999 - type: precision_at_5 value: 10.119 - type: recall_at_1 value: 22.358 - type: recall_at_10 value: 44.183 - type: recall_at_100 value: 67.14 - type: recall_at_1000 value: 87.53999999999999 - type: recall_at_3 value: 32.79 - type: recall_at_5 value: 36.829 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.198999999999998 - type: map_at_10 value: 25.229000000000003 - type: map_at_100 value: 26.003 - type: map_at_1000 value: 26.111 - type: map_at_3 value: 23.442 - type: map_at_5 value: 24.343 - type: mrr_at_1 value: 21.072 - type: mrr_at_10 value: 27.02 - type: mrr_at_100 value: 27.735 - type: mrr_at_1000 value: 27.815 - type: mrr_at_3 value: 25.416 - type: mrr_at_5 value: 26.173999999999996 - type: ndcg_at_1 value: 21.072 - type: ndcg_at_10 value: 28.862 - type: ndcg_at_100 value: 33.043 - type: ndcg_at_1000 value: 36.003 - type: ndcg_at_3 value: 25.35 - type: ndcg_at_5 value: 26.773000000000003 - type: precision_at_1 value: 21.072 - type: precision_at_10 value: 4.436 - type: precision_at_100 value: 0.713 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 10.659 - type: precision_at_5 value: 7.32 - type: recall_at_1 value: 19.198999999999998 - type: recall_at_10 value: 38.376 - type: recall_at_100 value: 58.36900000000001 - type: recall_at_1000 value: 80.92099999999999 - type: recall_at_3 value: 28.715000000000003 - type: recall_at_5 value: 32.147 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 5.9319999999999995 - type: map_at_10 value: 10.483 - type: map_at_100 value: 11.97 - type: map_at_1000 value: 12.171999999999999 - type: map_at_3 value: 8.477 - type: map_at_5 value: 9.495000000000001 - type: mrr_at_1 value: 13.094 - type: mrr_at_10 value: 21.282 - type: mrr_at_100 value: 22.556 - type: mrr_at_1000 value: 22.628999999999998 - type: mrr_at_3 value: 18.218999999999998 - type: mrr_at_5 value: 19.900000000000002 - type: ndcg_at_1 value: 13.094 - type: ndcg_at_10 value: 15.811 - type: ndcg_at_100 value: 
23.035 - type: ndcg_at_1000 value: 27.089999999999996 - type: ndcg_at_3 value: 11.905000000000001 - type: ndcg_at_5 value: 13.377 - type: precision_at_1 value: 13.094 - type: precision_at_10 value: 5.225 - type: precision_at_100 value: 1.2970000000000002 - type: precision_at_1000 value: 0.203 - type: precision_at_3 value: 8.86 - type: precision_at_5 value: 7.309 - type: recall_at_1 value: 5.9319999999999995 - type: recall_at_10 value: 20.305 - type: recall_at_100 value: 46.314 - type: recall_at_1000 value: 69.612 - type: recall_at_3 value: 11.21 - type: recall_at_5 value: 14.773 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.674 - type: map_at_10 value: 17.822 - type: map_at_100 value: 24.794 - type: map_at_1000 value: 26.214 - type: map_at_3 value: 12.690999999999999 - type: map_at_5 value: 15.033 - type: mrr_at_1 value: 61.75000000000001 - type: mrr_at_10 value: 71.58 - type: mrr_at_100 value: 71.923 - type: mrr_at_1000 value: 71.932 - type: mrr_at_3 value: 70.125 - type: mrr_at_5 value: 71.038 - type: ndcg_at_1 value: 51 - type: ndcg_at_10 value: 38.637 - type: ndcg_at_100 value: 42.398 - type: ndcg_at_1000 value: 48.962 - type: ndcg_at_3 value: 43.29 - type: ndcg_at_5 value: 40.763 - type: precision_at_1 value: 61.75000000000001 - type: precision_at_10 value: 30.125 - type: precision_at_100 value: 9.53 - type: precision_at_1000 value: 1.9619999999999997 - type: precision_at_3 value: 45.583 - type: precision_at_5 value: 38.95 - type: recall_at_1 value: 8.674 - type: recall_at_10 value: 23.122 - type: recall_at_100 value: 47.46 - type: recall_at_1000 value: 67.662 - type: recall_at_3 value: 13.946 - type: recall_at_5 value: 17.768 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.86000000000001 - type: f1 value: 41.343580452760776 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 36.609 - type: map_at_10 value: 47.552 - type: map_at_100 value: 48.283 - type: map_at_1000 value: 48.321 - type: map_at_3 value: 44.869 - type: map_at_5 value: 46.509 - type: mrr_at_1 value: 39.214 - type: mrr_at_10 value: 50.434999999999995 - type: mrr_at_100 value: 51.122 - type: mrr_at_1000 value: 51.151 - type: mrr_at_3 value: 47.735 - type: mrr_at_5 value: 49.394 - type: ndcg_at_1 value: 39.214 - type: ndcg_at_10 value: 53.52400000000001 - type: ndcg_at_100 value: 56.997 - type: ndcg_at_1000 value: 57.975 - type: ndcg_at_3 value: 48.173 - type: ndcg_at_5 value: 51.05800000000001 - type: precision_at_1 value: 39.214 - type: precision_at_10 value: 7.573 - type: precision_at_100 value: 0.9440000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 19.782 - type: precision_at_5 value: 13.453000000000001 - type: recall_at_1 value: 36.609 - type: recall_at_10 value: 69.247 - type: recall_at_100 value: 84.99600000000001 - type: recall_at_1000 value: 92.40899999999999 - type: recall_at_3 value: 54.856 - type: recall_at_5 value: 61.797000000000004 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 16.466 - type: map_at_10 value: 27.060000000000002 - type: map_at_100 value: 28.511999999999997 - type: map_at_1000 value: 28.693 - type: map_at_3 value: 22.777 - type: 
map_at_5 value: 25.086000000000002 - type: mrr_at_1 value: 32.716 - type: mrr_at_10 value: 41.593999999999994 - type: mrr_at_100 value: 42.370000000000005 - type: mrr_at_1000 value: 42.419000000000004 - type: mrr_at_3 value: 38.143 - type: mrr_at_5 value: 40.288000000000004 - type: ndcg_at_1 value: 32.716 - type: ndcg_at_10 value: 34.795 - type: ndcg_at_100 value: 40.58 - type: ndcg_at_1000 value: 43.993 - type: ndcg_at_3 value: 29.573 - type: ndcg_at_5 value: 31.583 - type: precision_at_1 value: 32.716 - type: precision_at_10 value: 9.937999999999999 - type: precision_at_100 value: 1.585 - type: precision_at_1000 value: 0.22 - type: precision_at_3 value: 19.496 - type: precision_at_5 value: 15.247 - type: recall_at_1 value: 16.466 - type: recall_at_10 value: 42.886 - type: recall_at_100 value: 64.724 - type: recall_at_1000 value: 85.347 - type: recall_at_3 value: 26.765 - type: recall_at_5 value: 33.603 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 33.025 - type: map_at_10 value: 47.343 - type: map_at_100 value: 48.207 - type: map_at_1000 value: 48.281 - type: map_at_3 value: 44.519 - type: map_at_5 value: 46.217000000000006 - type: mrr_at_1 value: 66.05 - type: mrr_at_10 value: 72.94699999999999 - type: mrr_at_100 value: 73.289 - type: mrr_at_1000 value: 73.30499999999999 - type: mrr_at_3 value: 71.686 - type: mrr_at_5 value: 72.491 - type: ndcg_at_1 value: 66.05 - type: ndcg_at_10 value: 56.338 - type: ndcg_at_100 value: 59.599999999999994 - type: ndcg_at_1000 value: 61.138000000000005 - type: ndcg_at_3 value: 52.034000000000006 - type: ndcg_at_5 value: 54.352000000000004 - type: precision_at_1 value: 66.05 - type: precision_at_10 value: 11.693000000000001 - type: precision_at_100 value: 1.425 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 32.613 - type: precision_at_5 value: 21.401999999999997 - type: recall_at_1 value: 33.025 - type: recall_at_10 value: 58.467 - type: recall_at_100 value: 71.242 - type: recall_at_1000 value: 81.452 - type: recall_at_3 value: 48.92 - type: recall_at_5 value: 53.504 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 75.5492 - type: ap value: 69.42911637216271 - type: f1 value: 75.39113704261024 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 23.173 - type: map_at_10 value: 35.453 - type: map_at_100 value: 36.573 - type: map_at_1000 value: 36.620999999999995 - type: map_at_3 value: 31.655 - type: map_at_5 value: 33.823 - type: mrr_at_1 value: 23.868000000000002 - type: mrr_at_10 value: 36.085 - type: mrr_at_100 value: 37.15 - type: mrr_at_1000 value: 37.193 - type: mrr_at_3 value: 32.376 - type: mrr_at_5 value: 34.501 - type: ndcg_at_1 value: 23.854 - type: ndcg_at_10 value: 42.33 - type: ndcg_at_100 value: 47.705999999999996 - type: ndcg_at_1000 value: 48.91 - type: ndcg_at_3 value: 34.604 - type: ndcg_at_5 value: 38.473 - type: precision_at_1 value: 23.854 - type: precision_at_10 value: 6.639 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.685 - type: precision_at_5 value: 10.782 - type: recall_at_1 value: 23.173 - type: recall_at_10 value: 63.441 - type: recall_at_100 value: 88.25 - type: recall_at_1000 value: 97.438 - type: recall_at_3 
value: 42.434 - type: recall_at_5 value: 51.745 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.05426356589147 - type: f1 value: 91.88068588063942 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.23985408116735 - type: f1 value: 55.858906745287506 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.21923335574984 - type: f1 value: 70.0174116204253 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.77673167451245 - type: f1 value: 75.44811354778666 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.340414710728737 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.196676760061578 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 29.564149683482206 - type: mrr value: 30.28995474250486 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.93 - type: map_at_10 value: 12.828000000000001 - type: map_at_100 value: 15.501000000000001 - type: map_at_1000 value: 16.791 - type: map_at_3 value: 9.727 - type: map_at_5 value: 11.318999999999999 - type: mrr_at_1 value: 47.678 - type: mrr_at_10 value: 55.893 - type: mrr_at_100 value: 56.491 - type: mrr_at_1000 value: 56.53 - type: mrr_at_3 value: 54.386 - type: mrr_at_5 value: 55.516 - type: ndcg_at_1 value: 45.975 - type: ndcg_at_10 value: 33.928999999999995 - type: ndcg_at_100 value: 30.164 - type: ndcg_at_1000 value: 38.756 - type: ndcg_at_3 value: 41.077000000000005 - type: ndcg_at_5 value: 38.415 - type: precision_at_1 value: 47.678 - type: precision_at_10 value: 24.365000000000002 - type: precision_at_100 value: 7.344 - type: precision_at_1000 value: 1.994 - type: precision_at_3 value: 38.184000000000005 - type: precision_at_5 value: 33.003 - type: recall_at_1 value: 5.93 - type: recall_at_10 value: 16.239 - type: recall_at_100 value: 28.782999999999998 - type: recall_at_1000 value: 60.11 - type: recall_at_3 value: 10.700999999999999 - type: recall_at_5 value: 13.584 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 36.163000000000004 - type: map_at_10 value: 51.520999999999994 - type: map_at_100 value: 52.449 - type: map_at_1000 value: 52.473000000000006 - type: map_at_3 value: 47.666 - type: map_at_5 value: 50.043000000000006 - type: mrr_at_1 value: 40.266999999999996 - type: mrr_at_10 value: 54.074 - type: mrr_at_100 value: 54.722 - type: 
mrr_at_1000 value: 54.739000000000004 - type: mrr_at_3 value: 51.043000000000006 - type: mrr_at_5 value: 52.956 - type: ndcg_at_1 value: 40.238 - type: ndcg_at_10 value: 58.73199999999999 - type: ndcg_at_100 value: 62.470000000000006 - type: ndcg_at_1000 value: 63.083999999999996 - type: ndcg_at_3 value: 51.672 - type: ndcg_at_5 value: 55.564 - type: precision_at_1 value: 40.238 - type: precision_at_10 value: 9.279 - type: precision_at_100 value: 1.139 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.078000000000003 - type: precision_at_5 value: 16.176 - type: recall_at_1 value: 36.163000000000004 - type: recall_at_10 value: 77.88199999999999 - type: recall_at_100 value: 93.83399999999999 - type: recall_at_1000 value: 98.465 - type: recall_at_3 value: 59.857000000000006 - type: recall_at_5 value: 68.73599999999999 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.344 - type: map_at_10 value: 83.907 - type: map_at_100 value: 84.536 - type: map_at_1000 value: 84.557 - type: map_at_3 value: 80.984 - type: map_at_5 value: 82.844 - type: mrr_at_1 value: 81.02000000000001 - type: mrr_at_10 value: 87.158 - type: mrr_at_100 value: 87.268 - type: mrr_at_1000 value: 87.26899999999999 - type: mrr_at_3 value: 86.17 - type: mrr_at_5 value: 86.87 - type: ndcg_at_1 value: 81.02000000000001 - type: ndcg_at_10 value: 87.70700000000001 - type: ndcg_at_100 value: 89.004 - type: ndcg_at_1000 value: 89.139 - type: ndcg_at_3 value: 84.841 - type: ndcg_at_5 value: 86.455 - type: precision_at_1 value: 81.02000000000001 - type: precision_at_10 value: 13.248999999999999 - type: precision_at_100 value: 1.516 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.963 - type: precision_at_5 value: 24.33 - type: recall_at_1 value: 70.344 - type: recall_at_10 value: 94.75099999999999 - type: recall_at_100 value: 99.30499999999999 - type: recall_at_1000 value: 99.928 - type: recall_at_3 value: 86.506 - type: recall_at_5 value: 91.083 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 42.873718018378305 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.39477366450528 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 3.868 - type: map_at_10 value: 9.611 - type: map_at_100 value: 11.087 - type: map_at_1000 value: 11.332 - type: map_at_3 value: 6.813 - type: map_at_5 value: 8.233 - type: mrr_at_1 value: 19 - type: mrr_at_10 value: 28.457 - type: mrr_at_100 value: 29.613 - type: mrr_at_1000 value: 29.695 - type: mrr_at_3 value: 25.55 - type: mrr_at_5 value: 27.29 - type: ndcg_at_1 value: 19 - type: ndcg_at_10 value: 16.419 - type: ndcg_at_100 value: 22.817999999999998 - type: ndcg_at_1000 value: 27.72 - type: ndcg_at_3 value: 15.379000000000001 - type: ndcg_at_5 value: 13.645 - type: precision_at_1 value: 19 - type: precision_at_10 value: 8.540000000000001 - type: precision_at_100 value: 1.7819999999999998 - type: precision_at_1000 value: 0.297 - type: precision_at_3 value: 14.267 - type: precision_at_5 value: 12.04 - type: recall_at_1 value: 3.868 - type: recall_at_10 value: 17.288 - 
type: recall_at_100 value: 36.144999999999996 - type: recall_at_1000 value: 60.199999999999996 - type: recall_at_3 value: 8.688 - type: recall_at_5 value: 12.198 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.96614722598582 - type: cos_sim_spearman value: 78.9003023008781 - type: euclidean_pearson value: 81.01829384436505 - type: euclidean_spearman value: 78.93248416788914 - type: manhattan_pearson value: 81.1665428926402 - type: manhattan_spearman value: 78.93264116287453 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.54613363895993 - type: cos_sim_spearman value: 75.1883451602451 - type: euclidean_pearson value: 79.70320886899894 - type: euclidean_spearman value: 74.5917140136796 - type: manhattan_pearson value: 79.82157067185999 - type: manhattan_spearman value: 74.74185720594735 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.30430156721782 - type: cos_sim_spearman value: 81.79962989974364 - type: euclidean_pearson value: 80.89058823224924 - type: euclidean_spearman value: 81.35929372984597 - type: manhattan_pearson value: 81.12204370487478 - type: manhattan_spearman value: 81.6248963282232 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.13064504403134 - type: cos_sim_spearman value: 78.48371403924872 - type: euclidean_pearson value: 80.16794919665591 - type: euclidean_spearman value: 78.29216082221699 - type: manhattan_pearson value: 80.22308565207301 - type: manhattan_spearman value: 78.37829229948022 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.52918899541099 - type: cos_sim_spearman value: 87.49276894673142 - type: euclidean_pearson value: 86.77440570164254 - type: euclidean_spearman value: 87.5753295736756 - type: manhattan_pearson value: 86.86098573892133 - type: manhattan_spearman value: 87.65848591821947 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.86805307244882 - type: cos_sim_spearman value: 84.58066253757511 - type: euclidean_pearson value: 84.38377000876991 - type: euclidean_spearman value: 85.1837278784528 - type: manhattan_pearson value: 84.41903291363842 - type: manhattan_spearman value: 85.19023736251052 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.77218560282436 - type: cos_sim_spearman value: 87.94243515296604 - type: euclidean_pearson value: 88.22800939214864 - type: euclidean_spearman value: 87.91106839439841 - type: manhattan_pearson value: 88.17063269848741 - type: manhattan_spearman value: 87.72751904126062 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: 
cos_sim_pearson value: 60.40731554300387 - type: cos_sim_spearman value: 63.76300532966479 - type: euclidean_pearson value: 62.94727878229085 - type: euclidean_spearman value: 63.678039531461216 - type: manhattan_pearson value: 63.00661039863549 - type: manhattan_spearman value: 63.6282591984376 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.92731569745344 - type: cos_sim_spearman value: 86.36336704300167 - type: euclidean_pearson value: 86.09122224841195 - type: euclidean_spearman value: 86.2116149319238 - type: manhattan_pearson value: 86.07879456717032 - type: manhattan_spearman value: 86.2022069635119 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.75976311752326 - type: mrr value: 94.15782837351466 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 51.193999999999996 - type: map_at_10 value: 61.224999999999994 - type: map_at_100 value: 62.031000000000006 - type: map_at_1000 value: 62.066 - type: map_at_3 value: 59.269000000000005 - type: map_at_5 value: 60.159 - type: mrr_at_1 value: 53.667 - type: mrr_at_10 value: 62.74999999999999 - type: mrr_at_100 value: 63.39399999999999 - type: mrr_at_1000 value: 63.425 - type: mrr_at_3 value: 61.389 - type: mrr_at_5 value: 61.989000000000004 - type: ndcg_at_1 value: 53.667 - type: ndcg_at_10 value: 65.596 - type: ndcg_at_100 value: 68.906 - type: ndcg_at_1000 value: 69.78999999999999 - type: ndcg_at_3 value: 62.261 - type: ndcg_at_5 value: 63.453 - type: precision_at_1 value: 53.667 - type: precision_at_10 value: 8.667 - type: precision_at_100 value: 1.04 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 24.556 - type: precision_at_5 value: 15.6 - type: recall_at_1 value: 51.193999999999996 - type: recall_at_10 value: 77.156 - type: recall_at_100 value: 91.43299999999999 - type: recall_at_1000 value: 98.333 - type: recall_at_3 value: 67.994 - type: recall_at_5 value: 71.14399999999999 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81485148514851 - type: cos_sim_ap value: 95.28896513388551 - type: cos_sim_f1 value: 90.43478260869566 - type: cos_sim_precision value: 92.56544502617801 - type: cos_sim_recall value: 88.4 - type: dot_accuracy value: 99.30594059405941 - type: dot_ap value: 61.6432597455472 - type: dot_f1 value: 59.46481665014866 - type: dot_precision value: 58.93909626719057 - type: dot_recall value: 60 - type: euclidean_accuracy value: 99.81980198019802 - type: euclidean_ap value: 95.21411049527 - type: euclidean_f1 value: 91.06090373280944 - type: euclidean_precision value: 89.47876447876449 - type: euclidean_recall value: 92.7 - type: manhattan_accuracy value: 99.81782178217821 - type: manhattan_ap value: 95.32449994414968 - type: manhattan_f1 value: 90.86395233366436 - type: manhattan_precision value: 90.23668639053254 - type: manhattan_recall value: 91.5 - type: max_accuracy value: 99.81980198019802 - type: max_ap value: 95.32449994414968 - type: max_f1 value: 91.06090373280944 - task: type: Clustering 
dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 59.08045614613064 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 30.297802606804748 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.12801740706292 - type: mrr value: 50.05592956879722 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.523347880124497 - type: cos_sim_spearman value: 31.388214436391014 - type: dot_pearson value: 24.55403435439901 - type: dot_spearman value: 23.50153210841191 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.243 - type: map_at_10 value: 1.886 - type: map_at_100 value: 10.040000000000001 - type: map_at_1000 value: 23.768 - type: map_at_3 value: 0.674 - type: map_at_5 value: 1.079 - type: mrr_at_1 value: 88 - type: mrr_at_10 value: 93.667 - type: mrr_at_100 value: 93.667 - type: mrr_at_1000 value: 93.667 - type: mrr_at_3 value: 93.667 - type: mrr_at_5 value: 93.667 - type: ndcg_at_1 value: 83 - type: ndcg_at_10 value: 76.777 - type: ndcg_at_100 value: 55.153 - type: ndcg_at_1000 value: 47.912 - type: ndcg_at_3 value: 81.358 - type: ndcg_at_5 value: 80.74799999999999 - type: precision_at_1 value: 88 - type: precision_at_10 value: 80.80000000000001 - type: precision_at_100 value: 56.02 - type: precision_at_1000 value: 21.51 - type: precision_at_3 value: 86 - type: precision_at_5 value: 86 - type: recall_at_1 value: 0.243 - type: recall_at_10 value: 2.0869999999999997 - type: recall_at_100 value: 13.014000000000001 - type: recall_at_1000 value: 44.433 - type: recall_at_3 value: 0.6910000000000001 - type: recall_at_5 value: 1.1440000000000001 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.066 - type: map_at_10 value: 10.615 - type: map_at_100 value: 16.463 - type: map_at_1000 value: 17.815 - type: map_at_3 value: 5.7860000000000005 - type: map_at_5 value: 7.353999999999999 - type: mrr_at_1 value: 38.775999999999996 - type: mrr_at_10 value: 53.846000000000004 - type: mrr_at_100 value: 54.37 - type: mrr_at_1000 value: 54.37 - type: mrr_at_3 value: 48.980000000000004 - type: mrr_at_5 value: 51.735 - type: ndcg_at_1 value: 34.694 - type: ndcg_at_10 value: 26.811 - type: ndcg_at_100 value: 37.342999999999996 - type: ndcg_at_1000 value: 47.964 - type: ndcg_at_3 value: 30.906 - type: ndcg_at_5 value: 27.77 - type: precision_at_1 value: 38.775999999999996 - type: precision_at_10 value: 23.878 - type: precision_at_100 value: 7.632999999999999 - type: precision_at_1000 value: 1.469 - type: precision_at_3 value: 31.973000000000003 - type: precision_at_5 value: 26.939 - type: recall_at_1 value: 3.066 - type: recall_at_10 value: 17.112 - type: recall_at_100 value: 47.723 - type: recall_at_1000 value: 79.50500000000001 - type: recall_at_3 value: 6.825 - type: recall_at_5 value: 9.584 - 
task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 72.76460000000002 - type: ap value: 14.944240012137053 - type: f1 value: 55.89805777266571 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 63.30503678551217 - type: f1 value: 63.57492701921179 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 37.51066495006874 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.07021517553794 - type: cos_sim_ap value: 74.15520712370555 - type: cos_sim_f1 value: 68.64321608040201 - type: cos_sim_precision value: 65.51558752997602 - type: cos_sim_recall value: 72.0844327176781 - type: dot_accuracy value: 80.23484532395541 - type: dot_ap value: 54.298763810214176 - type: dot_f1 value: 53.22254659779924 - type: dot_precision value: 46.32525410476936 - type: dot_recall value: 62.532981530343015 - type: euclidean_accuracy value: 86.04637301066937 - type: euclidean_ap value: 73.85333854233123 - type: euclidean_f1 value: 68.77723660599845 - type: euclidean_precision value: 66.87437686939182 - type: euclidean_recall value: 70.79155672823218 - type: manhattan_accuracy value: 85.98676759849795 - type: manhattan_ap value: 73.56016090035973 - type: manhattan_f1 value: 68.48878539036647 - type: manhattan_precision value: 63.9505607690547 - type: manhattan_recall value: 73.7203166226913 - type: max_accuracy value: 86.07021517553794 - type: max_ap value: 74.15520712370555 - type: max_f1 value: 68.77723660599845 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.92769821865176 - type: cos_sim_ap value: 85.78879502899773 - type: cos_sim_f1 value: 78.14414083990464 - type: cos_sim_precision value: 74.61651607480563 - type: cos_sim_recall value: 82.0218663381583 - type: dot_accuracy value: 84.95750378390964 - type: dot_ap value: 75.80219641857563 - type: dot_f1 value: 70.13966179585681 - type: dot_precision value: 65.71140262361251 - type: dot_recall value: 75.20788420080073 - type: euclidean_accuracy value: 88.93546008460433 - type: euclidean_ap value: 85.72056428301667 - type: euclidean_f1 value: 78.14387902598124 - type: euclidean_precision value: 75.3376688344172 - type: euclidean_recall value: 81.16723129042192 - type: manhattan_accuracy value: 88.96262661543835 - type: manhattan_ap value: 85.76605136314335 - type: manhattan_f1 value: 78.26696165191743 - type: manhattan_precision value: 75.0990659496179 - type: manhattan_recall value: 81.71388974437943 - type: max_accuracy value: 88.96262661543835 - type: max_ap value: 85.78879502899773 - type: max_f1 value: 78.26696165191743 language: - en license: mit --- # E5-small [Text Embeddings by Weakly-Supervised Contrastive 
Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 384. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small') model = AutoModel.from_pretrained('intfloat/e5-small') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Citation If you find our paper or models helpful, please consider citing as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
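To turn the pairwise scores above into a retrieval-style ranking, the normalized embeddings can be compared with cosine similarity directly. The sketch below continues from the variables in the snippet above (two query rows followed by two passage rows); the top-k split is only an illustration.

```python
import torch

# Continues from the example above: rows 0-1 are queries, rows 2-3 are passages.
query_emb, passage_emb = embeddings[:2], embeddings[2:]

# After L2 normalization, the matrix product is exactly cosine similarity.
similarity = query_emb @ passage_emb.T

# Rank candidate passages for each query (k=1 here, since only two candidates exist).
best_scores, best_idx = torch.topk(similarity, k=1, dim=1)
for q, (score, idx) in enumerate(zip(best_scores, best_idx)):
    print(f"query {q}: best passage index {idx.item()}, cosine similarity {score.item():.3f}")
```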
Akashpb13/Swahili_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sw", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.2
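The hyperparameters listed above map directly onto the 🤗 `Trainer` API. The sketch below only illustrates that mapping: the IMDB stand-in dataset, the binary text-classification objective, and `num_labels=2` are assumptions (the card does not say what data or task was used); only the hyperparameter values are taken from the card.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, set_seed)

set_seed(42)  # seed listed in the card

# Stand-in data: the actual training set is not specified in the card.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

# num_labels=2 is an assumption; adjust for the real task.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="my_awesome_model",
    learning_rate=2e-5,               # values below mirror the card
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    lr_scheduler_type="linear",       # Adam(betas=(0.9, 0.999), eps=1e-8) is the Trainer default
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,              # enables dynamic padding via the default collator
)
trainer.train()
```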
Akashpb13/xlsr_hungarian_new
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hu", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: creativeml-openrail-m --- ### June from [Obituary - A Grave Beginning](https://invidious.weblibre.org/watch?v=0l940bPkV1o) on [WD](https://huggingface.co/hakurei/waifu-diffusion) via Dreambooth #### model by no3 This is a waifu-diffusion v1.3 model fine-tuned on the June concept, taught to waifu-diffusion v1.3 with Dreambooth. It can be used by modifying the `instance_prompt`: **sks_june** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts). ### note If you want to use it in a UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any UI that uses .ckpt files, just download the ckpt file here for your convenience. **just click on "june-wd-1.3-beta2.ckpt"** [june-wd-1.3-beta2.ckpt](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/june-wd-1.3-beta2.ckpt) If you have issues or questions, feel free to visit the Community Tab and start a discussion about it. Here are the images used for training this concept: ![image 1](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/1.png) ![image 2](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/2.png) ![image 3](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/3.png) ![image 4](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/4.png) ![image 5](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/5.png) ![image 6](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/6.png) ![image 7](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/7.png) ![image 8](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/8.png) ![image 9](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/9.png) ![image 10](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/10.png) ![image 11](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/11.png) ![image 12](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-021.png) ![image 13](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-023.png) ![image 14](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-026.png) ![image 15](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-029.png) ![image 16](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-030.png) ![image 17](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-031.png) ![image 18](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-032.png) ![image 19](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-033.png) ![image 20](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-034.png) ![image 21](https://huggingface.co/no3/june-wd-1.3-beta2/resolve/main/concept_images/june-035.png)
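For reference, a minimal `diffusers` inference sketch for this concept; the scheduler and image size are left at the pipeline defaults, and the prompt wording beyond the `sks_june` token is only an illustration.

```python
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-tuned weights from this repo (diffusers format, per the card).
pipe = StableDiffusionPipeline.from_pretrained("no3/june-wd-1.3-beta2")
pipe = pipe.to("cuda")  # optional; remove if running on CPU

# "sks_june" is the instance token from the card; the rest of the prompt is illustrative.
image = pipe("portrait of sks_june, highly detailed").images[0]
image.save("june.png")
```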
AkshatSurolia/ConvNeXt-FaceMask-Finetuned
[ "pytorch", "safetensors", "convnext", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
image-classification
{ "architectures": [ "ConvNextForImageClassification" ], "model_type": "convnext", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
56
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 226.55 +/- 49.07 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
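As a stopgap for the TODO above, here is a minimal loading-and-evaluation sketch; the repo id and checkpoint filename are placeholders, since the card does not state them.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders: substitute the actual Hub repo id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded agent on the environment it was trained for.
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```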
AkshatSurolia/DeiT-FaceMask-Finetuned
[ "pytorch", "deit", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible" ]
image-classification
{ "architectures": [ "DeiTForImageClassification" ], "model_type": "deit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_wnut_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.2
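Since usage is not documented above, here is a hypothetical inference sketch; the token-classification objective is inferred only from the model name, and the checkpoint path is a placeholder.

```python
from transformers import pipeline

# "my_awesome_wnut_model" is assumed to be a token-classification (NER) checkpoint;
# replace the path with wherever the fine-tuned weights were saved or pushed.
ner = pipeline("token-classification",
               model="path/to/my_awesome_wnut_model",
               aggregation_strategy="simple")

print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```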
AkshayDev/BERT_Fine_Tuning
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: EmileEsmaili/sheet_music_ede2110 metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-sheetmusic-colabVM ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `EmileEsmaili/sheet_music_ede2110` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 50 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: no ### Training results 📈 [TensorBoard logs](https://huggingface.co/EmileEsmaili/ddpm-sheetmusic-colabVM/tensorboard?#scalars)
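One way the usage snippet left as a TODO above could be filled in, assuming the checkpoint is a standard unconditional DDPM saved in `diffusers` format (the pipeline class is inferred from the model name and training script, not stated in the card):

```python
from diffusers import DDPMPipeline

# Repo id comes from the TensorBoard link in the card; DDPMPipeline is an assumption.
pipeline = DDPMPipeline.from_pretrained("EmileEsmaili/ddpm-sheetmusic-colabVM")

# Sample one unconditional sheet-music image with the default number of denoising steps.
image = pipeline().images[0]
image.save("generated_sheet_music.png")
```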
AkshaySg/GrammarCorrection
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - it license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Italian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 it type: mozilla-foundation/common_voice_11_0 config: it split: test args: it metrics: - name: Wer type: wer value: 9.26934935147778 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Italian This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 it dataset. It achieves the following results on the evaluation set: - Loss: 0.2013 - Wer: 9.2693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2851 | 0.25 | 1000 | 0.2604 | 11.9744 | | 0.1885 | 0.5 | 2000 | 0.2176 | 10.1358 | | 0.1176 | 1.15 | 3000 | 0.2111 | 9.5664 | | 0.1256 | 1.4 | 4000 | 0.2013 | 9.2693 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
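A minimal transcription sketch for a checkpoint like this one; the repo id is a placeholder (the card does not give one) and the audio file name is illustrative.

```python
from transformers import pipeline

# Placeholder repo id: point this at the actual fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-small-it",
    chunk_length_s=30,  # chunk long audio so inputs fit Whisper's 30 s window
)

# Transcribe an Italian recording (any format ffmpeg can decode).
print(asr("sample_italian.mp3")["text"])
```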
Ale/Alen
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikisql model-index: - name: t5-small-finetuned-wikisql-with-cols results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql-with-cols This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset using a (slightly modified) training script by [Manuel Romero](https://huggingface.co/mrm8488). It achieves the following results on the evaluation set: - Loss: 0.0282 - Rouge2 Precision: 0.9172 - Rouge2 Recall: 0.819 - Rouge2 Fmeasure: 0.8578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.0557 | 1.0 | 4049 | 0.0384 | 0.9004 | 0.8038 | 0.8417 | | 0.0438 | 2.0 | 8098 | 0.0323 | 0.9101 | 0.8121 | 0.8507 | | 0.0374 | 3.0 | 12147 | 0.0298 | 0.914 | 0.8162 | 0.8548 | | 0.0353 | 4.0 | 16196 | 0.0286 | 0.9169 | 0.8189 | 0.8576 | | 0.0343 | 5.0 | 20245 | 0.0282 | 0.9172 | 0.819 | 0.8578 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
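An illustrative inference sketch; the checkpoint id is a placeholder, and the prompt layout (question plus a column list) is an assumption based on the model name rather than anything documented in the card.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder checkpoint id: point this at the fine-tuned model.
checkpoint = "<user>/t5-small-finetuned-wikisql-with-cols"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The "question + columns" prompt format below is assumed, not documented.
prompt = ("translate English to SQL: How many heads of the departments are older than 56? "
          "columns: head_id, name, born_state, age")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```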
Aleenbo/Arcane
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-07T08:37:14Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 190.04 +/- 65.99 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
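A minimal sketch of the loading code left as a TODO above; the repo id and checkpoint filename are placeholders, since the card does not state them.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Placeholders: substitute the actual Hub repo id and checkpoint filename.
model = PPO.load(load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip"))

# Roll the agent out for a few steps in a vectorized LunarLander environment.
env = make_vec_env("LunarLander-v2", n_envs=1)
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```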
Aleksandar/bert-srb-ner-setimes
[ "pytorch", "bert", "token-classification", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -238.48 +/- 82.63 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
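A minimal sketch of the loading code left as a TODO above; the repo id and checkpoint filename are placeholders, since the card does not state them.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: substitute the actual Hub repo id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
print(model.policy)  # inspect the loaded policy network
```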