Dataset schema:
- modelId: string (lengths 4–81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0–59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (lengths 51–438k)
Brendan/cse244b-hw2-roberta
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/bert-base-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 64.48 name: Eval EM - type: f1 value: 76.14 name: Eval F1 - type: exact_match value: 48.89 name: Test EM - type: f1 value: 59.89 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** BERT-Base-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [BERT-Base-uncased](https://huggingface.co/bert-base-uncased) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 8:39:10 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/bert-base-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.9254004955291748, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "bert-base-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
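For readers who want to reproduce the scoring, here is a minimal sketch of SQuAD-style EM and token-overlap F1 as described above; the official MRQA evaluation script additionally strips articles and punctuation during normalization, which this sketch omits.

```python
# Minimal sketch of SQuAD-style exact match and token-overlap F1.
from collections import Counter

def exact_match(prediction: str, truth: str) -> bool:
    # Simplified normalization: lowercase and strip surrounding whitespace.
    return prediction.strip().lower() == truth.strip().lower()

def f1_score(prediction: str, truth: str) -> float:
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    # Token overlap, counting duplicates at most as often as they appear in each side.
    common = Counter(pred_tokens) & Counter(truth_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Machine Reading for Question Answering",
                  "machine reading for question answering"))  # True
print(f1_score("Machine Reading for QA",
               "Machine Reading for Question Answering"))
```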
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
Broadus20/DialoGPT-small-joshua
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2023-02-17T20:46:18Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/bert-large-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 69.52 name: Eval EM - type: f1 value: 80.50 name: Eval F1 - type: exact_match value: 55.00 name: Test EM - type: f1 value: 65.78 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** BERT-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [BERT-Large-uncased](https://huggingface.co/bert-large-uncased) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 28:35:38 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/bert-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.864973783493042, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "bert-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=8, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
Brona/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-17T20:52:30Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/bert-tiny-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 22.78 name: Eval EM - type: f1 value: 32.42 name: Eval F1 - type: exact_match value: 10.18 name: Test EM - type: f1 value: 18.72 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** BERT-Tiny-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [BERT-Tiny-uncased](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 26:11 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/bert-tiny-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.134057879447937, # 'start': 76, # 'end': 80, # 'answer': '2019' # } ``` Yes, you read that correctly ... this model thinks MRQA is "2019". Look at its eval and test scores. A coin toss is more likely to get you a decent answer, haha. # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "bert-tiny-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer.
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
BrunoNogueira/DialoGPT-kungfupanda
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-02-17T20:56:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: Wav2vec2-large-arabic-no-diacs_V1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wav2vec2-large-arabic-no-diacs_V1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2894 - Wer: 0.2470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6563 | 0.83 | 1000 | 0.5231 | 0.5077 | | 0.4532 | 1.66 | 2000 | 0.3642 | 0.3588 | | 0.2687 | 2.5 | 3000 | 0.3245 | 0.2920 | | 0.1846 | 3.33 | 4000 | 0.3050 | 0.2640 | | 0.1587 | 4.16 | 5000 | 0.2894 | 0.2470 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
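The card above omits a usage snippet; the following is a minimal inference sketch. The repo id `your-username/Wav2vec2-large-arabic-no-diacs_V1` is hypothetical (the card does not state the full model path), and a local Arabic audio file is assumed.

```python
from transformers import pipeline

# Hypothetical repo id -- the card does not give the full model path.
asr = pipeline("automatic-speech-recognition",
               model="your-username/Wav2vec2-large-arabic-no-diacs_V1")

# Transcribe a local Arabic audio file (16 kHz mono works best for wav2vec2).
print(asr("arabic_sample.wav")["text"])
```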
Bryan190/Aguy190
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-17T20:58:10Z
--- license: creativeml-openrail-m language: - en tags: - embedding - textual inversion - isometric - isometric dreams --- # Isometric Dreams TEXTUAL INVERSION HUB It won't let me upload a zip file or the text files, so this is the WHOLE LOT. # coffee is nice: https://ko-fi.com/DUSKFALLcrew Yea we're low on funds and need to keep up with the insanity of making things :3 # Individual files are on Civitai https://civitai.com/models/9064/isometric-ti-sets https://civitai.com/models/9265/isometric-ti-sets-pt-2 https://civitai.com/models/9666/isometric-dreams-15-ti-set-3 https://civitai.com/models/9062/isometric-dreams-15-2000-ti https://civitai.com/models/9060/isometric-ti-4000 https://civitai.com/models/9059/isometric-dreams-ti-4400 https://civitai.com/models/9058/isometricdreams-3900 https://civitai.com/models/8132/isometricdreams-300
Brykee/BrykeeBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-17T21:03:42Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/deberta-v3-xsmall-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 65.58 name: Eval EM - type: f1 value: 77.17 name: Eval F1 - type: exact_match value: 50.92 name: Test EM - type: f1 value: 62.58 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** DeBERTa-v3-xsmall-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [DeBERTa-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 5:19:05 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/deberta-v3-xsmall-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.8907278776168823, # 'start': 29, # 'end': 68, # 'answer': ' Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "deberta-v3-xsmall-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
Bryson575x/riceboi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-17T21:04:04Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 223.70 +/- 73.03 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
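Since the card's usage section is still a TODO, here is a minimal sketch of the usual `huggingface_sb3` loading pattern; the repo id and checkpoint filename are hypothetical, as the card does not state them, and the rollout uses the classic `gym` API that matched stable-baselines3 at the time.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename -- substitute the actual ones for this model.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out the trained agent for one episode.
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```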
BumBelDumBel/TRUMP
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
Access to model s-1-n-t-h/bart-summarizer is restricted and you are not in the authorized list. Visit https://huggingface.co/s-1-n-t-h/bart-summarizer to ask for access.
BumBelDumBel/ZORK-AI-TEST
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/deberta-v3-large-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 76.08 name: Eval EM - type: f1 value: 86.23 name: Eval F1 - type: exact_match value: 64.27 name: Test EM - type: f1 value: 75.22 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** DeBERTa-v3-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [DeBERTa-v3-Large](https://huggingface.co/microsoft/deberta-v3-large) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 38:36:13 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/deberta-v3-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.9785431623458862, # 'start': 29, # 'end': 68, # 'answer': ' Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "deberta-v3-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=8, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
BumBelDumBel/ZORK_AI_FANTASY
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-17T21:11:00Z
--- license: apache-2.0 language: - bn tags: - bangla - semantics - paraphrase ---
CALM/backup
[ "lean_albert", "transformers" ]
null
{ "architectures": [ "LeanAlbertForPretraining", "LeanAlbertForTokenClassification", "LeanAlbertForSequenceClassification" ], "model_type": "lean_albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-02-17T21:20:52Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Yelinz/FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
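The snippet above assumes a `load_from_hub` helper that is not importable from a library; in the Hugging Face Deep RL course it is defined along these lines (a sketch — the exact notebook code may differ):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download and unpickle the saved model dict (q_table, env_id, ...)."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```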
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
null
--- license: mit tags: - generated_from_trainer datasets: - imdb model-index: - name: gpt-neo-2.7B-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-2.7B-imdb This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
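Since the auto-generated card leaves usage blank, a minimal generation sketch might look like the following; the repo id is a placeholder, as the card only gives the local run name `gpt-neo-2.7B-imdb`.

```python
from transformers import pipeline

# Placeholder repo id -- the card does not state where the checkpoint is hosted.
generator = pipeline("text-generation", model="your-username/gpt-neo-2.7B-imdb")

# Sample a short continuation in the style of the IMDB fine-tuning data.
print(generator("This movie was", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```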
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16,451
null
--- license: mit --- ### tim-sale1 on Stable Diffusion This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<cat-toy> 0](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/23.jpeg) ![<cat-toy> 1](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/10.jpeg) ![<cat-toy> 2](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/19.jpeg) ![<cat-toy> 3](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/0.jpeg) ![<cat-toy> 4](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/26.jpeg) ![<cat-toy> 5](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/3.jpeg) ![<cat-toy> 6](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/24.jpeg) ![<cat-toy> 7](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/6.jpeg) ![<cat-toy> 8](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/1.jpeg) ![<cat-toy> 9](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/5.jpeg) ![<cat-toy> 10](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/13.jpeg) ![<cat-toy> 11](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/12.jpeg) ![<cat-toy> 12](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/9.jpeg) ![<cat-toy> 13](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/2.jpeg) ![<cat-toy> 14](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/4.jpeg) ![<cat-toy> 15](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/18.jpeg) ![<cat-toy> 16](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/16.jpeg) ![<cat-toy> 17](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/25.jpeg) ![<cat-toy> 18](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/11.jpeg) ![<cat-toy> 19](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/22.jpeg) ![<cat-toy> 20](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/17.jpeg) ![<cat-toy> 21](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/14.jpeg) ![<cat-toy> 22](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/20.jpeg) ![<cat-toy> 23](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/15.jpeg) ![<cat-toy> 24](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/21.jpeg) ![<cat-toy> 25](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/7.jpeg) ![<cat-toy> 26](https://huggingface.co/sd-concepts-library/tim-sale1/resolve/main/concept_images/8.jpeg)
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 238.75 +/- 22.75 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
CAMeL-Lab/bert-base-arabic-camelbert-ca
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
580
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.37 +/- 0.34 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
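This card's usage section is likewise a TODO; a minimal sketch (hypothetical repo id and filename) follows the standard `huggingface_sb3` pattern, with the extra step that `PandaReachDense-v2` is registered by the `panda-gym` package.

```python
import gym
import panda_gym  # noqa: F401 -- registers PandaReachDense-v2 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename -- substitute the actual ones for this model.
checkpoint = load_from_hub(repo_id="user/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```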
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
2023-02-17T21:38:08Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: menoua/ML-Agents-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2023-02-17T21:40:58Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/minilmv2-l6-h384-from-bert-large-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 59.31 name: Eval EM - type: f1 value: 71.09 name: Eval F1 - type: exact_match value: 41.78 name: Test EM - type: f1 value: 53.30 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** MiniLMv2-L6-H384-from-BERT-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [MiniLMv2-L6-H384-distilled-from-BERT-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Large) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 2:12:48 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/minilmv2-l6-h384-from-bert-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.30876627564430237, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "minilmv2-l6-h384-from-bert-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
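The cards in this family show the `TrainingArguments` but not the surrounding training loop; below is a minimal sketch of how they might be wired into a `Trainer`, assuming `tokenized_mrqa` is an already-preprocessed MRQA split with start/end position labels (the preprocessing is not shown in the card).

```python
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments, default_data_collator)

# Parent checkpoint named in the card.
model_ckpt = "nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Large"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(model_ckpt)

args = TrainingArguments(
    "minilmv2-l6-h384-from-bert-large-mrqa",
    save_strategy="epoch",
    learning_rate=1e-5,
    num_train_epochs=1,
    weight_decay=0.01,
    per_device_train_batch_size=16,
)

# `tokenized_mrqa` is assumed: an MRQA split already tokenized with
# input_ids/attention_mask plus start_positions/end_positions labels
# (standard extractive-QA preprocessing, not shown here).
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_mrqa,
    data_collator=default_data_collator,
    tokenizer=tokenizer,
)
trainer.train()
```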
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models, since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa), which had already been trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs. test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
CAMeL-Lab/bert-base-arabic-camelbert-da
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
449
2023-02-17T21:42:19Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/minilmv2-l6-h384-from-roberta-large-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 59.27 name: Eval EM - type: f1 value: 70.64 name: Eval F1 - type: exact_match value: 42.95 name: Test EM - type: f1 value: 54.03 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** MiniLMv2-L6-H384-from-RoBERTa-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 2:15:10 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/minilmv2-l6-h384-from-roberta-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.5269668698310852, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "minilmv2-l6-h384-from-roberta-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models, since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa), which had already been trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs. test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
2023-02-17T21:43:36Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/minilmv2-l6-h768-from-bert-large-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 64.27 name: Eval EM - type: f1 value: 75.84 name: Eval F1 - type: exact_match value: 49.05 name: Test EM - type: f1 value: 59.82 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** MiniLMv2-L6-H768-from-BERT-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [MiniLMv2-L6-H768-distilled-from-BERT-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H768-distilled-from-BERT-Large) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 4:42:59 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/minilmv2-l6-h768-from-bert-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.8620206117630005, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "minilmv2-l6-h768-from-bert-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models, since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa), which had already been trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs. test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
CAMeL-Lab/bert-base-arabic-camelbert-mix-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,860
2023-02-17T21:46:51Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.58 +/- 2.69 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="shaaaanya/taxi-v3-v2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc.) env = gym.make(model["env_id"]) ```
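A hedged sketch of evaluating the agent greedily, assuming the pickle follows the Deep RL course layout (a dict exposing `qtable` and `env_id`) and the classic Gym step API:

```python
import pickle
import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Assumption: the pickle is a dict with "qtable" and "env_id" keys,
# as in the Deep RL course template.
path = hf_hub_download(repo_id="shaaaanya/taxi-v3-v2", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state = env.reset()  # classic Gym API: reset() returns the observation only
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```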
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
62
2023-02-17T21:48:07Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - animal widget: - text: a photo of airobots robot in the Acropolis --- # DreamBooth model for the airobots concept trained by hulkster on the hulkster/airobotics dataset. This is a Stable Diffusion model fine-tuned on the airobots concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of airobots robot** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `robot` images for the animal theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('hulkster/airobots-robot') # the pipeline call needs a prompt; here we pass the instance prompt image = pipeline('a photo of airobots robot').images[0] image ``` ![alt text](preview-1.png)
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,862
2023-02-17T21:49:55Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/roberta-large-mrqa results: - task: type: Question-Answering dataset: type: mrqa name: MRQA metrics: - type: exact_match value: 74.08 name: Eval EM - type: f1 value: 84.38 name: Eval F1 - type: exact_match value: 62.20 name: Test EM - type: f1 value: 72.88 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** RoBERTa-Large-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [RoBERTa-Large](https://huggingface.co/roberta-large) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 29:16:06 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/roberta-large-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.9120895862579346, # 'start': 30, # 'end': 68, # 'answer': 'Machine Reading for Question Answering' # } ``` # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "roberta-large-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=8, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer. F1 score: A weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer. 
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models, since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa), which had already been trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs. test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- datasets: - relbert/relational_similarity model-index: - name: relbert/relbert-roberta-large-nce-d-semeval2012-t-rex results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8246428571428571 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6818181818181818 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6765578635014837 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7815453029460812 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.912 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6491228070175439 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6203703703703703 - task: name: Analogy Questions (ConceptNet Analogy) type: multiple-choice-qa dataset: name: ConceptNet Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3934563758389262 - task: name: Analogy Questions (TREX Analogy) type: multiple-choice-qa dataset: name: TREX Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6939890710382514 - task: name: Analogy Questions (NELL-ONE Analogy) type: multiple-choice-qa dataset: name: NELL-ONE Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6916666666666667 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9144191652855206 - name: F1 (macro) type: f1_macro value: 0.9090326046297017 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8525821596244132 - name: F1 (macro) type: f1_macro value: 0.6842287666651263 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6700975081256771 - name: F1 (macro) type: f1_macro value: 0.6583310456420314 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9554148987966892 - name: F1 (macro) type: f1_macro value: 0.8781071111856901 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification 
type: relation-classification metrics: - name: F1 type: f1 value: 0.8928235662801629 - name: F1 (macro) type: f1_macro value: 0.8921854652640585 --- # relbert/relbert-roberta-large-nce-d-semeval2012-t-rex RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/relational_similarity](https://huggingface.co/datasets/relbert/relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on fine-tuning). This model achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012-t-rex/raw/main/analogy.forward.json)): - Accuracy on SAT (full): 0.6818181818181818 - Accuracy on SAT: 0.6765578635014837 - Accuracy on BATS: 0.7815453029460812 - Accuracy on U2: 0.6491228070175439 - Accuracy on U4: 0.6203703703703703 - Accuracy on Google: 0.912 - Accuracy on ConceptNet Analogy: 0.3934563758389262 - Accuracy on T-Rex Analogy: 0.6939890710382514 - Accuracy on NELL-ONE Analogy: 0.6916666666666667 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012-t-rex/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9144191652855206 - Micro F1 score on CogALexV: 0.8525821596244132 - Micro F1 score on EVALution: 0.6700975081256771 - Micro F1 score on K&H+N: 0.9554148987966892 - Micro F1 score on ROOT09: 0.8928235662801629 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012-t-rex/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8246428571428571 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and load the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-large-nce-d-semeval2012-t-rex") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, ) ``` ### Training hyperparameters - model: roberta-large - max_length: 64 - epoch: 20 - batch: 64 - random_seed: 0 - lr: 5e-06 - lr_warmup: 10 - aggregation_mode: average_no_mask - data: relbert/relational_similarity - data_name: semeval2012_relational_similarity.t_rex_relational_similarity - exclude_relation: None - split: train - split_valid: validation - loss_function: nce - classification_loss: False - loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 10} - augment_negative_by_positive: False See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012-t-rex/raw/main/finetuning_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/). 
``` @inproceedings{ushio-etal-2021-distilling, title = "Distilling Relation Embeddings from Pretrained Language Models", author = "Ushio, Asahi and Camacho-Collados, Jose and Schockaert, Steven", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.712", doi = "10.18653/v1/2021.emnlp-main.712", pages = "9044--9062", abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert", } ```
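As a further usage sketch: since `get_embedding` returns a fixed-size relation vector, two word pairs can be compared with cosine similarity, where relationally similar pairs should score high. The word pairs below are illustrative, and the sketch assumes the returned vector is array-convertible, as the usage example above suggests.

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-large-nce-d-semeval2012-t-rex")
a = np.asarray(model.get_embedding(['Tokyo', 'Japan']))   # shape (n_dim,)
b = np.asarray(model.get_embedding(['Paris', 'France']))  # shape (n_dim,)

# Cosine similarity between the two relation embeddings.
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)
```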
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
229
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -125.60 +/- 77.76 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'z4x/ppo-LunarLander-v2-CleanRL' 'batch_size': 512 'minibatch_size': 128} ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2023-02-17T22:16:31Z
--- language: en thumbnail: https://huggingface.co/front/thumbnails/microsoft.png tags: - text-classification license: mit --- # AutoDisProxyT for Distilling Massive Neural Networks AutoDisProxyT is a distilled task-agnostic transformer model that leverages task transfer for learning a small universal model that can be applied to arbitrary tasks and languages, as outlined in the paper [Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models](https://proceedings.neurips.cc/paper_files/paper/2022/file/b7c12689a89e98a61bcaa65285a41b7c-Paper-Conference.pdf). This AutoDisProxyT checkpoint with **7** layers, **160** hidden size, and **10** attention heads corresponds to **6.88 million** parameters and **0.27G** FLOPs. The following table shows the results on the GLUE dev set. | Models | #Params (M) | #FLOPs (G) | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | Avg | |----------------|--------|---------|------|------|------|------|------|------|--------|-------| | BERT | 109 | 11.2 | 84.5 | 91.7 | 91.3 | 68.6 | 93.2 | 87.3 | 53.5 | 82.2 | | BERT<sub>SMALL</sub> | 66 | 5.66 | 81.8 | 89.8 | 90.6 | 67.9 | 91.2 | 84.9 | 53.5 | 80.0 | | TruncatedBERT | 66 | 5.66 | 81.2 | 87.9 | 90.4 | 65.5 | 90.8 | 82.7 | 41.4 | 77.1 | | DistilBERT | 66 | 5.66 | 82.2 | 89.2 | 88.5 | 59.9 | 91.3 | 87.5 | 51.3 | 78.6 | | TinyBERT | 66 | 5.66 | 83.5 | 90.5 | 90.6 | 72.2 | 91.6 | 88.4 | 42.8 | 79.9 | | MiniLM | 66 | 5.66 | 84.0 | 91.0 | 91.0 | 71.5 | 92.0 | 88.4 | 49.2 | 81.0 | | AutoTinyBERT-KD-S1 | 30.0 | 1.69 | 82.3 | 89.7 | 89.9 | 71.1 | 91.4 | 88.5 | 47.3 | 80.0 | | DynaBERT | 37.7 | 1.81 | 82.3 | 88.5 | 90.4 | 63.2 | 92.0 | 81.4 | 43.7 | 76.4 | | NAS-BERT<sub>10</sub>| 10.0 | 2.30 | 76.4 | 86.3 | 88.5 | 66.6 | 88.6 | 79.1 | 34.0 | 74.2 | | AutoTinyBERT-KD-S4 | 66 | 5.66 | 76.0 | 85.5 | 86.9 | 64.9 | 86.8 | 81.4 | 20.4 | 71.7 | | NAS-BERT<sub>5</sub> | 66 | 5.66 | 74.4 | 84.9 | 85.8 | 66.6 | 87.3 | 79.6 | 19.8 | 71.2 | | **AutoDisProxyT** | 6.88 | 0.27 | 79.0 | 86.4 | 89.1 | 64.3 | 85.9 | 78.5 | 24.8 | 72.6 | Tested with `torch 1.6.0` If you use this checkpoint in your work, please cite: ``` latex @article{xu2022autodistil, title={AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models}, author={Xu, Dongkuan and Mukherjee, Subhabrata and Liu, Xiaodong and Dey, Debadeepta and Wang, Wenhui and Zhang, Xiang and Awadallah, Ahmed Hassan and Gao, Jianfeng}, journal={arXiv preprint arXiv:2201.12507}, year={2022} } ```
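As a quick sanity check on the reported size, the parameter count of a checkpoint can be verified with a few lines of PyTorch; the repo id below is a placeholder, since the card does not state one:

```python
from transformers import AutoModel

# Placeholder repo id -- substitute the actual AutoDisProxyT checkpoint.
model = AutoModel.from_pretrained("<autodisproxyt-checkpoint>")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")  # expected to print roughly 6.88M
```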
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
2023-02-17T22:21:08Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: SAC results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -0.35 +/- 0.10 name: mean_reward verified: false --- # **SAC** Agent playing **PandaReachDense-v2** This is a trained model of a **SAC** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the repo id and filename below are placeholders, since the card does not state them; substitute this model's actual Hub entry): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import SAC # Placeholder repo id and filename -- adjust to this model's Hub entry. checkpoint = load_from_hub(repo_id="<username>/sac-PandaReachDense-v2", filename="sac-PandaReachDense-v2.zip") model = SAC.load(checkpoint) ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2023-02-17T22:28:56Z
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 244.30 +/- 65.98 name: mean_reward verified: false --- # (CleanRL) **DQN** Agent Playing **CartPole-v1** This is a trained model of a DQN agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline_VIDEO.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQN_baseline_VIDEO]" python -m cleanrl_utils.enjoy --exp-name DQN_baseline_VIDEO --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline_VIDEO-seed1/raw/main/dqn.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline_VIDEO-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline_VIDEO-seed1/raw/main/poetry.lock poetry install --all-extras python dqn.py --exp-name DQN_baseline_VIDEO --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id CartPole-v1 --seed 1 --total-timesteps 100000 ``` # Hyperparameters ```python {'batch_size': 128, 'buffer_size': 10000, 'capture_video': False, 'cuda': True, 'end_e': 0.05, 'env_id': 'CartPole-v1', 'exp_name': 'DQN_baseline_VIDEO', 'exploration_fraction': 0.5, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.00025, 'learning_starts': 10000, 'save_model': True, 'seed': 1, 'start_e': 1, 'target_network_frequency': 500, 'tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 100000, 'track': True, 'train_frequency': 10, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
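For reference, the `start_e`, `end_e`, and `exploration_fraction` values above imply a linearly decaying exploration rate; a minimal sketch of that schedule (mirroring the `linear_schedule` helper used in CleanRL's DQN script) looks like this:

```python
def linear_schedule(start_e: float, end_e: float, duration: int, t: int) -> float:
    # Epsilon decays linearly from start_e to end_e over `duration` steps,
    # then stays at end_e.
    slope = (end_e - start_e) / duration
    return max(slope * t + start_e, end_e)

# With the hyperparameters above: decay over 0.5 * 100000 = 50000 steps.
print(linear_schedule(1.0, 0.05, 50000, 0))      # 1.0
print(linear_schedule(1.0, 0.05, 50000, 25000))  # 0.525
print(linear_schedule(1.0, 0.05, 50000, 75000))  # 0.05
```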
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
2023-02-17T22:41:47Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: parsasam/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
CAUKiel/JavaBERT
[ "pytorch", "safetensors", "bert", "fill-mask", "code", "arxiv:2110.10404", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388
null
--- language: en thumbnail: http://www.huggingtweets.com/anthrophobe1/1676674313589/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1619867533640605697/rGD6NShu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">.</div> <div style="text-align: center; font-size: 14px;">@anthrophobe1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from @anthrophobe1. | Data | @anthrophobe1 | | --- | --- | | Tweets downloaded | 1603 | | Retweets | 247 | | Short tweets | 128 | | Tweets kept | 1228 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/v2ott2rf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anthrophobe1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/o3d9xnm1) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/o3d9xnm1/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/anthrophobe1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
CBreit00/DialoGPT_small_Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-17T22:54:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
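Since the card does not yet show usage, here is a minimal sketch of querying the fine-tuned checkpoint with the `transformers` pipeline; the repo id is a placeholder for wherever this model is hosted:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub location of this checkpoint.
qa = pipeline("question-answering", model="<username>/bert-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the squad dataset.",
)
print(result["answer"])
```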
CLAck/en-km
[ "pytorch", "marian", "text2text-generation", "transformers", "translation", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2023-02-17T22:58:59Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: ArtYac/ppo-Pyramids 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
dccuchile/albert-tiny-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
Access to model CQTTS/NZ is restricted and you are not in the authorized list. Visit https://huggingface.co/CQTTS/NZ to ask for access.
dccuchile/albert-large-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
75
null
--- language: - en license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Medium.en - genevera results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium.en - genevera This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 100 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
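A minimal transcription sketch (the checkpoint path below is a placeholder for the training output directory, or the Hub repo id this model was pushed to):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="whisper-medium-en-genevera")  # placeholder path
result = asr("sample.wav")  # any local audio file; it is decoded and resampled to 16 kHz internally
print(result["text"])
```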
Chakita/KannadaBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v2
      type: PandaReachDense-v2
    metrics:
    - type: mean_reward
      value: -4.92 +/- 2.45
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders; fill in the repo this model was pushed to):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="<your-username>/a2c-PandaReachDense-v2",  # placeholder repo id
    filename="a2c-PandaReachDense-v2.zip",             # placeholder filename
)
model = A2C.load(checkpoint)
```
Chan/distilroberta-base-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language:
- en
metrics:
- f1
- accuracy
pipeline_tag: text-classification
widget:
- text: "Every woman wants to be a model. It's codeword for 'I get everything for free and people want me'"
---

### distilbert-base-sexism-detector
This is a fine-tuned model of distilbert-base on the Explainable Detection of Online Sexism (EDOS) dataset. It is intended to be used as a classification model for identifying tweets (0 - not sexist; 1 - sexist).

**This is a light model with an 81.2 F1 score. Use this model for fast prediction using the online API; if you would like to use our best model (86.3 F1 score), use this [link](https://huggingface.co/NLP-LTU/BERTweet-large-sexism-detector).**

Classification examples (use these examples in the Hosted Inference API in the right panel):

|Prediction|Tweet|
|-----|--------|
|sexist |Every woman wants to be a model. It's codeword for "I get everything for free and people want me" |
|not sexist |basically I placed more value on her than I should then?|

# More Details
For more details about the datasets and eval results, see (we will update the page with our paper link)

# How to use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model = AutoModelForSequenceClassification.from_pretrained('NLP-LTU/distilbert-sexism-detector')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# The pipeline returns one dict per input: {'label': ..., 'score': ...};
# the label string comes from the checkpoint's id2label config (0 - not sexist; 1 - sexist)
prediction = classifier("Every woman wants to be a model. It's codeword for 'I get everything for free and people want me'")[0]
print(prediction)
```

```
              precision    recall  f1-score   support

  not sexist     0.9000    0.9264    0.9130      3030
      sexist     0.7469    0.6784    0.7110       970

    accuracy                         0.8662      4000
   macro avg     0.8234    0.8024    0.8120      4000
weighted avg     0.8628    0.8662    0.8640      4000
```
ChauhanVipul/BERT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-18T09:25:37Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Ahmade/bert-fine-tuned-cola2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Ahmade/bert-fine-tuned-cola2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3115 - Validation Loss: 0.4341 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5109 | 0.4635 | 0 | | 0.3115 | 0.4341 | 1 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
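As a minimal usage sketch (assuming, from the model name, a CoLA-style acceptability classifier; the card itself only says "an unknown dataset"):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Ahmade/bert-fine-tuned-cola2")
model = TFAutoModelForSequenceClassification.from_pretrained("Ahmade/bert-fine-tuned-cola2")

inputs = tokenizer("This sentence are wrong.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class id; label meaning depends on the training data
```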
CheonggyeMountain-Sherpa/kogpt-trinity-poem
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.706732347314747
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
Evaluating with gymnasium gives a weird score value: 814.92 +/- 28.41.

## Usage

```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper from the Deep RL Course notebook
model = load_from_hub(repo_id="Yelinz/q-Taxi-v3-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
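As a minimal evaluation sketch (assuming the downloaded pickle stores the Q-table under a `qtable` key, as in the Deep RL Course helper):

```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```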
ChrisVCB/DialoGPT-medium-cmjs
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-02-18T10:54:25Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 278.66 +/- 18.45
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders; fill in the repo this model was pushed to):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="<your-username>/ppo-LunarLander-v2",  # placeholder repo id
    filename="ppo-LunarLander-v2.zip",             # placeholder filename
)
model = PPO.load(checkpoint)
```
ChristianOrr/madnet_keras
[ "tensorboard", "dataset:flyingthings-3d", "dataset:kitti", "arxiv:1810.05424", "vision", "deep-stereo", "depth-estimation", "Tensorflow2", "Keras", "license:apache-2.0" ]
depth-estimation
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-18T11:05:53Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper from the Deep RL Course notebook
model = load_from_hub(repo_id="parsasam/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
ChristopherA08/IndoELECTRA
[ "pytorch", "electra", "pretraining", "id", "dataset:oscar", "transformers" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-02-18T11:06:08Z
# TorToiSe Tortoise is a text-to-speech program built with the following priorities: 1. Strong multi-voice capabilities. 2. Highly realistic prosody and intonation. This repo contains all the code needed to run Tortoise TTS in inference mode. ### New features #### v2.1; 2022/5/2 - Added ability to produce totally random voices. - Added ability to download voice conditioning latent via a script, and then use a user-provided conditioning latent. - Added ability to use your own pretrained models. - Refactored directory structures. - Performance improvements & bug fixes. ## What's in a name? I'm naming my speech-related repos after Mojave desert flora and fauna. Tortoise is a bit tongue in cheek: this model is insanely slow. It leverages both an autoregressive decoder **and** a diffusion decoder; both known for their low sampling rates. On a K80, expect to generate a medium sized sentence every 2 minutes. ## Demos See [this page](http://nonint.com/static/tortoise_v2_examples.html) for a large list of example outputs. ## Usage guide ### Colab Colab is the easiest way to try this out. I've put together a notebook you can use here: https://colab.research.google.com/drive/1wVVqUPqwiDBUVeWWOUNglpGhU3hg_cbR?usp=sharing ### Installation If you want to use this on your own computer, you must have an NVIDIA GPU. First, install pytorch using these instructions: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/) Then: ```shell git clone https://github.com/neonbjb/tortoise-tts.git cd tortoise-tts python setup.py install ``` ### do_tts.py This script allows you to speak a single phrase with one or more voices. ```shell python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast ``` ### read.py This script provides tools for reading large amounts of text. ```shell python tortoise/read.py --textfile <your text to be read> --voice random ``` This will break up the textfile into sentences, and then convert them to speech one at a time. It will output a series of spoken clips as they are generated. Once all the clips are generated, it will combine them into a single file and output that as well. Sometimes Tortoise screws up an output. You can re-generate any bad clips by re-running `read.py` with the --regenerate argument. ### API Tortoise can be used programmatically, like so: ```python reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths] tts = api.TextToSpeech() pcm_audio = tts.tts_with_preset("your text here", reference_clips, preset='fast') ``` ## Voice customization guide Tortoise was specifically trained to be a multi-speaker model. It accomplishes this by consulting reference clips. These reference clips are recordings of a speaker that you provide to guide speech generation. These clips are used to determine many properties of the output, such as the pitch and tone of the voice, speaking speed, and even speaking defects like a lisp or stuttering. The reference clip is also used to determine non-voice related aspects of the audio output like volume, background noise, recording quality and reverb. ### Random voice I've included a feature which randomly generates a voice. These voices don't actually exist and will be random every time you run it. The results are quite fascinating and I recommend you play around with it! You can use the random voice by passing in 'random' as the voice name. Tortoise will take care of the rest. 
For those in the ML space: this is created by projecting a random vector onto the voice conditioning latent space.

### Provided voices

This repo comes with several pre-packaged voices. You will be familiar with many of them. :)

Most of the provided voices were not found in the training set. Experimentally, it seems that voices from the training set produce more realistic outputs than those outside of the training set. Any voice prepended with "train" came from the training set.

### Adding a new voice

To add new voices to Tortoise, you will need to do the following:

1. Gather audio clips of your speaker(s). Good sources are YouTube interviews (you can use youtube-dl to fetch the audio), audiobooks or podcasts. Guidelines for good clips are in the next section.
2. Cut your clips into ~10 second segments. You want at least 3 clips. More is better, but I only experimented with up to 5 in my testing.
3. Save the clips as WAV files with floating point format and a 22,050 sample rate.
4. Create a subdirectory in voices/
5. Put your clips in that subdirectory.
6. Run tortoise utilities with --voice=<your_subdirectory_name>.

### Picking good reference clips

As mentioned above, your reference clips have a profound impact on the output of Tortoise. Following are some tips for picking good clips:

1. Avoid clips with background music, noise or reverb. These clips were removed from the training dataset. Tortoise is unlikely to do well with them.
2. Avoid speeches. These generally have distortion caused by the amplification system.
3. Avoid clips from phone calls.
4. Avoid clips that have excessive stuttering, stammering or words like "uh" or "like" in them.
5. Try to find clips that are spoken in the way you wish your output to sound. For example, if you want to hear your target voice read an audiobook, try to find clips of them reading a book.
6. The text being spoken in the clips does not matter, but diverse text does seem to perform better.

## Advanced Usage

### Generation settings

Tortoise is primarily an autoregressive decoder model combined with a diffusion model. Both of these have a lot of knobs that can be turned that I've abstracted away for the sake of ease of use. I did this by generating thousands of clips using various permutations of the settings and using a metric for voice realism and intelligibility to measure their effects. I've set the defaults to the best overall settings I was able to find. For specific use-cases, it might be effective to play with these settings (and it's very likely that I missed something!)

These settings are not available in the normal scripts packaged with Tortoise. They are available, however, in the API. See ```api.tts``` for a full list.

### Prompt engineering

Some people have discovered that it is possible to do prompt engineering with Tortoise! For example, you can evoke emotion by including things like "I am really sad," before your text. I've built an automated redaction system that you can use to take advantage of this. It works by attempting to redact any text in the prompt surrounded by brackets. For example, the prompt "\[I am really sad,\] Please feed me." will only speak the words "Please feed me" (with a sad tonality).

### Playing with the voice latent

Tortoise ingests reference clips by feeding them individually through a small submodel that produces a point latent, then taking the mean of all of the produced latents.
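A minimal sketch of that latent pipeline (import paths follow the repo layout shown in the installation section; the clip paths and output filename are placeholders):

```python
import torch
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()

# Load a few reference clips and compute their averaged conditioning latents
reference_clips = [load_audio(p, 22050) for p in ["clip1.wav", "clip2.wav", "clip3.wav"]]
conditioning_latents = tts.get_conditioning_latents(reference_clips)  # (autoregressive_latent, diffusion_latent)

# Persist them as the single .pth tuple described below, ready for a voices/<name>/ subdirectory
torch.save(conditioning_latents, "myvoice.pth")
```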
The experimentation I have done has indicated that these point latents are quite expressive, affecting everything from tone to speaking rate to speech abnormalities. This lends itself to some neat tricks. For example, you can feed two different voices to Tortoise and it will output what it thinks the "average" of those two voices sounds like.

#### Generating conditioning latents from voices

Use the script `get_conditioning_latents.py` to extract conditioning latents for a voice you have installed. This script will dump the latents to a .pth pickle file. The file will contain a single tuple, (autoregressive_latent, diffusion_latent).

Alternatively, use the `api.TextToSpeech.get_conditioning_latents()` method to fetch the latents.

#### Using raw conditioning latents to generate speech

After you've played with them, you can use them to generate speech by creating a subdirectory in voices/ with a single ".pth" file containing the pickled conditioning latents as a tuple (autoregressive_latent, diffusion_latent).

### Send me feedback!

Probabilistic models like Tortoise are best thought of as an "augmented search" - in this case, through the space of possible utterances of a specific string of text. The impact of community involvement in perusing these spaces (such as is being done with GPT-3 or CLIP) has really surprised me. If you find something neat that you can do with Tortoise that isn't documented here, please report it to me! I would be glad to publish it to this page.

## Tortoise-detect

Out of concerns that this model might be misused, I've built a classifier that tells the likelihood that an audio clip came from Tortoise. This classifier can be run on any computer; usage is as follows:

```commandline
python tortoise/is_this_from_tortoise.py --clip=<path_to_suspicious_audio_file>
```

This model has 100% accuracy on the contents of the results/ and voices/ folders in this repo. Still, treat this classifier as a "strong signal". Classifiers can be fooled and it is likewise not impossible for this classifier to exhibit false positives.

## Model architecture

Tortoise TTS is inspired by OpenAI's DALLE, applied to speech data and using a better decoder. It is made up of 5 separate models that work together. I've assembled a write-up of the system architecture here: [https://nonint.com/2022/04/25/tortoise-architectural-design-doc/](https://nonint.com/2022/04/25/tortoise-architectural-design-doc/)

## Training

These models were trained on my "homelab" server with 8 RTX 3090s over the course of several months. They were trained on a dataset consisting of ~50k hours of speech data, most of which was transcribed by [ocotillo](http://www.github.com/neonbjb/ocotillo). Training was done on my own [DLAS](https://github.com/neonbjb/DL-Art-School) trainer.

I currently do not have plans to release the training configurations or methodology. See the next section.

## Ethical Considerations

Tortoise v2 works considerably better than I had planned. When I began hearing some of the outputs of the last few versions, I began wondering whether or not I had an ethically unsound project on my hands. The ways in which a voice-cloning text-to-speech system could be misused are many. It doesn't take much creativity to think up how.

After some thought, I have decided to go forward with releasing this. Following are the reasons for this choice:

1. It is primarily good at reading books and speaking poetry. Other forms of speech do not work well.
2.
It was trained on a dataset which does not have the voices of public figures. While it will attempt to mimic these voices if they are provided as references, it does not do so in such a way that most humans would be fooled.
3. The above points could likely be resolved by scaling up the model and the dataset. For this reason, I am currently withholding details on how I trained the model, pending community feedback.
4. I am releasing a separate classifier model which will tell you whether a given audio clip was generated by Tortoise or not. See `tortoise-detect` above.
5. If I, a tinkerer with a BS in computer science and a ~$15k computer, can build this, then any motivated corporation or state can as well. I would prefer that it be in the open and everyone know the kinds of things ML can do.

### Diversity

The diversity expressed by ML models is strongly tied to the datasets they were trained on. Tortoise was trained primarily on a dataset consisting of audiobooks. I made no effort to balance diversity in this dataset. For this reason, Tortoise will be particularly poor at generating the voices of minorities or of people who speak with strong accents.

## Looking forward

Tortoise v2 is about as good as I think I can do in the TTS world with the resources I have access to. A phenomenon that happens when training very large models is that as parameter count increases, the communication bandwidth needed to support distributed training of the model increases multiplicatively. On enterprise-grade hardware, this is not an issue: GPUs are attached together with exceptionally wide buses that can accommodate this bandwidth. I cannot afford enterprise hardware, though, so I am stuck.

I want to mention here that I think Tortoise could be a **lot** better. The three major components of Tortoise are either vanilla Transformer Encoder stacks or Decoder stacks. Both of these types of models have a rich experimental history with scaling in the NLP realm. I see no reason to believe that the same is not true of TTS. The largest model in Tortoise v2 is considerably smaller than GPT-2 large. It is 20x smaller than the original DALLE transformer. Imagine what a TTS model trained at or near GPT-3 or DALLE scale could achieve.

If you are an ethical organization with computational resources to spare and are interested in seeing what this model could do if properly scaled out, please reach out to me! I would love to collaborate on this.

## Acknowledgements

This project has garnered more praise than I expected. I am standing on the shoulders of giants, though, and I want to credit a few of the amazing folks in the community that have helped make this happen:

- Hugging Face, who wrote the GPT model and the generate API used by Tortoise, and who hosts the model weights.
- [Ramesh et al](https://arxiv.org/pdf/2102.12092.pdf) who authored the DALLE paper, which is the inspiration behind Tortoise.
- [Nichol and Dhariwal](https://arxiv.org/pdf/2102.09672.pdf) who authored (the revision of) the code that drives the diffusion model.
- [Jang et al](https://arxiv.org/pdf/2106.07889.pdf) who developed and open-sourced univnet, the vocoder this repo uses.
- [lucidrains](https://github.com/lucidrains) who writes awesome open source pytorch models, many of which are used here.
- [Patrick von Platen](https://huggingface.co/patrickvonplaten) whose guides on setting up wav2vec were invaluable to building my dataset.

## Notice

Tortoise was built entirely by me using my own hardware.
My employer was not involved in any facet of Tortoise's development. If you use this repo or the ideas therein for your research, please cite it! A bibtex entry can be found in the right pane on GitHub.
Chuah/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2023-02-18T11:07:05Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper from the Deep RL Course notebook
model = load_from_hub(repo_id="parsasam/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Chun/DialoGPT-large-dailydialog
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2023-02-18T11:12:17Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-finetuned-stocks-event-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-stocks-event-all This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6143 - Rouge1: 0.4336 - Rouge2: 0.3906 - Rougel: 0.4328 - Rougelsum: 0.4317 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 6.4821 | 1.0 | 97 | 2.0502 | 0.1648 | 0.0891 | 0.1587 | 0.1588 | | 2.3093 | 2.0 | 194 | 0.9278 | 0.3059 | 0.2526 | 0.3063 | 0.3048 | | 1.5634 | 3.0 | 291 | 0.7646 | 0.3348 | 0.2917 | 0.3334 | 0.3334 | | 1.2392 | 4.0 | 388 | 0.7398 | 0.3721 | 0.3193 | 0.3725 | 0.3721 | | 1.1755 | 5.0 | 485 | 0.6872 | 0.3498 | 0.2990 | 0.3496 | 0.3507 | | 1.0981 | 6.0 | 582 | 0.6685 | 0.3579 | 0.3256 | 0.3591 | 0.3566 | | 1.0238 | 7.0 | 679 | 0.6379 | 0.4261 | 0.3882 | 0.4278 | 0.4253 | | 1.0265 | 8.0 | 776 | 0.6143 | 0.4336 | 0.3906 | 0.4328 | 0.4317 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
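A minimal usage sketch (the checkpoint path is a placeholder for the Trainer output directory named above, or a Hub repo id; the input text is an arbitrary example):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mt5-base-finetuned-stocks-event-all")  # placeholder path
text = "Shares of ACME Corp fell 8% on Tuesday after the company cut its full-year revenue guidance."
print(summarizer(text, max_length=64)[0]["summary_text"])
```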
Chun/DialoGPT-small-dailydialog
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: ZhihongDeng/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ClaudeYang/awesome_fb_model
[ "pytorch", "bart", "text-classification", "dataset:multi_nli", "transformers", "zero-shot-classification" ]
zero-shot-classification
{ "architectures": [ "BartForSequenceClassification" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: jondister/Soccer_JD 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CodeMonkey98/distilroberta-base-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 33.70 +/- 27.76 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
CoffeeAddict93/gpt1-modest-proposal
[ "pytorch", "openai-gpt", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "OpenAIGPTLMHeadModel" ], "model_type": "openai-gpt", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
### Safety label classifier
A model trained on []() to classify prompts into different safety labels. For more information refer [here]()

### Run Model

```python
import torch
from datasets import load_dataset
from torch.utils.data import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "shahules786/prosocial-classifier"  # assumption: the classifier repo also hosts the tokenizer
MAXLEN = 256                                # assumption: maximum sequence length used at training time

dataset = load_dataset("allenai/prosocial-dialog")

class ProSocialDataset(Dataset):

    def __init__(self, split):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(MODEL)
        self.sep_token = self.tokenizer.sep_token
        self.dataset = dataset[split]
        self.label2id = {"__casual__": 0, "__needs_caution__": 1, "__needs_intervention__": 2,
                         "__probably_needs_caution__": 3, "__possibly_needs_caution__": 4}
        self.id2label = {v: k for k, v in self.label2id.items()}

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        # Walk back to the start of the current episode so the prompt keeps its dialogue history
        idx_start = idx
        end = self.dataset[max(0, idx_start - 1)]["episode_done"]
        while (not end) and (idx_start > 0):
            end = self.dataset[max(0, idx_start - 2)]["episode_done"]
            idx_start -= 1
        idx_start = max(0, idx_start)
        prev_context = [self.dataset[i]["context"] for i in range(idx_start, idx)]
        rots = self.dataset[idx]["rots"]
        context = self.dataset[idx]["context"] + self.sep_token + "".join(prev_context) + self.sep_token + "".join(rots)
        encoding = self.tokenizer(
            context,
            max_length=MAXLEN,
            add_special_tokens=True,
            truncation=True,
            padding='max_length',
            return_tensors="pt",
        )
        encoding = {k: v.squeeze(0) for k, v in encoding.items()}
        encoding["labels"] = torch.tensor(self.label2id[self.dataset[idx]["safety_label"]])
        return encoding

model = AutoModelForSequenceClassification.from_pretrained("shahules786/prosocial-classifier")
test_set = ProSocialDataset(split="test")
for item in test_set:
    pred = model(input_ids=item["input_ids"].unsqueeze(0),
                 attention_mask=item["attention_mask"].unsqueeze(0)).logits
```

### Citations
```
@inproceedings{
    kim2022prosocialdialog,
    title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
    author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
    booktitle={EMNLP},
    year=2022
}
```
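A single-prompt sketch of the same classifier (the id2label mapping mirrors the label2id dict above and is an assumption about the checkpoint's config; the prompt is an arbitrary example):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("shahules786/prosocial-classifier")
tokenizer = AutoTokenizer.from_pretrained("shahules786/prosocial-classifier")  # assumption: tokenizer ships with the repo

id2label = {0: "__casual__", 1: "__needs_caution__", 2: "__needs_intervention__",
            3: "__probably_needs_caution__", 4: "__possibly_needs_caution__"}

inputs = tokenizer("I feel like screaming at my coworker.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(id2label[int(logits.argmax(dim=-1))])
```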
CoffeeAddict93/gpt2-medium-call-of-the-wild
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -26.69 +/- 86.05 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 2000000 'learning_rate': 0.00025 'num_envs': 16 'num_steps': 1024 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'saikiranp/ppo-LunarLandr-v2-CleanRL' 'batch_size': 16384 'minibatch_size': 4096} ```
CogComp/bart-faithful-summary-detector
[ "pytorch", "jax", "bart", "text-classification", "en", "dataset:xsum", "transformers", "xsum", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BartForSequenceClassification" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": 1, "max_length": 128, "min_length": 12, "no_repeat_ngram_size": null, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
234
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: ibadrehman/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ComCom/gpt2-medium
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: gpl-3.0 language: - en - zh pipeline_tag: image-to-image tags: - art - image generation - live3d - anime --- # Live3D v2.2 (AttNR) [Windows Bundle ](https://huggingface.co/transpchan/Live3D-v2-windowsbundle/resolve/main/Live3D.zip) | [一键启动Windows懒人包](https://huggingface.co/transpchan/Live3D-v2-windowsbundle/resolve/main/Live3D.zip) [![image](https://github.com/transpchan/transpchan.github.io/blob/main/live3d/main.png?raw=true)](https://transpchan.github.io/live3d) ### This is only a mirror, not the original repo! Please go to [Github](https://github.com/transpchan/Live3D-v2) for latest updates. Neural Rendering with Attention: An Incremental Improvement for Anime Character Animation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/transpchan/Live3D-v2/blob/main/notebook.ipynb) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7652719.svg)](https://doi.org/10.5281/zenodo.7652719) [![Download](https://img.shields.io/badge/Download-Windows-green.svg)](https://huggingface.co/transpchan/Live3D-v2-windowsbundle/resolve/main/Live3D.zip) [![Code](https://img.shields.io/badge/Code-GPLv3-green.svg)](https://github.com/transpchan/Live3D-v2/) [Discord](https://discord.gg/Md3cykbn36) | [Twitter](https://twitter.com/transpchan) | [Bilibili](https://space.bilibili.com/6418569) | [Zhihu](https://zhuanlan.zhihu.com/p/565391665) [![image](https://user-images.githubusercontent.com/89829658/219866290-2fe721c3-d3f7-4fdb-86dc-a8d1db385b12.png)](https://huggingface.co/transpchan/Live3D-v2-windowsbundle/resolve/main/Live3D.zip)
ComCom/gpt2
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 520.50 +/- 206.17 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga brand25 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga brand25 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga brand25 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Contrastive-Tension/BERT-Distil-CT-STSb
[ "pytorch", "tf", "distilbert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "DistilBertModel" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: ibadrehman/ppo-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CouchCat/ma_sa_v7_distil
[ "pytorch", "distilbert", "text-classification", "en", "transformers", "sentiment-analysis", "license:mit" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38
null
--- tags: - conversational --- # My Awesome Model
Coyotl/DialoGPT-test-last-arthurmorgan
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - conversational --- # My Awesome Model
Crasher222/kaggle-comp-test
[ "pytorch", "bert", "text-classification", "en", "dataset:Crasher222/autonlp-data-kaggle-test", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: nlpbook_distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: split metrics: - name: Accuracy type: accuracy value: 0.929 - name: F1 type: f1 value: 0.9290966283051001 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nlpbook_distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2091 - Accuracy: 0.929 - F1: 0.9291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8218 | 1.0 | 250 | 0.3043 | 0.9085 | 0.9055 | | 0.2416 | 2.0 | 500 | 0.2091 | 0.929 | 0.9291 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.10.3
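A minimal usage sketch (the checkpoint path is a placeholder for the Trainer output directory named above, or a Hub repo id; label names come from the emotion dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="nlpbook_distilbert-base-uncased-finetuned-emotion")  # placeholder path
print(classifier("I can't believe how happy this made me!"))
# e.g. [{'label': 'joy', 'score': ...}]
```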
CrayonShinchan/bart_fine_tune_test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m --- The original model is available at https://civitai.com/models/10106/merongmix
CrayonShinchan/fine_tune_try_1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -180.12 +/- 71.93 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
Culmenus/XLMR-ENIS-finetuned-ner
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:mim_gold_ner", "transformers", "generated_from_trainer", "license:agpl-3.0", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - Gholamreza/pquad model-index: - name: distilbert-fa-zwnj-base-finetuned-pquad results: [] language: - fa library_name: transformers pipeline_tag: question-answering widget: - text: اسم من چیست؟ context: >- من، غلامرضا دار، 23 ساله از بندرعباس هستم. هم اکنون در دانشگاه امیرکبیر مشغول به تحصیل در رشته هوش مصنوعی می باشم. example_title: "اسم" - text: غلامرضا چند سال دارد؟ context: >- من، غلامرضا دار، 23 ساله از بندرعباس هستم. هم اکنون در دانشگاه امیرکبیر مشغول به تحصیل در رشته هوش مصنوعی می باشم. example_title: "سن" - text: نام خانوادگی غلامرضا چیست؟ context: >- من، غلامرضا دار، 23 ساله از بندرعباس هستم. هم اکنون در دانشگاه امیرکبیر مشغول به تحصیل در رشته هوش مصنوعی می باشم. example_title: "نام خانوادگی" - text: غلامرضا در چه دانشگاهی تحصیل میکند؟ context: >- من، غلامرضا دار، 23 ساله از بندرعباس هستم. هم اکنون در دانشگاه امیرکبیر مشغول به تحصیل در رشته هوش مصنوعی می باشم. example_title: "دانشگاه" metrics: - f1 - exact_match --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-fa-zwnj-base-finetuned-pquad This model is a fine-tuned version of [HooshvareLab/distilbert-fa-zwnj-base](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base) on the pquad dataset. ## Results ### Test set | name | value | |------|-------| | exact | 66.38340414896275 | | f1 | 80.23760220987583 | | total | 8002 | | HasAns_exact | 60.13469119579501 | | HasAns_f1 | 78.34449620292781 | | HasAns_total | 6088 | | NoAns_exact | 86.25914315569489 | | NoAns_f1 | 86.25914315569489 | | NoAns_total | 1914 | | best_exact | 66.38340414896275 | | best_exact_thresh | 0.0 | | best_f1 | 80.23760220987589 | | best_f1_thresh | 0.0 | ### Validation set | name | value | |------|-------| | exact | 64.65646940822468 | | f1 | 78.88641788270802 | | total | 7976 | | HasAns_exact | 57.54795663052544 | | HasAns_f1 | 76.4800782372771 | | HasAns_total | 5995 | | NoAns_exact | 86.16860171630489 | | NoAns_f1 | 86.16860171630489 | | NoAns_total | 1981 | | best_exact | 64.65646940822468 | | best_exact_thresh | 0.0 | | best_f1 | 78.88641788270819 | | best_f1_thresh | 0.0 | ## Model description This model uses [distilbert-fa-zwnj-base](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base) as its base and fine-tunes it on the [pquad](https://huggingface.co/datasets/Gholamreza/pquad) dataset. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1299 | 1.0 | 4003 | 1.1306 | | 0.845 | 2.0 | 8006 | 1.0839 | | 0.639 | 3.0 | 12009 | 1.1302 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
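A minimal usage sketch for the card above. The Hub id is an assumption inferred from the card name and the dataset owner; adjust it if the checkpoint lives elsewhere:
```python
from transformers import pipeline

# Hypothetical Hub path inferred from the card name; replace with the real repository id
qa = pipeline("question-answering", model="Gholamreza/distilbert-fa-zwnj-base-finetuned-pquad")

# First widget example from the card (context shortened)
context = "من، غلامرضا دار، 23 ساله از بندرعباس هستم."
print(qa(question="اسم من چیست؟", context=context)["answer"])
```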
Culmenus/checkpoint-168500-finetuned-de-to-is_nr2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/relational_similarity model-index: - name: relbert/relbert-roberta-large-nce-c-semeval2012-nell results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8470634920634921 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6898395721925134 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6973293768545994 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8243468593663146 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.944 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6929824561403509 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6597222222222222 - task: name: Analogy Questions (ConceptNet Analogy) type: multiple-choice-qa dataset: name: ConceptNet Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.436241610738255 - task: name: Analogy Questions (TREX Analogy) type: multiple-choice-qa dataset: name: TREX Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7103825136612022 - task: name: Analogy Questions (NELL-ONE Analogy) type: multiple-choice-qa dataset: name: NELL-ONE Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8066666666666666 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9177339159258702 - name: F1 (macro) type: f1_macro value: 0.9136940646765112 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8781690140845071 - name: F1 (macro) type: f1_macro value: 0.7339306967191377 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.704225352112676 - name: F1 (macro) type: f1_macro value: 0.6885755780161168 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9567364540585658 - name: F1 (macro) type: f1_macro value: 0.8700344787938971 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification 
type: relation-classification metrics: - name: F1 type: f1 value: 0.9166405515512378 - name: F1 (macro) type: f1_macro value: 0.9144585254310948 --- # relbert/relbert-roberta-large-nce-c-semeval2012-nell RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/relational_similarity](https://huggingface.co/datasets/relbert/relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on fine-tuning). This model achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-c-semeval2012-nell/raw/main/analogy.forward.json)): - Accuracy on SAT (full): 0.6898395721925134 - Accuracy on SAT: 0.6973293768545994 - Accuracy on BATS: 0.8243468593663146 - Accuracy on U2: 0.6929824561403509 - Accuracy on U4: 0.6597222222222222 - Accuracy on Google: 0.944 - Accuracy on ConceptNet Analogy: 0.436241610738255 - Accuracy on T-Rex Analogy: 0.7103825136612022 - Accuracy on NELL-ONE Analogy: 0.8066666666666666 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-c-semeval2012-nell/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9177339159258702 - Micro F1 score on CogALexV: 0.8781690140845071 - Micro F1 score on EVALution: 0.704225352112676 - Micro F1 score on K&H+N: 0.9567364540585658 - Micro F1 score on ROOT09: 0.9166405515512378 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-c-semeval2012-nell/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8470634920634921 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-large-nce-c-semeval2012-nell") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, ) ``` ### Training hyperparameters - model: roberta-large - max_length: 64 - epoch: 20 - batch: 64 - random_seed: 0 - lr: 5e-06 - lr_warmup: 10 - aggregation_mode: average_no_mask - data: relbert/relational_similarity - data_name: nell_relational_similarity.semeval2012_relational_similarity - exclude_relation: None - split: train - split_valid: validation - loss_function: nce - classification_loss: False - loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 10} - augment_negative_by_positive: False See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-c-semeval2012-nell/raw/main/finetuning_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/).
``` @inproceedings{ushio-etal-2021-distilling, title = "Distilling Relation Embeddings from Pretrained Language Models", author = "Ushio, Asahi and Camacho-Collados, Jose and Schockaert, Steven", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.712", doi = "10.18653/v1/2021.emnlp-main.712", pages = "9044--9062", abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert", } ```
Culmenus/opus-mt-de-is-finetuned-de-to-is
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- datasets: - lambdalabs/pokemon-blip-captions pipeline_tag: unconditional-image-generation tags: - Image Generation - Diffusers ---
CurtisBowser/DialoGPT-small-sora
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-02-18T16:32:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 236.96 +/- 76.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
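The card above leaves the usage snippet as a TODO. A minimal sketch of the standard loading-and-evaluation pattern follows; the repo id and filename are placeholders, not taken from the card:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename; point them at the repository hosting this checkpoint
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Quick sanity check of the reported mean reward
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```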
D-Keqi/espnet_asr_train_asr_streaming_transformer_raw_en_bpe500_sp_valid.acc.ave
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: dgodderis/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
D3vil/DialoGPT-smaall-harrypottery
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.40 +/- 20.38 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
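The same TODO stub appears in this card. Complementing the loading sketch above, here is a hedged example of rolling out the loaded policy; the repo id and filename are again placeholders:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # placeholders
model = PPO.load(checkpoint)

# Run one greedy episode using the classic 4-tuple gym step API that SB3 used at the time
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```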
D3xter1922/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 607.00 +/- 169.87 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rdesarz -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rdesarz -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rdesarz ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
D3xter1922/electra-base-discriminator-finetuned-mnli
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: albert-base-v2-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.9891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.867 | 1.0 | 5540 | 0.8997 | | 0.6439 | 2.0 | 11080 | 0.8932 | | 0.4669 | 3.0 | 16620 | 0.9891 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
D4RL1NG/yes
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - text-to-image - image-to-image - diffusers ---
DARKVIP3R/DialoGPT-medium-Anakin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: creativeml-openrail-m thumbnail: >- https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/00031-1769428138-masterpiece%2C%20best%20quality%2C%20hatsune%20miku%2C%201girl%2C%20white%20shirt%2C%20blue%20necktie%2C%20bare%20shoulders%2C%20very%20detailed%20background%2C%20hands%20on%20ow.png tags: - stable-diffusion - text-to-image - safetensors - diffusers inference: true language: - en widget: - text: >- masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden example_title: example 1girl - text: >- masterpiece, best quality, 1boy, medium hair, blonde hair, blue eyes, bishounen, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden example_title: example 1boy library_name: diffusers --- ## Introducing SomethingV2.2 An updated version of this model can be found [here](https://huggingface.co/NoCrypt/SomethingV2_2) --- [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/NoCrypt/SomethingV2) <center><img src="https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/banner.webp" width="95%"/></center> <center><h1><b>SomethingV2</b></h1></center> <p align="center">Welcome to SomethingV2 - an anime latent diffusion model. This model is intended to produce vibrant but soft anime style images. </p> ## Recommended Settings - VAE: None (Baked in model) - Clip Skip: 2 - Sampler: DPM++ 2M Karras - CFG Scale: 7 - 12 - Negative Prompt: [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative) - For better results, using hires fix is a must.
- Hires upscaler: Latent (any variant, such as nearest-exact) - Resolution: At least 512x512 first pass, upscale up to 1500x1500 ## Example <img style="display:inline;margin:0;padding:0;" src="https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/00090-1829045217-masterpiece%20best%20quality%20hatsune%20miku%201girl%20white%20shirt%20blue%20necktie%20bare%20shoulders%20very%20detailed%20background%20hands%20on%20ow2473e4832c888be11494dab007c390c19c5b2f7d.png" width="32%"/> <img style="display:inline;margin:0;padding:0;" src="https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/00022-1769428138-masterpiece%2C%20best%20quality%2C%20hatsune%20miku%2C%201girl%2C%20white%20shirt%2C%20blue%20necktie%2C%20bare%20shoulders%2C%20very%20detailed%20background%2C%20hands%20on%20ow.png" width="32%"/> <img style="display:inline;margin:0;padding:0;" src="https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/00098-3514023396-masterpiece%2C%20best%20quality%2C%20hatsune%20miku%2C%201girl%2C%20white%20shirt%2C%20blue%20necktie%2C%20bare%20shoulders%2C%20very%20detailed%20background%2C%20cafe%2C%20angry.png" width="32%"/> <details><summary><big><b>Prompts</b></big></summary> ```yaml masterpiece, best quality, hatsune miku, 1girl, white shirt, blue necktie, bare shoulders, very detailed background, hands on own cheeks, open mouth, one eye closed, clenched teeth, smile Negative prompt: EasyNegative, tattoo, (shoulder tattoo:1.0), (number tattoo:1.3), frills Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1829045217, Size: 456x592, Model: somethingv2_1, Denoising strength: 0.53, Clip skip: 2, ENSD: 31337, Hires upscale: 1.65, Hires steps: 12, Hires upscaler: Latent (nearest-exact), Discard penultimate sigma: True ``` ```yaml masterpiece, best quality, hatsune miku, 1girl, white shirt, blue necktie, bare shoulders, very detailed background, hands on own cheeks, open mouth, eyez closed, clenched teeth, smile, arms behind back, Negative prompt: EasyNegative, tattoo, (shoulder tattoo:1.0), (number tattoo:1.3), frills Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1769428138, Size: 456x592, Model: somethingv2_1, Denoising strength: 0.53, Clip skip: 2, ENSD: 31337, Hires upscale: 1.65, Hires steps: 12, Hires upscaler: Latent (nearest-exact), Discard penultimate sigma: True ``` ```yaml masterpiece, best quality, hatsune miku, 1girl, white shirt, blue necktie, bare shoulders, very detailed background, cafe, angry, crossed arms, detached sleeves, light particles, Negative prompt: EasyNegative, tattoo, (shoulder tattoo:1.0), (number tattoo:1.3), frills Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3514023396, Size: 456x592, Model: somethingv2_1, Denoising strength: 0.53, Clip skip: 2, ENSD: 31337, Hires upscale: 1.65, Hires steps: 12, Hires upscaler: Latent (nearest-exact), Discard penultimate sigma: True ``` </details> ## FAQ ### Model differences? 
![](https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/xyz_grid-0003-4163886333-masterpiece%2C%20hatsune%20miku%2C%20white%20shirt%2C%20blue%20necktie%2C%20bare%20shoulders%2C%20detached%20sleeves%2C.png) <details><summary><big><b>Prompts</b></big></summary> ```yaml masterpiece, hatsune miku, white shirt, blue necktie, bare shoulders, detached sleeves, Negative prompt: EasyNegative Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4163886333, Size: 440x592, Model: -, Denoising strength: 0.53, Clip skip: 2, ENSD: 31337, Hires upscale: 1.65, Hires steps: 13, Hires upscaler: Latent (nearest-exact) ``` </details> ### Why are all the examples Miku? Because I love Miku. But here are other subjects: <img style="display:inline;margin:0;padding:0;" src="https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/00018-4018636341-masterpiece%2C%20best%20quality%2C%201girl%2C%20aqua%20eyes%2C%20baseball%20cap%2C%20blonde%20hair%2C%20closed%20mouth%2C%20earrings%2C%20green%20background%2C%20hat%2C%20hoop%20earr.png" width="49%"/> <img style="display:inline;margin:0;padding:0;" src="https://huggingface.co/NoCrypt/SomethingV2/resolve/main/imgs/00019-1334620477-masterpiece%2C%20best%20quality%2C%20landscape.png" width="49%"/> <details><summary><big><b>Prompts</b></big></summary> ```yaml masterpiece, best quality, 1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt Negative prompt: EasyNegative Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4018636341, Size: 440x592, Model: somethingv2, Denoising strength: 0.53, Clip skip: 2, ENSD: 31337, Hires upscale: 1.65, Hires steps: 13, Hires upscaler: Latent (nearest-exact) ``` ```yaml masterpiece, best quality, landscape Negative prompt: EasyNegative Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1334620477, Size: 440x592, Model: somethingv2, Denoising strength: 0.53, Clip skip: 2, ENSD: 31337, Hires upscale: 1.65, Hires steps: 13, Hires upscaler: Latent (nearest-exact) ``` </details>
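The card lists webui-oriented settings; a rough diffusers equivalent is sketched below. The repo id comes from the card's own links, while the fp16 choice and the mapping of "DPM++ 2M Karras" to DPMSolverMultistepScheduler with Karras sigmas are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("NoCrypt/SomethingV2", torch_dtype=torch.float16).to("cuda")
# Assumed diffusers counterpart of the card's "DPM++ 2M Karras" sampler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

# NOTE: EasyNegative is a textual-inversion embedding; to use it faithfully it would
# have to be loaded via pipe.load_textual_inversion, which this sketch omits.
image = pipe(
    "masterpiece, best quality, 1girl, brown hair, green eyes, autumn, garden",  # from the card's widget
    num_inference_steps=20,
    guidance_scale=7.0,
).images[0]
image.save("somethingv2_sample.png")
```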
DCU-NLP/bert-base-irish-cased-v1
[ "pytorch", "tf", "bert", "fill-mask", "transformers", "generated_from_keras_callback", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,244
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="JUNGU/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
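After the card's `load_from_hub` snippet (a helper from the Deep RL course, not defined in the card), the pickled dict can be rolled out greedily. A sketch follows, assuming the dict carries `qtable` and `env_id` keys as in the course convention:
```python
import gym
import numpy as np

# `model` is the dict returned by the card's load_from_hub snippet;
# the "qtable" / "env_id" keys follow the Deep RL course convention (an assumption)
env = gym.make(model["env_id"], is_slippery=False)  # attribute the card says to check
state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
env.close()
```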
DCU-NLP/electra-base-irish-cased-discriminator-v1
[ "pytorch", "electra", "pretraining", "ga", "transformers", "irish", "license:apache-2.0" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="jackmedda/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
DCU-NLP/electra-base-irish-cased-generator-v1
[ "pytorch", "electra", "fill-mask", "ga", "transformers", "irish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-02-18T17:03:24Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.44 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="JUNGU/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
DHBaek/xlm-roberta-large-korquad-mask
[ "pytorch", "xlm-roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "XLMRobertaForQuestionAnswering" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - Pong-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pong-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pong-PLE-v0 type: Pong-PLE-v0 metrics: - type: mean_reward value: -16.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pong-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**. To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
DJStomp/TestingSalvoNET
[ "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2023-02-18T17:15:04Z
--- tags: - generated_from_trainer model-index: - name: SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3 This model is a fine-tuned version of [SajjadAyoubi/xlm-roberta-large-fa-qa](https://huggingface.co/SajjadAyoubi/xlm-roberta-large-fa-qa) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4424 | 1.0 | 1500 | 2.0999 | | 1.8186 | 2.0 | 3000 | 1.2042 | | 1.2822 | 3.0 | 4500 | 0.8894 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
DKpro000/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - Summarization - generated_from_trainer datasets: - samsum model-index: - name: conversation-summ_longformer_bart_like results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # conversation-summ_longformer_bart_like This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:| | No log | 1.0 | 143 | 1.9126 | 42.0973 | 16.7856 | 33.784 | 37.7811 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
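A hedged usage sketch for the dialogue summarizer described above; the model path is hypothetical, since the card only records the local run name:
```python
from transformers import pipeline

# Hypothetical checkpoint path; the card gives only the local run name
summarizer = pipeline("summarization", model="conversation-summ_longformer_bart_like")

# Illustrative samsum-style dialogue
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=64, min_length=10)[0]["summary_text"])
```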
Daltcamalea01/Camaleaodalt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: loso_F04 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # loso_F04 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0791 - Wer: 1.4780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 9.9264 | 0.96 | 500 | 3.6742 | 1.0 | | 2.6962 | 1.91 | 1000 | 1.7830 | 2.6233 | | 1.1118 | 2.87 | 1500 | 0.5233 | 1.8458 | | 0.3692 | 3.82 | 2000 | 0.1670 | 1.2423 | | 0.1671 | 4.78 | 2500 | 0.1289 | 1.3700 | | 0.0897 | 5.74 | 3000 | 0.1031 | 1.5110 | | 0.0656 | 6.69 | 3500 | 0.0791 | 1.4780 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.13.1+cu116 - Datasets 1.18.3 - Tokenizers 0.13.2
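For completeness, a hedged transcription sketch for the fine-tuned wav2vec2 checkpoint above; the model path is hypothetical, as the card gives no Hub id:
```python
from transformers import pipeline

# Hypothetical path to the fine-tuned checkpoint; replace with the real location
asr = pipeline("automatic-speech-recognition", model="path/to/loso_F04")

print(asr("sample.wav")["text"])  # any 16 kHz mono audio file
```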
DannyMichael/ECU911
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-18T18:22:33Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="seungwoos/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Davlan/bert-base-multilingual-cased-finetuned-igbo
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
null
Davlan/bert-base-multilingual-cased-ner-hrl
[ "pytorch", "tf", "bert", "token-classification", "transformers", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
269,898
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: LucaReggiani/t5-small-nlpfinalproject4-xsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # LucaReggiani/t5-small-nlpfinalproject4-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.0688 - Validation Loss: 2.9609 - Train Rouge1: 22.9985 - Train Rouge2: 5.0413 - Train Rougel: 18.1856 - Train Rougelsum: 18.0816 - Train Gen Len: 18.67 - Epoch: 8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.98, 'epsilon': 1e-06, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 3.8921 | 3.2708 | 18.8870 | 3.0920 | 14.9668 | 14.9517 | 18.67 | 0 | | 3.5034 | 3.1209 | 21.5417 | 3.8130 | 16.5211 | 16.5045 | 18.37 | 1 | | 3.3763 | 3.0605 | 21.0710 | 3.6133 | 15.7808 | 15.7437 | 18.33 | 2 | | 3.2971 | 3.0305 | 21.6173 | 4.0001 | 16.2502 | 16.2302 | 18.5 | 3 | | 3.2452 | 3.0086 | 22.8085 | 4.9522 | 17.8831 | 17.7797 | 18.6 | 4 | | 3.1899 | 2.9920 | 22.7903 | 5.3026 | 17.8844 | 17.8651 | 18.58 | 5 | | 3.1514 | 2.9775 | 23.0533 | 5.3456 | 18.4312 | 18.3636 | 18.52 | 6 | | 3.1050 | 2.9686 | 23.0767 | 5.1264 | 18.4552 | 18.3503 | 18.54 | 7 | | 3.0688 | 2.9609 | 22.9985 | 5.0413 | 18.1856 | 18.0816 | 18.67 | 8 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
Davlan/distilbert-base-multilingual-cased-ner-hrl
[ "pytorch", "tf", "distilbert", "token-classification", "transformers", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
123,856
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Zangnan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Davlan/m2m100_418M-eng-yor-mt
[ "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "M2M100ForConditionalGeneration" ], "model_type": "m2m_100", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1004.35 +/- 86.89 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
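This card carries the same TODO stub as the PPO cards above. A minimal loading sketch under the usual assumptions (placeholder repo id and filename; `pybullet_envs` must be installed to register AntBulletEnv-v0):
```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")  # placeholders
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
```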
Davlan/mT5_base_yoruba_adr
[ "pytorch", "mt5", "text2text-generation", "arxiv:2003.10564", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Davlan/mbart50-large-eng-yor-mt
[ "pytorch", "mbart", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the pickle-loading helper from the Deep RL Course notebooks
# (it downloads the file with huggingface_hub and unpickles it).
model = load_from_hub(repo_id="Zangnan/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Davlan/mbart50-large-yor-eng-mt
[ "pytorch", "mbart", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 274.50 +/- 31.50 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Taratata -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Taratata -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Taratata ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.001), ('learning_starts', 100000), ('n_timesteps', 10000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Davlan/mt5-small-pcm-en
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-base-finetune-dzongkha-to-romanized results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-finetune-dzongkha-to-romanized This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.8324 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 3.1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 90 | 5.1830 | 0.0 | 0.0 | 0.0 | 0.0 | 3.2 | | No log | 2.0 | 180 | 4.8936 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0667 | | No log | 3.0 | 270 | 4.8324 | 0.0 | 0.0 | 0.0 | 0.0 | 3.1 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
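As a rough reconstruction, the hyperparameters listed above correspond to trainer arguments along these lines (the output directory is assumed from the model name, and `predict_with_generate` is an assumption needed to compute the ROUGE / Gen Len metrics reported above; the Adam betas/epsilon and linear scheduler listed are the Trainer defaults):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    "t5-base-finetune-dzongkha-to-romanized",  # output dir (assumed)
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    predict_with_generate=True,  # assumption: required for ROUGE / Gen Len
)
```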
Davlan/mt5_base_eng_yor_mt
[ "pytorch", "mt5", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 24.10 +/- 17.64
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Davlan/mt5_base_yor_eng_mt
[ "pytorch", "mt5", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2179 | 1.0 | 5533 | 1.1521 | | 0.9581 | 2.0 | 11066 | 1.1296 | | 0.7409 | 3.0 | 16599 | 1.1581 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.11.0
Declan/Breitbart_modelv7
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: flan-t5-large-da-multiwoz2.1_fs0.2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large-da-multiwoz2.1_fs0.2 This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3159 - Accuracy: 45.1554 - Num: 3689 - Gen Len: 15.5213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 24 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:| | 0.9653 | 0.28 | 400 | 0.4635 | 31.3166 | 3689 | 15.196 | | 0.5071 | 0.57 | 800 | 0.4031 | 35.8289 | 3689 | 15.6546 | | 0.4603 | 0.85 | 1200 | 0.3718 | 37.6313 | 3689 | 15.6511 | | 0.4219 | 1.13 | 1600 | 0.3577 | 37.9333 | 3689 | 16.5319 | | 0.3991 | 1.42 | 2000 | 0.3491 | 40.5462 | 3689 | 15.453 | | 0.394 | 1.7 | 2400 | 0.3409 | 40.9333 | 3689 | 15.5137 | | 0.3822 | 1.98 | 2800 | 0.3370 | 41.2932 | 3689 | 15.225 | | 0.3625 | 2.26 | 3200 | 0.3327 | 42.1132 | 3689 | 16.0718 | | 0.3577 | 2.55 | 3600 | 0.3329 | 42.1372 | 3689 | 15.9973 | | 0.3644 | 2.83 | 4000 | 0.3303 | 42.2529 | 3689 | 15.6525 | | 0.349 | 3.11 | 4400 | 0.3256 | 43.2025 | 3689 | 15.6601 | | 0.3355 | 3.4 | 4800 | 0.3243 | 43.791 | 3689 | 15.5451 | | 0.338 | 3.68 | 5200 | 0.3231 | 43.5073 | 3689 | 15.7411 | | 0.3424 | 3.96 | 5600 | 0.3196 | 44.5281 | 3689 | 15.1307 | | 0.3299 | 4.25 | 6000 | 0.3159 | 45.1554 | 3689 | 15.5213 | | 0.328 | 4.53 | 6400 | 0.3188 | 43.4699 | 3689 | 15.3849 | | 0.3204 | 4.81 | 6800 | 0.3159 | 44.7764 | 3689 | 15.8219 | | 0.3166 | 5.1 | 7200 | 0.3165 | 45.0608 | 3689 | 15.8791 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.5.1 - Tokenizers 0.12.1
Declan/CNN_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: flan-t5-large-da-multiwoz2.1_fs0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large-da-multiwoz2.1_fs0.1 This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3373 - Accuracy: 43.2245 - Num: 3689 - Gen Len: 15.3058 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 24 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:| | 0.9683 | 0.56 | 400 | 0.4608 | 31.0524 | 3689 | 14.9279 | | 0.5046 | 1.13 | 800 | 0.4028 | 35.7175 | 3689 | 15.1651 | | 0.4488 | 1.69 | 1200 | 0.3803 | 36.4952 | 3689 | 16.3375 | | 0.4267 | 2.25 | 1600 | 0.3613 | 38.4613 | 3689 | 15.2646 | | 0.4003 | 2.81 | 2000 | 0.3538 | 39.8281 | 3689 | 15.5842 | | 0.3862 | 3.38 | 2400 | 0.3497 | 40.0593 | 3689 | 15.2356 | | 0.3729 | 3.94 | 2800 | 0.3433 | 40.857 | 3689 | 15.9675 | | 0.3632 | 4.5 | 3200 | 0.3457 | 41.157 | 3689 | 15.8818 | | 0.3534 | 5.06 | 3600 | 0.3367 | 42.9369 | 3689 | 15.7314 | | 0.3432 | 5.63 | 4000 | 0.3358 | 41.9514 | 3689 | 15.7173 | | 0.3395 | 6.19 | 4400 | 0.3373 | 43.2245 | 3689 | 15.3058 | | 0.3345 | 6.75 | 4800 | 0.3351 | 42.4941 | 3689 | 14.8916 | | 0.3266 | 7.31 | 5200 | 0.3360 | 42.9742 | 3689 | 15.7124 | | 0.3233 | 7.88 | 5600 | 0.3327 | 43.1362 | 3689 | 15.9379 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.5.1 - Tokenizers 0.12.1
Declan/CNN_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: question_v_statement_finetuned_roberta-basev2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # question_v_statement_finetuned_roberta-basev2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0052 - Accuracy: 0.9993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0077 | 1.0 | 3966 | 0.0055 | 0.9991 | | 0.0008 | 2.0 | 7932 | 0.0052 | 0.9993 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.1.0 - Tokenizers 0.12.1
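A minimal inference sketch (the repo id is a placeholder for wherever this checkpoint is hosted, and the label names depend on how the classification head was configured):

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub id of this checkpoint.
classifier = pipeline("text-classification",
                      model="<user>/question_v_statement_finetuned_roberta-basev2")

print(classifier("Is this sentence a question"))
print(classifier("This sentence is a statement."))
```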
Declan/CNN_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - fleurs metrics: - wer model-index: - name: xlsr-53-ur results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: fleurs type: fleurs config: ur_pk split: test args: ur_pk metrics: - name: Wer type: wer value: 0.3450557529714496 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlsr-53-ur This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the fleurs dataset. It achieves the following results on the evaluation set: - Loss: 0.6860 - Wer: 0.3451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 12 - total_eval_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0396 | 1.59 | 300 | 3.0179 | 1.0 | | 0.4976 | 3.17 | 600 | 0.7037 | 0.5447 | | 0.3062 | 4.76 | 900 | 0.5557 | 0.4036 | | 0.2287 | 6.35 | 1200 | 0.5620 | 0.3935 | | 0.2504 | 7.94 | 1500 | 0.5907 | 0.3677 | | 0.0633 | 9.52 | 1800 | 0.6239 | 0.3773 | | 0.0456 | 11.11 | 2100 | 0.6748 | 0.3604 | | 0.0774 | 12.7 | 2400 | 0.6747 | 0.3552 | | 0.058 | 14.29 | 2700 | 0.6860 | 0.3451 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
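A minimal transcription sketch (the repo id is a placeholder for wherever this checkpoint is hosted; wav2vec 2.0 XLSR models expect 16 kHz mono audio):

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub id of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="<user>/xlsr-53-ur")

# Path to a local 16 kHz audio file with Urdu speech.
print(asr("urdu_sample.wav")["text"])
```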
Declan/ChicagoTribune_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
null
Declan/ChicagoTribune_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: questionansweringmodel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # questionansweringmodel This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 2.9365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.7398 | 1.0 | 5520 | 0.7216 | | 0.5263 | 2.0 | 11040 | 0.7513 | | 0.3828 | 3.0 | 16560 | 0.8821 | | 0.2411 | 4.0 | 22080 | 1.0024 | | 0.1549 | 5.0 | 27600 | 1.3959 | | 0.0996 | 6.0 | 33120 | 1.7559 | | 0.0594 | 7.0 | 38640 | 2.0031 | | 0.0393 | 8.0 | 44160 | 2.3654 | | 0.0326 | 9.0 | 49680 | 2.7383 | | 0.0162 | 10.0 | 55200 | 2.9365 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.11.0+cu113 - Datasets 2.9.0 - Tokenizers 0.13.2
Declan/ChicagoTribune_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - generated_from_trainer model-index: - name: petro-twitter-assistant-30ep results: [] widget: - text: Opino que mi gobierno es datasets: - jhonparra18/petro-tweets language: - es --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # petro-twitter-assistant-30ep This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.8837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 20 - eval_batch_size: 20 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.123 | 2.3 | 1000 | 3.0761 | | 2.8048 | 4.6 | 2000 | 3.0394 | | 2.5904 | 6.9 | 3000 | 3.0743 | | 2.3804 | 9.2 | 4000 | 3.2378 | | 2.1736 | 11.49 | 5000 | 3.4025 | | 1.9736 | 13.79 | 6000 | 3.6284 | | 1.779 | 16.09 | 7000 | 3.9806 | | 1.5993 | 18.39 | 8000 | 4.2559 | | 1.4584 | 20.69 | 9000 | 4.4938 | | 1.3492 | 22.99 | 10000 | 4.6608 | | 1.2701 | 25.29 | 11000 | 4.8302 | | 1.2309 | 27.59 | 12000 | 4.8696 | | 1.2161 | 29.89 | 13000 | 4.8837 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.1.0 - Tokenizers 0.12.1
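A minimal generation sketch using the widget prompt above (the repo id is a placeholder for wherever this checkpoint is hosted; the sampling settings are illustrative):

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub id of this checkpoint.
generator = pipeline("text-generation", model="<user>/petro-twitter-assistant-30ep")

out = generator("Opino que mi gobierno es",
                max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```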
Declan/ChicagoTribune_model_v7
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: diffusers pipeline_tag: text-to-image base_model: CompVis/stable-diffusion-v1-4 ---
Declan/FoxNews_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -64.99 +/- 26.01 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'oscarb92/cleanrl-ppo-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
Declan/FoxNews_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -118.98 +/- 36.13 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
null
Dmitriiserg/Pxd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-22k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# ConvNeXt V2 (nano-sized model)

ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-22K dataset at resolution 384x384. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).

Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnextv2_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image into one of the 21,841 ImageNet-22k classes:

```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-nano-22k-384")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-nano-22k-384")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 21,841 ImageNet-22k classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2301-00808,
  author    = {Sanghyun Woo and Shoubhik Debnath and Ronghang Hu and Xinlei Chen and Zhuang Liu and In So Kweon and Saining Xie},
  title     = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders},
  journal   = {CoRR},
  volume    = {abs/2301.00808},
  year      = {2023},
  url       = {https://doi.org/10.48550/arXiv.2301.00808},
  doi       = {10.48550/arXiv.2301.00808},
  eprinttype = {arXiv},
  eprint    = {2301.00808},
  timestamp = {Tue, 10 Jan 2023 15:10:12 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Doohae/q_encoder
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Model Dreambooth model trained by thecreativemind2023 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
Doquey/DialoGPT-small-Michaelbot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - autotrain - translation language: - unk - unk datasets: - Tritkoman/autotrain-data-romaniv2 co2_eq_emissions: emissions: 71.97851742122822 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 3584296276 - CO2 Emissions (in grams): 71.9785 ## Validation Metrics - Loss: 2.284 - SacreBLEU: 8.048 - Gen len: 49.335
DoyyingFace/bert-asian-hate-tweets-asonam-clean
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
---
tags:
- molecular language model
- SELFIES
- molecule optimization
inference: false
---

# MolGen-large-opt

MolGen-large-opt was introduced in the paper ["Molecular Language Model as Multi-task Generator"](https://arxiv.org/pdf/2301.11259.pdf) and first released in [this repository](https://github.com/zjunlp/MolGen).

## Model description

MolGen-large-opt is the fine-tuned version of [MolGen-large](https://huggingface.co/zjunlp/MolGen-large). MolGen-large is the first pre-trained model that only produces chemically valid molecules. With a training corpus of over 100 million molecules in SELFIES representation, MolGen-large learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES to their original forms. Specifically, MolGen-large employs a bidirectional Transformer as its encoder and an autoregressive Transformer as its decoder. Through its carefully designed multi-task molecular prefix tuning (MPT), MolGen-large-opt can generate molecules with desired properties, making it a valuable tool for molecular optimization.

![image.png](./molgen.png)

## Intended uses

You can use the fine-tuned model for downstream molecule optimization tasks. See the [repository](https://github.com/zjunlp/MolGen) for fine-tuning details on a task that interests you.

### How to use

Molecule optimization example:

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen-large-opt")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/MolGen-large-opt")

>>> sf_input = tokenizer("[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]", return_tensors="pt")
>>> # beam search
>>> molecules = model.generate(input_ids=sf_input["input_ids"],
                               attention_mask=sf_input["attention_mask"],
                               max_length=35,
                               min_length=5,
                               num_return_sequences=5,
                               num_beams=5)
>>> sf_output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).replace(" ", "") for g in molecules]
>>> sf_output
['[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]']
```

### BibTeX entry and citation info

```bibtex
@article{fang2023molecular,
  title={Molecular Language Model as Multi-task Generator},
  author={Fang, Yin and Zhang, Ningyu and Chen, Zhuo and Fan, Xiaohui and Chen, Huajun},
  journal={arXiv preprint arXiv:2301.11259},
  year={2023}
}
```
albert-large-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
687
2023-02-19T09:56:27Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos model-index: - name: distilbert-base-uncased-finetuned-clinc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2