Dataset columns:
- modelId: string (length 4 to 81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0 to 59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51 to 438k)
Davlan/bert-base-multilingual-cased-finetuned-luganda
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
2023-01-14T21:11:22Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="sinny/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Davlan/distilbert-base-multilingual-cased-masakhaner
[ "pytorch", "tf", "distilbert", "token-classification", "arxiv:2103.11811", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
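The card above does not include a usage snippet. Below is a minimal loading-and-evaluation sketch, assuming (as in the Deep RL Course's reference implementation) that the checkpoint is a pickled PyTorch policy exposing `act(state)`; the repo id and filename are hypothetical placeholders, not taken from the card.

```python
import gym
import torch
from huggingface_hub import hf_hub_download

# Hypothetical repo id and filename -- substitute the actual values for this model.
# torch.load of a pickled module also requires the Policy class to be importable locally.
path = hf_hub_download(repo_id="<user>/Reinforce-cartpole1", filename="model.pt")
policy = torch.load(path)
policy.eval()

# Classic gym API (reset returns the observation; step returns 4 values).
env = gym.make("CartPole-v1")
state, done, total_reward = env.reset(), False, 0.0
while not done:
    action, _ = policy.act(state)  # course policies return (action, log_prob)
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward}")
```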
Davlan/xlm-roberta-base-finetuned-kinyarwanda
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
61
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: jentrialgo/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Dayout/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3124 - Accuracy: 0.8733 - F1: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
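The sentiment card above lists metrics but no inference example. A hedged sketch with the `transformers` pipeline API follows; the repo namespace is a placeholder, since the card gives only the model name.

```python
from transformers import pipeline

# Placeholder repo id -- replace <user> with the actual namespace.
classifier = pipeline(
    "text-classification",
    model="<user>/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a wonderful surprise."))
# Output is a list like [{'label': ..., 'score': ...}]; label names
# depend on the fine-tuning config (often LABEL_0 / LABEL_1).
```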
Dazai/Ok
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sst2 model-index: - name: finetuned_distilgpt2_sst2_negation0.0_pretrainedTrue results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_distilgpt2_sst2_negation0.0_pretrainedTrue This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 3.7369 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3136 | 1.0 | 1059 | 3.7331 | | 3.162 | 2.0 | 2118 | 3.7319 | | 3.0859 | 3.0 | 3177 | 3.7369 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.0 - Datasets 2.8.0 - Tokenizers 0.13.2
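Since the card above omits a usage block, here is a hedged text-generation sketch for this distilgpt2 fine-tune; the repo namespace and sampling settings are assumptions, not from the card.

```python
from transformers import pipeline

# Placeholder repo id -- the card lists only the model name.
generator = pipeline(
    "text-generation",
    model="<user>/finetuned_distilgpt2_sst2_negation0.0_pretrainedTrue",
)
out = generator("The film was", max_new_tokens=20, do_sample=True)
print(out[0]["generated_text"])
```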
Dbluciferm3737/U
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-medium-ft-cy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-ft-cy This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3063 - Wer: 15.7739 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3328 | 0.83 | 1000 | 0.3451 | 21.2371 | | 0.1843 | 1.66 | 2000 | 0.2953 | 16.9522 | | 0.0837 | 2.49 | 3000 | 0.2980 | 16.0877 | | 0.0367 | 3.32 | 4000 | 0.3063 | 15.7739 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
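The Whisper card above reports WER but no inference snippet. A hedged automatic-speech-recognition sketch follows; the repo namespace and the audio filename are placeholders.

```python
from transformers import pipeline

# Placeholder repo id and audio path -- substitute real values.
asr = pipeline("automatic-speech-recognition", model="<user>/whisper-medium-ft-cy")
result = asr("sample_welsh_audio.wav")
print(result["text"])
```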
DeadBeast/marathi-roberta-base
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sst2 model-index: - name: finetuned_distilgpt2_sst2_negation0.0_pretrainedTrue_epochs0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_distilgpt2_sst2_negation0.0_pretrainedTrue_epochs0 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - eval_loss: 4.6217 - eval_runtime: 0.9515 - eval_samples_per_second: 193.385 - eval_steps_per_second: 24.173 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0 ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.0 - Datasets 2.8.0 - Tokenizers 0.13.2
DeadBeast/roberta-base-pretrained-mr-2
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
Simple 50/50 merge of https://civitai.com/models/1274/dreamlike-diffusion-10 and https://civitai.com/models/1102/synthwavepunk
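The card above describes only a 50/50 merge of two checkpoints. A minimal sketch of how such an equal-weight merge is commonly done with torch, assuming both files are standard Stable Diffusion checkpoints whose weights live under a `state_dict` key and share the same keys; the local filenames are hypothetical.

```python
import torch

# Hypothetical local filenames for the two source checkpoints.
a = torch.load("dreamlike-diffusion-1.0.ckpt", map_location="cpu")["state_dict"]
b = torch.load("synthwavepunk.ckpt", map_location="cpu")["state_dict"]

# Equal-weight (50/50) interpolation of every shared tensor.
merged = {k: 0.5 * a[k] + 0.5 * b[k] for k in a if k in b}
torch.save({"state_dict": merged}, "merged-50-50.ckpt")
```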
DecafNosebleed/DialoGPT-small-ScaraBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sst2 model-index: - name: finetuned_distilgpt2_sst2_negation0.0_pretrainedFalse_epochs0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_distilgpt2_sst2_negation0.0_pretrainedFalse_epochs0 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - eval_loss: 4.6217 - eval_runtime: 0.9358 - eval_samples_per_second: 196.632 - eval_steps_per_second: 24.579 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0 ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.0 - Datasets 2.8.0 - Tokenizers 0.13.2
DecafNosebleed/scarabot-model
[ "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 241.77 +/- 37.36 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
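The usage block above is left as a TODO in the card. A runnable sketch with `stable_baselines3` and `huggingface_sb3` follows; the repo id and filename are placeholders, since the card does not state them.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename -- substitute the actual values.
checkpoint = load_from_hub(
    repo_id="<user>/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the classic gym API.
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```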
Declan/Breitbart_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - conversational --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed] # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> [More Information Needed] </details>
Declan/Breitbart_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: grade2jazz results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # grade2jazz This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6449 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 44 | 3.0933 | | No log | 2.0 | 88 | 2.7392 | | No log | 3.0 | 132 | 2.6449 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cpu - Datasets 2.8.0 - Tokenizers 0.13.2
Declan/Breitbart_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.76 +/- 12.15 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Declan/Breitbart_modelv7
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: small-mlm-glue-stsb-target-glue-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-stsb-target-glue-mrpc This model is a fine-tuned version of [muhtasham/small-mlm-glue-stsb](https://huggingface.co/muhtasham/small-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9122 - Accuracy: 0.7598 - F1: 0.8322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3924 | 4.35 | 500 | 0.8097 | 0.7647 | 0.8416 | | 0.0751 | 8.7 | 1000 | 1.4556 | 0.7574 | 0.8374 | | 0.0294 | 13.04 | 1500 | 1.7098 | 0.7647 | 0.8356 | | 0.0186 | 17.39 | 2000 | 1.9122 | 0.7598 | 0.8322 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
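The hyperparameter list in the card above maps almost one-to-one onto `transformers.TrainingArguments`. The sketch below is an approximate reconstruction under that assumption, not the author's actual training script; model and dataset wiring are omitted.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the card's listed hyperparameters.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="small-mlm-glue-stsb-target-glue-mrpc",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    max_steps=5000,  # "training_steps: 5000" in the card
)
```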
Declan/CNN_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-01-15T00:51:32Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: korean_sentiment_analysis_dataset3_best results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # korean_sentiment_analysis_dataset3_best This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6989 - Micro f1 score: 76.6383 - Auprc: 81.5157 - Accuracy: 0.7664 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Micro f1 score | Auprc | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:-------:|:--------:| | 0.7997 | 1.0 | 5080 | 0.6822 | 74.7769 | 79.4361 | 0.7478 | | 0.4544 | 2.0 | 10160 | 0.6608 | 76.7429 | 81.1265 | 0.7674 | | 0.5702 | 3.0 | 15240 | 0.6989 | 76.6383 | 81.5157 | 0.7664 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.6.0 - Datasets 2.7.1 - Tokenizers 0.13.2
Declan/CNN_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - animal widget: - text: an ött3r otter flying over a city at night, neon lights, highly detailed, digital painting, artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha, cinematic lighting --- # DreamBooth model for the ött3r concept trained by mathpn on the mathpn/LeonardTheOtter dataset. This is a Stable Diffusion model fine-tuned on the ött3r concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of ött3r otter** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `otter` images for the animal theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('mathpn/dreambooth-friendly-otter') image = pipeline("a photo of ött3r otter").images[0] image ```
Declan/CNN_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### miki-style Dreambooth model trained by steyn with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
Declan/CNN_model_v7
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: small-mlm-glue-qnli-target-glue-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-qnli-target-glue-mrpc This model is a fine-tuned version of [muhtasham/small-mlm-glue-qnli](https://huggingface.co/muhtasham/small-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9217 - Accuracy: 0.7770 - F1: 0.8455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3905 | 4.35 | 500 | 0.7540 | 0.7892 | 0.8608 | | 0.0675 | 8.7 | 1000 | 1.4012 | 0.7892 | 0.8608 | | 0.0274 | 13.04 | 1500 | 1.5409 | 0.7794 | 0.8454 | | 0.0189 | 17.39 | 2000 | 1.5464 | 0.7917 | 0.8609 | | 0.0119 | 21.74 | 2500 | 1.7553 | 0.7794 | 0.8505 | | 0.0179 | 26.09 | 3000 | 1.7660 | 0.7745 | 0.8492 | | 0.0128 | 30.43 | 3500 | 1.9217 | 0.7770 | 0.8455 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Declan/ChicagoTribune_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-01-15T01:23:23Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-mlm-glue-stsb-target-glue-qnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-stsb-target-glue-qnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-stsb](https://huggingface.co/muhtasham/small-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3477 - Accuracy: 0.8547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4913 | 0.15 | 500 | 0.3941 | 0.8287 | | 0.4468 | 0.31 | 1000 | 0.3872 | 0.8303 | | 0.4246 | 0.46 | 1500 | 0.3619 | 0.8411 | | 0.4133 | 0.61 | 2000 | 0.3757 | 0.8375 | | 0.4133 | 0.76 | 2500 | 0.3445 | 0.8503 | | 0.3958 | 0.92 | 3000 | 0.3340 | 0.8574 | | 0.3576 | 1.07 | 3500 | 0.3426 | 0.8558 | | 0.318 | 1.22 | 4000 | 0.3568 | 0.8559 | | 0.3166 | 1.37 | 4500 | 0.3477 | 0.8547 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Declan/ChicagoTribune_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: dfm794/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Declan/ChicagoTribune_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - science widget: - text: A photo of a pai symbol --- # DreamBooth model for the pai concept trained by 0xAnders. This is a Stable Diffusion model fine-tuned on the pai concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of pai symbol** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `symbol` images for the science theme, for the Hugging Face DreamBooth Hackathon, from the HF CN Community in cooperation with HeyWhale. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('0xAnders/pai-symbol-heywhale') image = pipeline("a photo of pai symbol").images[0] image ``` ## Examples **a photo of pai symbol in the Great Wall↓** ![](https://www.hualigs.cn/image/63c97770bc2d4.jpg) ![](https://www.hualigs.cn/image/63c976a57c21f.jpg)
Declan/ChicagoTribune_model_v7
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: ptaylour/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Declan/ChicagoTribune_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 245.70 +/- 21.78 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Declan/FoxNews_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-labor_space results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-labor_space This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Tokenizers 0.13.2
Declan/FoxNews_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2023-01-15T03:51:51Z
--- license: mit tags: - generated_from_trainer datasets: - sst2 model-index: - name: finetuned_gpt2-medium_sst2_negation0.0_pretrainedFalse_epochs30 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_gpt2-medium_sst2_negation0.0_pretrainedFalse_epochs30 This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 5.8610 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.7927 | 1.0 | 1059 | 3.3242 | | 2.4065 | 2.0 | 2118 | 3.5353 | | 2.0753 | 3.0 | 3177 | 3.8060 | | 1.8186 | 4.0 | 4236 | 4.0682 | | 1.6246 | 5.0 | 5295 | 4.3559 | | 1.4789 | 6.0 | 6354 | 4.5638 | | 1.367 | 7.0 | 7413 | 4.6723 | | 1.2762 | 8.0 | 8472 | 4.8568 | | 1.2058 | 9.0 | 9531 | 4.9660 | | 1.1499 | 10.0 | 10590 | 5.0804 | | 1.1047 | 11.0 | 11649 | 5.1751 | | 1.0641 | 12.0 | 12708 | 5.2775 | | 1.0287 | 13.0 | 13767 | 5.3404 | | 1.0026 | 14.0 | 14826 | 5.4163 | | 0.9781 | 15.0 | 15885 | 5.4508 | | 0.9559 | 16.0 | 16944 | 5.4982 | | 0.945 | 17.0 | 18003 | 5.5577 | | 0.9267 | 18.0 | 19062 | 5.5923 | | 0.9153 | 19.0 | 20121 | 5.6331 | | 0.8998 | 20.0 | 21180 | 5.6636 | | 0.8864 | 21.0 | 22239 | 5.7158 | | 0.8802 | 22.0 | 23298 | 5.7324 | | 0.8727 | 23.0 | 24357 | 5.7652 | | 0.8586 | 24.0 | 25416 | 5.7807 | | 0.8565 | 25.0 | 26475 | 5.7954 | | 0.851 | 26.0 | 27534 | 5.8253 | | 0.8457 | 27.0 | 28593 | 5.8330 | | 0.8432 | 28.0 | 29652 | 5.8485 | | 0.8405 | 29.0 | 30711 | 5.8505 | | 0.8354 | 30.0 | 31770 | 5.8610 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.0 - Datasets 2.8.0 - Tokenizers 0.13.2
Declan/HuffPost_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-mlm-glue-qnli-target-glue-qnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-qnli-target-glue-qnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-qnli](https://huggingface.co/muhtasham/small-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3483 - Accuracy: 0.8598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4857 | 0.15 | 500 | 0.3911 | 0.8305 | | 0.444 | 0.31 | 1000 | 0.3833 | 0.8318 | | 0.421 | 0.46 | 1500 | 0.3587 | 0.8433 | | 0.4119 | 0.61 | 2000 | 0.3669 | 0.8400 | | 0.4098 | 0.76 | 2500 | 0.3444 | 0.8451 | | 0.3909 | 0.92 | 3000 | 0.3305 | 0.8558 | | 0.3535 | 1.07 | 3500 | 0.3413 | 0.8591 | | 0.3168 | 1.22 | 4000 | 0.3438 | 0.8622 | | 0.312 | 1.37 | 4500 | 0.3583 | 0.8539 | | 0.3183 | 1.53 | 5000 | 0.3483 | 0.8598 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Declan/Independent__model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-15T02:33:25Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1-TEST results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Declan/NPR_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned_emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: train args: split metrics: - name: Accuracy type: accuracy value: 0.928 - name: F1 type: f1 value: 0.9279068376386842 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned_emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2132 - Accuracy: 0.928 - F1: 0.9279 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.828 | 1.0 | 250 | 0.3122 | 0.911 | 0.9086 | | 0.2476 | 2.0 | 500 | 0.2132 | 0.928 | 0.9279 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Declan/NPR_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="asubiabre/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
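The snippet above relies on a `load_from_hub` helper from the course utilities; below is a fuller, self-contained sketch using plain `huggingface_hub`. The `"qtable"` key is an assumption about how the pickled dict is laid out, not something this card confirms.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="asubiabre/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# The agent was trained on the non-slippery map, so pass is_slippery=False explicitly.
env = gym.make(model["env_id"], is_slippery=False)

state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action; the "qtable" key is assumed
    state, reward, done, info = env.step(action)
print("final reward:", reward)
```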
Declan/NPR_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 40.20 +/- 34.23 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Declan/NPR_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: small-mlm-glue-stsb-target-glue-qqp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-stsb-target-glue-qqp This model is a fine-tuned version of [muhtasham/small-mlm-glue-stsb](https://huggingface.co/muhtasham/small-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3294 - Accuracy: 0.8525 - F1: 0.8131 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4739 | 0.04 | 500 | 0.4259 | 0.7919 | 0.7514 | | 0.4186 | 0.09 | 1000 | 0.3841 | 0.8190 | 0.7709 | | 0.3984 | 0.13 | 1500 | 0.3737 | 0.8228 | 0.7757 | | 0.3853 | 0.18 | 2000 | 0.3725 | 0.8228 | 0.7878 | | 0.3761 | 0.22 | 2500 | 0.3558 | 0.8362 | 0.7969 | | 0.3616 | 0.26 | 3000 | 0.3434 | 0.8418 | 0.8010 | | 0.3616 | 0.31 | 3500 | 0.3286 | 0.8504 | 0.8008 | | 0.3528 | 0.35 | 4000 | 0.3293 | 0.8513 | 0.8110 | | 0.358 | 0.4 | 4500 | 0.3213 | 0.8539 | 0.8104 | | 0.3428 | 0.44 | 5000 | 0.3294 | 0.8525 | 0.8131 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Declan/NPR_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en tags: - stable-diffusion --- Momoko Model and Embeddings. It is recommended to use the two together; the resulting image will be awesome.
Declan/NewYorkTimes_model_v3
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-attempt1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 400.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Declan/NewYorkTimes_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.42 +/- 38.22 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
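The usage section above is left as a template. Here is a minimal sketch of one way to load and evaluate such a checkpoint with `huggingface_sb3`; the repo id and zip filename are hypothetical placeholders, not confirmed by this card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical repo id and filename -- substitute the actual ones for this checkpoint.
checkpoint = load_from_hub(repo_id="your-user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```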
Declan/NewYorkTimes_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2023-01-15T04:03:40Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: small-mlm-glue-qnli-target-glue-qqp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-qnli-target-glue-qqp This model is a fine-tuned version of [muhtasham/small-mlm-glue-qnli](https://huggingface.co/muhtasham/small-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3296 - Accuracy: 0.8511 - F1: 0.8117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4762 | 0.04 | 500 | 0.4247 | 0.7897 | 0.7473 | | 0.4188 | 0.09 | 1000 | 0.3880 | 0.8126 | 0.7702 | | 0.4011 | 0.13 | 1500 | 0.3760 | 0.8194 | 0.7750 | | 0.387 | 0.18 | 2000 | 0.3779 | 0.8189 | 0.7866 | | 0.3802 | 0.22 | 2500 | 0.3642 | 0.8320 | 0.7958 | | 0.3606 | 0.26 | 3000 | 0.3526 | 0.8358 | 0.7972 | | 0.3604 | 0.31 | 3500 | 0.3337 | 0.8495 | 0.8010 | | 0.3538 | 0.35 | 4000 | 0.3341 | 0.8483 | 0.8102 | | 0.3582 | 0.4 | 4500 | 0.3293 | 0.8503 | 0.8106 | | 0.345 | 0.44 | 5000 | 0.3296 | 0.8511 | 0.8117 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Declan/Politico_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### diffusionAI Dreambooth model trained by aaronsiim with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars). Sample pictures of this concept:
Declan/Politico_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: jxiao/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Declan/Politico_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: creativeml-openrail-m --- For training purposes. Model: Anything 4.5
Declan/Politico_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: cc-by-nc-sa-4.0 language: - en thumbnail: "https://huggingface.co/GeneralAwareness/Cyberpunked/resolve/main/tmp71_2gsc8.png" tags: - stable-diffusion - v2 - text-to-image - image-to-image - Embedding --- Textual Inversion Embedding by General Awareness, for SD 2.x, trained on 768x768 images from various sources. Install it by downloading the .pt embedding and putting it in the \embeddings folder. This embedding was made to produce Cyberpunk scenes with Text to Image or with Image to Image, even if the prompt already includes "cyberpunk" (it also plays VERY nicely with other embeddings). --- Use keyword: image in Cyberpunked style, Cyberpunked style, Cyberpunked, in the style of Cyberpunked, or by Cyberpunked. --- new year's eve celebration with countdown clock in Toronto, hyper realistic, 8K, high detail, cyberpunked ![Single Samples](https://huggingface.co/GeneralAwareness/Cyberpunked/resolve/main/1.png) batman standing on rooftop looking down at street over the shoulder perspective, cyberpunked ![Single Samples](https://huggingface.co/GeneralAwareness/Cyberpunked/resolve/main/2.png) johnny depp steampunk style, cyberpunked ![Single Samples](https://huggingface.co/GeneralAwareness/Cyberpunked/resolve/main/3.png) Cyberpunked (Image to Image, Euler_a, 40 steps, CFG: 10, Denoise Strength: 0.6). Experimenting with various parameters turns up some interesting results. ![Single Samples](https://huggingface.co/GeneralAwareness/Cyberpunked/resolve/main/4.png) ![Single Samples](https://huggingface.co/GeneralAwareness/Cyberpunked/resolve/main/5.png)
Declan/Reuters_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 1000.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Declan/Reuters_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: creativeml-openrail-m pipeline_tag: text-to-image --- ### A better model is out; go to https://huggingface.co/no3/kat-at3-beta1 ### kat from [Flipon](https://store.steampowered.com/app/1285020/Flipon/) on [WD](https://huggingface.co/hakurei/waifu-diffusion) via Dreambooth #### model by no3 This is a waifu-diffusion v1.4 model with the kat concept taught to it via Dreambooth. It can be used by modifying the `instance_prompt`: **sks kaatt** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts). ### note If you want to use it in a UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any UI that uses .ckpt files, just download one or more files from here for your convenience. [katFl-wd-1.4-beta2.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta2/resolve/main/katFl-wd-1.4-beta2.ckpt) 5.16 GB [katFl-wd-1.4-beta2-pruned.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta2/resolve/main/katFl-wd-1.4-beta2-pruned.ckpt) 2.58 GB, which uses less storage space but is not yet tested. If you have issues or questions, feel free to visit the Community Tab and start a discussion about it. Here are the images used for training this concept: ![image 1](https://huggingface.co/no3/kat-wd-1.4-beta2/resolve/main/concept_images/1.png) ![image 2](https://huggingface.co/no3/kat-wd-1.4-beta2/resolve/main/concept_images/2.png) ![image 3](https://huggingface.co/no3/kat-wd-1.4-beta2/resolve/main/concept_images/3.png) ![image 4](https://huggingface.co/no3/kat-wd-1.4-beta2/resolve/main/concept_images/1%20c.png) ![image 5](https://huggingface.co/no3/kat-wd-1.4-beta2/resolve/main/concept_images/2%20c.png)
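A minimal inference sketch with `diffusers` follows, assuming the diffusers-format weights are hosted under the same repo id as the download links above (no3/kat-wd-1.4-beta2). The repo id and the extra prompt text are assumptions; only the `sks kaatt` token comes from this card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id inferred from the download links above -- an assumption, not confirmed by the card.
pipe = StableDiffusionPipeline.from_pretrained("no3/kat-wd-1.4-beta2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sks kaatt" is the instance prompt from the card; the rest of the prompt is illustrative.
image = pipe("sks kaatt, portrait, high quality").images[0]
image.save("kat.png")
```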
Declan/Reuters_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: Nyxynyx/SnowballTarget1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Declan/Reuters_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: dfm794/ppo-SnowballTarget-2 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Declan/Reuters_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: creativeml-openrail-m pipeline_tag: text-to-image --- ### A better model is out; go to https://huggingface.co/no3/kat-at3-beta1 ### kat from [Flipon](https://store.steampowered.com/app/1285020/Flipon/) on [WD](https://huggingface.co/hakurei/waifu-diffusion) via Dreambooth #### model by no3 This is a waifu-diffusion v1.4 model with the kat concept taught to it via Dreambooth. It can be used by modifying the `instance_prompt`: **sks kaatt** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts). ### note If you want to use it in a UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any UI that uses .ckpt files, just download one or more files from here for your convenience. [katFl-wd-1.4-beta3.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta3/resolve/main/katFl-wd-1.4-beta3.ckpt) 5.16 GB [katFl-wd-1.4-beta3-pruned.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta3/resolve/main/katFl-wd-1.4-beta3-pruned.ckpt) 2.58 GB, which uses less storage space but is not yet tested. If you have issues or questions, feel free to visit the Community Tab and start a discussion about it. Here are the images used for training this concept: ![image 1](https://huggingface.co/no3/kat-wd-1.4-beta3/resolve/main/concept_images/1.png) ![image 2](https://huggingface.co/no3/kat-wd-1.4-beta3/resolve/main/concept_images/2.png) ![image 3](https://huggingface.co/no3/kat-wd-1.4-beta3/resolve/main/concept_images/3.png) ![image 4](https://huggingface.co/no3/kat-wd-1.4-beta3/resolve/main/concept_images/1%20c.png) ![image 5](https://huggingface.co/no3/kat-wd-1.4-beta3/resolve/main/concept_images/2%20c.png)
Declan/Reuters_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: jxiao/ppo-pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Declan/WallStreetJournal_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-copter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 34.70 +/- 29.55 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Declan/WallStreetJournal_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: cc-by-4.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: hing-roberta-NCM-run-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hing-roberta-NCM-run-3 This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2053 - Accuracy: 0.6645 - Precision: 0.6565 - Recall: 0.6479 - F1: 0.6505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.9077 | 1.0 | 927 | 0.8070 | 0.6397 | 0.6581 | 0.6439 | 0.6382 | | 0.6915 | 2.0 | 1854 | 0.8635 | 0.6462 | 0.6368 | 0.6439 | 0.6357 | | 0.4785 | 3.0 | 2781 | 1.0961 | 0.6613 | 0.6510 | 0.6556 | 0.6505 | | 0.3356 | 4.0 | 3708 | 1.6867 | 0.6667 | 0.6623 | 0.6611 | 0.6595 | | 0.2622 | 5.0 | 4635 | 2.0271 | 0.6602 | 0.6589 | 0.6451 | 0.6482 | | 0.1957 | 6.0 | 5562 | 2.2565 | 0.6634 | 0.6763 | 0.6517 | 0.6541 | | 0.1419 | 7.0 | 6489 | 2.4627 | 0.6440 | 0.6487 | 0.6203 | 0.6230 | | 0.1126 | 8.0 | 7416 | 2.7844 | 0.6483 | 0.6347 | 0.6268 | 0.6295 | | 0.091 | 9.0 | 8343 | 2.8776 | 0.6440 | 0.6302 | 0.6315 | 0.6307 | | 0.0758 | 10.0 | 9270 | 3.0246 | 0.6451 | 0.6325 | 0.6227 | 0.6256 | | 0.0674 | 11.0 | 10197 | 2.9389 | 0.6721 | 0.6605 | 0.6501 | 0.6530 | | 0.0542 | 12.0 | 11124 | 3.0503 | 0.6429 | 0.6456 | 0.6315 | 0.6330 | | 0.0576 | 13.0 | 12051 | 3.0252 | 0.6483 | 0.6427 | 0.6435 | 0.6398 | | 0.0337 | 14.0 | 12978 | 3.1160 | 0.6731 | 0.6676 | 0.6545 | 0.6575 | | 0.0318 | 15.0 | 13905 | 3.0740 | 0.6807 | 0.6733 | 0.6647 | 0.6671 | | 0.0188 | 16.0 | 14832 | 3.0890 | 0.6721 | 0.6633 | 0.6574 | 0.6589 | | 0.0258 | 17.0 | 15759 | 3.1519 | 0.6634 | 0.6602 | 0.6456 | 0.6490 | | 0.017 | 18.0 | 16686 | 3.1503 | 0.6688 | 0.6638 | 0.6547 | 0.6568 | | 0.0146 | 19.0 | 17613 | 3.2083 | 0.6688 | 0.6621 | 0.6516 | 0.6545 | | 0.0125 | 20.0 | 18540 | 3.2053 | 0.6645 | 0.6565 | 0.6479 | 0.6505 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.1+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
Declan/WallStreetJournal_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: Nyxynyx/Pyramids_Training 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Declan/WallStreetJournal_model_v6
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: gpl-3.0 language: - en library_name: diffusers pipeline_tag: text-to-image tags: - generative ai - stable-diffusion - image-to-image - realism - art --- Anireal 2D v2 Finetuned Stable Diffusion 1.5 for generating images ![img](./e1.png) ![img](./ex1.png)
Declan/WallStreetJournal_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: gpl-3.0 language: - en library_name: diffusers pipeline_tag: text-to-image tags: - generative ai - stable-diffusion - image-to-image - realism - art --- Anireal 2.5D v2 Finetuned Stable Diffusion 1.5 for generating images ![img](./e2.png) ![img](./ex2.png)
Declan/test_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: gpl-3.0 language: - en library_name: diffusers pipeline_tag: text-to-image tags: - generative ai - stable-diffusion - image-to-image - realism - art --- Anireal 3D v2 Finetuned Stable Diffusion 1.5 for generating images ![img](./e3.png) ![img](./ex3.png)
DeepChem/ChemBERTa-10M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
90
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 262.10 +/- 44.87 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
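The usage block above is likewise a template; a brief sketch of loading the model and stepping through one episode follows, with the repo id and filename again as hypothetical placeholders not stated in this card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename -- not stated in this card.
model = PPO.load(load_from_hub(repo_id="your-user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip"))

env = gym.make("LunarLander-v2")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward:.1f}")
```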
DeepChem/ChemBERTa-10M-MTR
[ "pytorch", "roberta", "arxiv:1910.09700", "transformers" ]
null
{ "architectures": [ "RobertaForRegression" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
708
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: imagefolder metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-osteosarcoma-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/logannyeMD/ddpm-osteosarcoma-128/tensorboard?#scalars)
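The "How to use" snippet above is a placeholder. Here is a minimal sketch of sampling from an unconditional DDPM pipeline, assuming the weights are published under the repo id shown in the TensorBoard link.

```python
import torch
from diffusers import DDPMPipeline

# Repo id taken from the TensorBoard link above; treated as an assumption here.
pipeline = DDPMPipeline.from_pretrained("logannyeMD/ddpm-osteosarcoma-128")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Sample one 128x128 image from pure noise.
image = pipeline(batch_size=1).images[0]
image.save("osteosarcoma_sample.png")
```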
DeepChem/ChemBERTa-5M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: small-mlm-glue-stsb-target-glue-rte results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-stsb-target-glue-rte This model is a fine-tuned version of [muhtasham/small-mlm-glue-stsb](https://huggingface.co/muhtasham/small-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8025 - Accuracy: 0.5993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4125 | 6.41 | 500 | 1.3674 | 0.5884 | | 0.0577 | 12.82 | 1000 | 2.5654 | 0.6065 | | 0.0268 | 19.23 | 1500 | 2.8994 | 0.5884 | | 0.0158 | 25.64 | 2000 | 3.1525 | 0.6101 | | 0.0131 | 32.05 | 2500 | 3.5112 | 0.5884 | | 0.0137 | 38.46 | 3000 | 3.5227 | 0.5740 | | 0.0174 | 44.87 | 3500 | 3.0736 | 0.6354 | | 0.0139 | 51.28 | 4000 | 3.5635 | 0.5921 | | 0.0122 | 57.69 | 4500 | 3.3484 | 0.5957 | | 0.0074 | 64.1 | 5000 | 3.8025 | 0.5993 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
DeepChem/ChemBERTa-5M-MTR
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "RobertaForRegression" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-01-15T06:12:45Z
--- license: gpl-3.0 language: - en library_name: diffusers pipeline_tag: text-to-image tags: - generative ai - stable-diffusion - image-to-image - realism - art --- Photoreal Semi v2 Finetuned Stable Diffusion 1.5 for generating images ![img](./e4.png) ![img](./ex4.png)
DeepESP/gpt2-spanish-medium
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
340
null
<p> anything is not pruned, 7GB model </p> <p> A model = anything-v3+0.3([bg-visualnovel]-[anything-v3]) </p> <p> B model = A model+0.3([NAIfull-latest]-[NAIsfw-latest]) </p> <p> <br> </p> <p> barcode4444 = B model+0.3([gape60]-[NAIfull-latest]) </p> <p> barcode9999 = [barcode4444]+1([f222]-[sd1.5]) </p> <p> barcode9999+1 = [barcode9999]+0.5([gape60]-[NAIfull-latest]) </p>
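The recipes above use the common add-difference merge notation, result = base + alpha * ([B] - [C]). A rough sketch of what one such step looks like on raw checkpoints is below; the file names are placeholders, and real merges are usually done through a UI's checkpoint-merger tool rather than by hand.

```python
import torch

def add_difference(base, b, c, alpha):
    """result = base + alpha * (b - c), applied tensor by tensor."""
    merged = {}
    for key, tensor in base.items():
        if key in b and key in c:
            merged[key] = tensor + alpha * (b[key] - c[key])
        else:
            merged[key] = tensor  # keep base weights where the checkpoints do not overlap
    return merged

# Placeholder file names -- substitute the actual checkpoints from the recipe.
base = torch.load("anything-v3.ckpt", map_location="cpu")["state_dict"]
b = torch.load("bg-visualnovel.ckpt", map_location="cpu")["state_dict"]
c = torch.load("anything-v3.ckpt", map_location="cpu")["state_dict"]

a_model = add_difference(base, b, c, alpha=0.3)
torch.save({"state_dict": a_model}, "A-model.ckpt")
```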
DeepPavlov/bert-base-multilingual-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "multilingual", "arxiv:1704.05426", "arxiv:1809.05053", "arxiv:1908.10084", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
140
null
--- license: apache-2.0 tags: - vision - depth-estimation - generated_from_trainer model-index: - name: glpn-nyu-finetuned-diode-230115-063851 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # glpn-nyu-finetuned-diode-230115-063851 This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset. It achieves the following results on the evaluation set: - Loss: 0.4360 - Mae: 0.4211 - Rmse: 0.6143 - Abs Rel: 0.4394 - Log Mae: 0.1700 - Log Rmse: 0.2243 - Delta1: 0.3835 - Delta2: 0.6419 - Delta3: 0.8181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 24 - eval_batch_size: 48 - seed: 2022 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - num_epochs: 75 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:| | 1.0074 | 1.0 | 72 | 0.4929 | 0.4684 | 0.6424 | 0.5682 | 0.1955 | 0.2515 | 0.3151 | 0.5288 | 0.7834 | | 0.4702 | 2.0 | 144 | 0.4561 | 0.4431 | 0.6292 | 0.4667 | 0.1822 | 0.2344 | 0.3377 | 0.6052 | 0.7878 | | 0.4618 | 3.0 | 216 | 0.4827 | 0.4664 | 0.6351 | 0.5445 | 0.1939 | 0.2461 | 0.3163 | 0.5336 | 0.7408 | | 0.4388 | 4.0 | 288 | 0.4669 | 0.4450 | 0.6251 | 0.5068 | 0.1831 | 0.2379 | 0.3465 | 0.5870 | 0.7811 | | 0.463 | 5.0 | 360 | 0.4960 | 0.4715 | 0.6382 | 0.5942 | 0.1963 | 0.2528 | 0.3115 | 0.5361 | 0.7222 | | 0.4478 | 6.0 | 432 | 0.4808 | 0.4542 | 0.6301 | 0.5439 | 0.1872 | 0.2440 | 0.3452 | 0.5647 | 0.7519 | | 0.42 | 7.0 | 504 | 0.4645 | 0.4445 | 0.6268 | 0.4876 | 0.1822 | 0.2361 | 0.3578 | 0.5889 | 0.7741 | | 0.3977 | 8.0 | 576 | 0.4767 | 0.4503 | 0.6299 | 0.5162 | 0.1857 | 0.2410 | 0.3452 | 0.5705 | 0.7726 | | 0.4045 | 9.0 | 648 | 0.4747 | 0.4568 | 0.6303 | 0.5208 | 0.1885 | 0.2406 | 0.3323 | 0.5572 | 0.7497 | | 0.392 | 10.0 | 720 | 0.4860 | 0.4571 | 0.6331 | 0.5645 | 0.1889 | 0.2475 | 0.3370 | 0.5627 | 0.7589 | | 0.3749 | 11.0 | 792 | 0.4785 | 0.4502 | 0.6308 | 0.5449 | 0.1860 | 0.2446 | 0.3423 | 0.5719 | 0.7940 | | 0.4292 | 12.0 | 864 | 0.4905 | 0.4574 | 0.6346 | 0.5616 | 0.1891 | 0.2483 | 0.3402 | 0.5694 | 0.7499 | | 0.432 | 13.0 | 936 | 0.4648 | 0.4408 | 0.6229 | 0.4877 | 0.1804 | 0.2345 | 0.3607 | 0.5947 | 0.7771 | | 0.4097 | 14.0 | 1008 | 0.4464 | 0.4303 | 0.6221 | 0.4398 | 0.1742 | 0.2285 | 0.3879 | 0.6179 | 0.7821 | | 0.4212 | 15.0 | 1080 | 0.4773 | 0.4550 | 0.6298 | 0.5327 | 0.1874 | 0.2425 | 0.3375 | 0.5666 | 0.7588 | | 0.3862 | 16.0 | 1152 | 0.4682 | 0.4440 | 0.6248 | 0.5171 | 0.1824 | 0.2392 | 0.3516 | 0.5906 | 0.7793 | | 0.3726 | 17.0 | 1224 | 0.4702 | 0.4425 | 0.6243 | 0.5190 | 0.1807 | 0.2385 | 0.3591 | 0.5904 | 0.7824 | | 0.4016 | 18.0 | 1296 | 0.5012 | 0.4789 | 0.6418 | 0.6093 | 0.2002 | 0.2561 | 0.3035 | 0.5188 | 0.7003 | | 0.3772 | 19.0 | 1368 | 0.4935 | 0.4676 | 0.6371 | 0.5940 | 0.1947 | 0.2525 | 0.3195 | 0.5398 | 0.7340 | | 0.3987 | 20.0 | 1440 | 0.4630 | 0.4399 | 0.6312 
| 0.4934 | 0.1801 | 0.2388 | 0.3711 | 0.6044 | 0.7865 | | 0.378 | 21.0 | 1512 | 0.4424 | 0.4180 | 0.6211 | 0.4329 | 0.1683 | 0.2280 | 0.4210 | 0.6415 | 0.8022 | | 0.3674 | 22.0 | 1584 | 0.4591 | 0.4346 | 0.6272 | 0.5022 | 0.1764 | 0.2374 | 0.3819 | 0.6184 | 0.7881 | | 0.3803 | 23.0 | 1656 | 0.4708 | 0.4483 | 0.6276 | 0.5228 | 0.1841 | 0.2404 | 0.3484 | 0.5854 | 0.7597 | | 0.4082 | 24.0 | 1728 | 0.4753 | 0.4506 | 0.6286 | 0.5436 | 0.1854 | 0.2436 | 0.3512 | 0.5780 | 0.7593 | | 0.3662 | 25.0 | 1800 | 0.4455 | 0.4221 | 0.6160 | 0.4622 | 0.1709 | 0.2288 | 0.3897 | 0.6406 | 0.8124 | | 0.3735 | 26.0 | 1872 | 0.4405 | 0.4194 | 0.6219 | 0.4487 | 0.1691 | 0.2304 | 0.4091 | 0.6492 | 0.8080 | | 0.3387 | 27.0 | 1944 | 0.4449 | 0.4235 | 0.6176 | 0.4538 | 0.1716 | 0.2282 | 0.3807 | 0.6471 | 0.8122 | | 0.3826 | 28.0 | 2016 | 0.4521 | 0.4261 | 0.6176 | 0.4622 | 0.1716 | 0.2289 | 0.3887 | 0.6348 | 0.7957 | | 0.358 | 29.0 | 2088 | 0.4299 | 0.4113 | 0.6123 | 0.4165 | 0.1643 | 0.2209 | 0.4073 | 0.6734 | 0.8179 | | 0.3466 | 30.0 | 2160 | 0.4357 | 0.4154 | 0.6172 | 0.4177 | 0.1666 | 0.2237 | 0.4067 | 0.6619 | 0.8109 | | 0.3698 | 31.0 | 2232 | 0.4735 | 0.4469 | 0.6256 | 0.5423 | 0.1842 | 0.2425 | 0.3421 | 0.5840 | 0.7896 | | 0.3578 | 32.0 | 2304 | 0.4405 | 0.4156 | 0.6126 | 0.4429 | 0.1674 | 0.2253 | 0.4016 | 0.6521 | 0.8146 | | 0.3908 | 33.0 | 2376 | 0.4829 | 0.4584 | 0.6315 | 0.5698 | 0.1895 | 0.2472 | 0.3317 | 0.5601 | 0.7479 | | 0.3398 | 34.0 | 2448 | 0.4451 | 0.4253 | 0.6187 | 0.4517 | 0.1720 | 0.2283 | 0.3869 | 0.6347 | 0.8013 | | 0.3368 | 35.0 | 2520 | 0.4491 | 0.4259 | 0.6186 | 0.4619 | 0.1725 | 0.2299 | 0.3774 | 0.6392 | 0.8056 | | 0.3786 | 36.0 | 2592 | 0.4419 | 0.4254 | 0.6150 | 0.4497 | 0.1726 | 0.2262 | 0.3677 | 0.6346 | 0.8181 | | 0.3373 | 37.0 | 2664 | 0.4562 | 0.4365 | 0.6224 | 0.4909 | 0.1780 | 0.2346 | 0.3690 | 0.6071 | 0.7911 | | 0.3628 | 38.0 | 2736 | 0.4643 | 0.4433 | 0.6244 | 0.5107 | 0.1822 | 0.2378 | 0.3437 | 0.5898 | 0.7946 | | 0.3746 | 39.0 | 2808 | 0.4746 | 0.4525 | 0.6278 | 0.5310 | 0.1865 | 0.2417 | 0.3388 | 0.5716 | 0.7541 | | 0.3994 | 40.0 | 2880 | 0.4740 | 0.4498 | 0.6280 | 0.5399 | 0.1857 | 0.2431 | 0.3415 | 0.5791 | 0.7742 | | 0.3583 | 41.0 | 2952 | 0.4500 | 0.4260 | 0.6197 | 0.4717 | 0.1731 | 0.2318 | 0.3885 | 0.6316 | 0.8052 | | 0.369 | 42.0 | 3024 | 0.4369 | 0.4176 | 0.6181 | 0.4334 | 0.1681 | 0.2261 | 0.4051 | 0.6604 | 0.8066 | | 0.35 | 43.0 | 3096 | 0.4514 | 0.4297 | 0.6182 | 0.4802 | 0.1753 | 0.2321 | 0.3702 | 0.6155 | 0.8117 | | 0.3249 | 44.0 | 3168 | 0.4382 | 0.4209 | 0.6180 | 0.4332 | 0.1698 | 0.2256 | 0.3981 | 0.6443 | 0.8054 | | 0.3329 | 45.0 | 3240 | 0.4558 | 0.4380 | 0.6222 | 0.4840 | 0.1789 | 0.2335 | 0.3578 | 0.5989 | 0.7958 | | 0.3553 | 46.0 | 3312 | 0.4420 | 0.4173 | 0.6150 | 0.4520 | 0.1679 | 0.2274 | 0.4029 | 0.6572 | 0.8098 | | 0.3671 | 47.0 | 3384 | 0.4479 | 0.4294 | 0.6174 | 0.4734 | 0.1750 | 0.2304 | 0.3595 | 0.6255 | 0.8145 | | 0.3244 | 48.0 | 3456 | 0.4542 | 0.4369 | 0.6189 | 0.4872 | 0.1786 | 0.2325 | 0.3520 | 0.6026 | 0.8070 | | 0.3803 | 49.0 | 3528 | 0.4447 | 0.4256 | 0.6174 | 0.4635 | 0.1721 | 0.2291 | 0.3850 | 0.6347 | 0.8041 | | 0.332 | 50.0 | 3600 | 0.4434 | 0.4279 | 0.6167 | 0.4573 | 0.1735 | 0.2276 | 0.3689 | 0.6301 | 0.8082 | | 0.3249 | 51.0 | 3672 | 0.4379 | 0.4242 | 0.6170 | 0.4448 | 0.1716 | 0.2260 | 0.3783 | 0.6400 | 0.8117 | | 0.3257 | 52.0 | 3744 | 0.4277 | 0.4151 | 0.6169 | 0.4110 | 0.1664 | 0.2215 | 0.4075 | 0.6594 | 0.8110 | | 0.3256 | 53.0 | 3816 | 0.4493 | 0.4317 | 0.6189 | 0.4776 | 0.1755 | 0.2309 | 0.3654 | 0.6152 | 0.8077 | | 
0.3164 | 54.0 | 3888 | 0.4503 | 0.4303 | 0.6173 | 0.4821 | 0.1750 | 0.2313 | 0.3687 | 0.6181 | 0.8128 | | 0.3276 | 55.0 | 3960 | 0.4503 | 0.4322 | 0.6187 | 0.4765 | 0.1763 | 0.2311 | 0.3641 | 0.6098 | 0.8127 | | 0.3207 | 56.0 | 4032 | 0.4524 | 0.4320 | 0.6199 | 0.4807 | 0.1759 | 0.2324 | 0.3622 | 0.6234 | 0.8072 | | 0.3204 | 57.0 | 4104 | 0.4425 | 0.4238 | 0.6149 | 0.4532 | 0.1715 | 0.2266 | 0.3800 | 0.6413 | 0.8086 | | 0.3282 | 58.0 | 4176 | 0.4440 | 0.4267 | 0.6162 | 0.4592 | 0.1731 | 0.2278 | 0.3777 | 0.6260 | 0.8088 | | 0.3232 | 59.0 | 4248 | 0.4439 | 0.4298 | 0.6165 | 0.4603 | 0.1748 | 0.2278 | 0.3621 | 0.6190 | 0.8141 | | 0.307 | 60.0 | 4320 | 0.4452 | 0.4275 | 0.6165 | 0.4623 | 0.1737 | 0.2286 | 0.3741 | 0.6235 | 0.8105 | | 0.3142 | 61.0 | 4392 | 0.4432 | 0.4270 | 0.6159 | 0.4578 | 0.1732 | 0.2275 | 0.3763 | 0.6236 | 0.8133 | | 0.3062 | 62.0 | 4464 | 0.4422 | 0.4238 | 0.6150 | 0.4582 | 0.1717 | 0.2275 | 0.3829 | 0.6331 | 0.8189 | | 0.3037 | 63.0 | 4536 | 0.4306 | 0.4142 | 0.6132 | 0.4240 | 0.1663 | 0.2223 | 0.3992 | 0.6677 | 0.8193 | | 0.309 | 64.0 | 4608 | 0.4450 | 0.4277 | 0.6162 | 0.4625 | 0.1736 | 0.2282 | 0.3710 | 0.6242 | 0.8169 | | 0.3096 | 65.0 | 4680 | 0.4442 | 0.4277 | 0.6169 | 0.4601 | 0.1736 | 0.2283 | 0.3714 | 0.6262 | 0.8144 | | 0.3049 | 66.0 | 4752 | 0.4449 | 0.4278 | 0.6166 | 0.4622 | 0.1737 | 0.2285 | 0.3725 | 0.6273 | 0.8122 | | 0.3324 | 67.0 | 4824 | 0.4416 | 0.4264 | 0.6159 | 0.4511 | 0.1728 | 0.2265 | 0.3731 | 0.6284 | 0.8145 | | 0.3183 | 68.0 | 4896 | 0.4405 | 0.4243 | 0.6149 | 0.4501 | 0.1718 | 0.2261 | 0.3786 | 0.6315 | 0.8173 | | 0.3178 | 69.0 | 4968 | 0.4397 | 0.4240 | 0.6150 | 0.4488 | 0.1716 | 0.2259 | 0.3779 | 0.6332 | 0.8168 | | 0.3159 | 70.0 | 5040 | 0.4365 | 0.4212 | 0.6138 | 0.4396 | 0.1701 | 0.2242 | 0.3836 | 0.6404 | 0.8173 | | 0.3266 | 71.0 | 5112 | 0.4397 | 0.4244 | 0.6145 | 0.4480 | 0.1718 | 0.2256 | 0.3773 | 0.6312 | 0.8161 | | 0.3234 | 72.0 | 5184 | 0.4384 | 0.4237 | 0.6144 | 0.4451 | 0.1714 | 0.2251 | 0.3761 | 0.6346 | 0.8177 | | 0.3108 | 73.0 | 5256 | 0.4371 | 0.4219 | 0.6144 | 0.4429 | 0.1705 | 0.2250 | 0.3820 | 0.6395 | 0.8174 | | 0.3184 | 74.0 | 5328 | 0.4351 | 0.4206 | 0.6138 | 0.4381 | 0.1697 | 0.2240 | 0.3850 | 0.6430 | 0.8182 | | 0.3152 | 75.0 | 5400 | 0.4360 | 0.4211 | 0.6143 | 0.4394 | 0.1700 | 0.2243 | 0.3835 | 0.6419 | 0.8181 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
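## Usage (example)

A minimal inference sketch for this depth-estimation checkpoint. The hub path below is a placeholder, since this card does not state the final repo id:

```python
import torch
from PIL import Image
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation

# Placeholder repo id; substitute the actual path of this checkpoint.
repo = "<user>/glpn-nyu-finetuned-diode-230115-063851"
feature_extractor = GLPNFeatureExtractor.from_pretrained(repo)
model = GLPNForDepthEstimation.from_pretrained(repo)

image = Image.open("example.jpg")  # any RGB image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
depth = outputs.predicted_depth.squeeze().cpu().numpy()  # per-pixel depth map
```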
Doiman/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- science
widget:
- text: A painting of StarTrek starship, Michelangelo style
---

# DreamBooth model for the StarTrek concept trained by vumichien on the vumichien/spaceship_star_trek dataset.

<img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/1_dlgd3k5ZecT17cJOrg2NdA.jpeg" alt="StarTrek starship">

This is a Stable Diffusion model fine-tuned on the StarTrek concept with DreamBooth. It can be used by modifying the `instance_prompt`: **StarTrek starship**

This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!

## Description

This is a Stable Diffusion model fine-tuned on `starship` images for the science theme.

## Examples

<figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Leonardo%20Da%20Vinci%20style.png" alt="StarTrek starship - Leonardo Da Vinci style"> <figcaption>Text prompt used: A painting of StarTrek starship, Leonardo Da Vinci style</figcaption> </figure>

<figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Michelangelo%20style.png" alt="StarTrek starship - Michelangelo style"> <figcaption>Text prompt used: A painting of StarTrek starship, Michelangelo style</figcaption> </figure>

<figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Botero%20style.png" alt="StarTrek starship - Botero style"> <figcaption>Text prompt used: A painting of StarTrek starship, Botero style</figcaption> </figure>

<figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Pierre-Auguste%20Renoir%20style.png" alt="StarTrek starship - Pierre-Auguste Renoir style"> <figcaption>Text prompt used: A painting of StarTrek starship, Pierre-Auguste Renoir style</figcaption> </figure>

<figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Vincent%20Van%20Gogh%20style.png" alt="StarTrek starship - Vincent Van Gogh style"> <figcaption>Text prompt used: A painting of StarTrek starship, Vincent Van Gogh style</figcaption> </figure>

<figure> <img src="https://huggingface.co/vumichien/StarTrek-starship/resolve/main/Rembrandt%20style.png" alt="StarTrek starship - Rembrandt style"> <figcaption>Text prompt used: A painting of StarTrek starship, Rembrandt style</figcaption> </figure>

## Usage

```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('vumichien/StarTrek-starship')
# The pipeline requires a text prompt; this one mirrors the widget example.
image = pipeline("A painting of StarTrek starship, Michelangelo style").images[0]
image
```
albert-base-v1
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38,156
2023-01-15T11:29:12Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: s-himmi/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
albert-large-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
687
2023-01-15T11:35:46Z
--- library_name: stable-baselines3 tags: - CartpoleDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartpoleDMC-v0 type: CartpoleDMC-v0 metrics: - type: mean_reward value: 999.66 +/- 0.25 name: mean_reward verified: false --- # **PPO** Agent playing **CartpoleDMC-v0** This is a trained model of a **PPO** agent playing **CartpoleDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ppo --env CartpoleDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ppo --env CartpoleDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ppo --env CartpoleDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ppo --env CartpoleDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ppo --env CartpoleDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ppo --env CartpoleDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('n_timesteps', 1000000.0), ('policy', 'MlpPolicy'), ('normalize', False)]) ```
albert-xlarge-v2
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,973
2023-01-15T11:44:00Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1527 - Accuracy: 0.9522 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 932 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0091 | 0.25 | 233 | 1.0615 | 0.6618 | | 0.365 | 1.25 | 466 | 0.5371 | 0.8051 | | 0.1671 | 2.25 | 699 | 0.3670 | 0.8897 | | 0.0051 | 3.25 | 932 | 0.1527 | 0.9522 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
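## Usage (example)

A minimal inference sketch, assuming a 16-frame clip and a placeholder hub path (the card does not state the final repo id):

```python
import numpy as np
import torch
from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification

# Placeholder repo id; substitute the actual path of this checkpoint.
repo = "<user>/videomae-base-finetuned-ucf101-subset"
feature_extractor = VideoMAEFeatureExtractor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 RGB frames of 224x224; in practice decode these from a video file.
video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = feature_extractor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```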
albert-xxlarge-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7,091
2023-01-15T11:44:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: train args: split metrics: - name: Accuracy type: accuracy value: 0.9435 - name: F1 type: f1 value: 0.9436365142942252 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1610 - Accuracy: 0.9435 - F1: 0.9436 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.063 | 1.0 | 250 | 0.1568 | 0.941 | 0.9414 | | 0.0417 | 2.0 | 500 | 0.1610 | 0.9435 | 0.9436 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
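## Usage (example)

A minimal inference sketch; the hub path is a placeholder, and the labels follow the six classes of the emotion dataset:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual path of this checkpoint.
classifier = pipeline("text-classification", model="<user>/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see the results!"))  # returns the top emotion label with a score
```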
bert-base-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,621,271
2023-01-15T11:50:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - food101 metrics: - accuracy model-index: - name: my_awesome_food_model results: - task: name: Image Classification type: image-classification dataset: name: food101 type: food101 config: default split: train[:5000] args: default metrics: - name: Accuracy type: accuracy value: 0.897 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 1.5916 - Accuracy: 0.897 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6742 | 0.99 | 62 | 2.5104 | 0.821 | | 1.8036 | 1.99 | 124 | 1.7824 | 0.863 | | 1.591 | 2.99 | 186 | 1.5916 | 0.897 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
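## Usage (example)

A minimal inference sketch with a placeholder hub path; the pipeline accepts a local path, URL, or PIL image:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual path of this checkpoint.
classifier = pipeline("image-classification", model="<user>/my_awesome_food_model")
print(classifier("dish.jpg"))  # top food101 labels with confidence scores
```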
bert-base-chinese
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "zh", "arxiv:1810.04805", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,377,486
2023-01-15T11:59:47Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 256.70 +/- 16.45
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading-and-evaluation sketch; the repo id and zip filename below are placeholders to fill in with this model's actual files:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders; substitute this repo's id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
bert-base-german-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "exbert", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
175,983
null
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: sdcid
---

### Sample pictures of: sdcid (use this token in your prompt)

![sdcid 0](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%286%29.jpg)
![sdcid 1](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%287%29.jpg)
![sdcid 2](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%281%29.jpg)
![sdcid 3](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%283%29.jpg)
![sdcid 4](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%284%29.jpg)
![sdcid 5](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%285%29.jpg)
![sdcid 6](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%282%29.jpg)
bert-base-uncased
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
59,663,489
2023-01-15T12:30:10Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: small-mlm-glue-mrpc-custom-tokenizer-target-glue-mnli
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# small-mlm-glue-mrpc-custom-tokenizer-target-glue-mnli

This model is a fine-tuned version of [muhtasham/small-mlm-glue-mrpc-custom-tokenizer](https://huggingface.co/muhtasham/small-mlm-glue-mrpc-custom-tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8198
- Accuracy: 0.6328

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.052         | 0.04  | 500  | 1.0262          | 0.4857   |
| 0.9703        | 0.08  | 1000 | 0.9454          | 0.5575   |
| 0.9365        | 0.12  | 1500 | 0.9063          | 0.5835   |
| 0.9001        | 0.16  | 2000 | 0.8998          | 0.5903   |
| 0.8888        | 0.2   | 2500 | 0.8922          | 0.5980   |
| 0.8817        | 0.24  | 3000 | 0.8697          | 0.6075   |
| 0.8669        | 0.29  | 3500 | 0.8602          | 0.6139   |
| 0.8538        | 0.33  | 4000 | 0.8531          | 0.6174   |
| 0.8356        | 0.37  | 4500 | 0.8470          | 0.6250   |
| 0.8401        | 0.41  | 5000 | 0.8198          | 0.6328   |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
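## Usage (example)

A minimal NLI inference sketch. The hub path is assumed from the base-model naming and may differ, and labels may surface as LABEL_0 to LABEL_2 unless id2label is configured:

```python
from transformers import pipeline

# Assumed repo id (based on the card name); verify before use.
nli = pipeline("text-classification", model="muhtasham/small-mlm-glue-mrpc-custom-tokenizer-target-glue-mnli")
print(nli({"text": "A man is playing a guitar.", "text_pair": "A person is making music."}))
```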
bert-large-cased-whole-word-masking-finetuned-squad
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,214
2023-01-15T12:35:34Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: small-mlm-glue-qqp-target-glue-mnli
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# small-mlm-glue-qqp-target-glue-mnli

This model is a fine-tuned version of [muhtasham/small-mlm-glue-qqp](https://huggingface.co/muhtasham/small-mlm-glue-qqp) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6551
- Accuracy: 0.7219

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9185        | 0.04  | 500  | 0.8285          | 0.6395   |
| 0.8182        | 0.08  | 1000 | 0.7859          | 0.6628   |
| 0.7779        | 0.12  | 1500 | 0.7475          | 0.6761   |
| 0.7565        | 0.16  | 2000 | 0.7283          | 0.6913   |
| 0.7477        | 0.2   | 2500 | 0.7180          | 0.6929   |
| 0.7376        | 0.24  | 3000 | 0.7028          | 0.6964   |
| 0.7185        | 0.29  | 3500 | 0.6840          | 0.7137   |
| 0.7051        | 0.33  | 4000 | 0.6747          | 0.7190   |
| 0.6785        | 0.37  | 4500 | 0.6846          | 0.7192   |
| 0.685         | 0.41  | 5000 | 0.6551          | 0.7219   |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
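## Usage (example)

A minimal NLI inference sketch using the `Auto` classes; the hub path is assumed from the card name and should be verified:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id (based on the card name); verify before use.
repo = "muhtasham/small-mlm-glue-qqp-target-glue-mnli"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The cat sat on the mat.", "An animal is resting.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # one probability per MNLI class
```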
bert-large-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388,769
2023-01-15T12:46:05Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: Danimp94/ppo-Huggy-t1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
bert-large-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,058,496
2023-01-15T12:52:49Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 16.40 +/- 9.31
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.

To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction

# **Training hyperparameters**

```python
pixelcopter_hyperparameters = {
    "h_size": 32,                  # hidden-layer size of the policy network
    "n_training_episodes": 40000,
    "n_evaluation_episodes": 10,
    "max_t": 5000,                 # max timesteps per episode
    "gamma": 0.98,                 # discount factor
    "lr": 1e-5,                    # learning rate
    "env_id": env_id,              # "Pixelcopter-PLE-v0", set during environment creation
    "state_space": s_size,         # observation-space size taken from the environment
    "action_space": a_size,        # action-space size taken from the environment
}
```
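For reference, a sketch of the policy network these hyperparameters configure, following the standard Unit 4 implementation from the course (an assumption; the exact architecture is not stated in this card):

```python
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    # Maps observations of size s_size through a hidden layer of size h_size
    # to a softmax over a_size actions.
    def __init__(self, s_size, a_size, h_size):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)
```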
distilbert-base-multilingual-cased
[ "pytorch", "tf", "onnx", "safetensors", "distilbert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,339,633
2023-01-15T13:05:04Z
--- tags: - generated_from_trainer datasets: - custom_squad_v2 model-index: - name: kobigbird-pure2-36745331 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-pure2-36745331 This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 42 | 1.7149 | | No log | 1.99 | 84 | 1.4250 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
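## Usage (example)

A minimal extractive-QA sketch for this Korean checkpoint; the hub path is a placeholder:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual path of this checkpoint.
qa = pipeline("question-answering", model="<user>/kobigbird-pure2-36745331")
# "What is the capital of South Korea?" / "The capital of South Korea is Seoul."
print(qa(question="대한민국의 수도는 어디인가요?", context="대한민국의 수도는 서울이다."))
```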
gpt2-large
[ "pytorch", "tf", "jax", "rust", "safetensors", "gpt2", "text-generation", "en", "arxiv:1910.09700", "transformers", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,454,819
2023-01-15T13:15:52Z
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: >-
    a rabbit wearing sunglasses, in the style of <guo-chao> illustration,
    trending on artstation, masterpiece, best quality
language:
- zh
- en
---

# DreamBooth model for China-Chic-illustration

This is the first model for the `China-Chic illustration` (国潮插画) style of painting. The model is based on Stable Diffusion, fine-tuned with DreamBooth to teach it the `China-Chic illustration` style. It was trained by the tilake AIGC group on their own dataset.

It can be used by modifying the `instance_prompt`: **style of \<guo-chao\> illustration**

# Gradio

We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run China-Chic-illustration:
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/China-Chic-illustration)

## Description

China-Chic illustration (国潮插画) is an illustration style that combines traditional Chinese culture with modern aesthetic trends. This is a Stable Diffusion model fine-tuned on `China-Chic illustration` style images, trained by the Tilake AIGC Group; it is released to offer inspiration and creative ideas to artists and practitioners in the field. If you like this model, click the \[❤ like\] button!

## Examples

- Prompt: ```a cute rabbit in red clothes, in the style of <guo-chao> illustration, trending on artstation, masterpiece, best quality``` (China-Chic rabbit)

<img width="200px" height="200px" src="https://huggingface.co/tilake/China-Chic-illustration/resolve/main/example/1.jpg">

- Prompt: ```dragon dance, in the style of <guo-chao> illustration, trending on artstation, masterpiece, best quality``` (China-Chic dragon dance)

<img width="200px" height="200px" src="https://huggingface.co/tilake/China-Chic-illustration/resolve/main/example/2.jpg">

- Prompt: ```fireworks, in the style of <guo-chao> illustration, trending on artstation, masterpiece, best quality``` (China-Chic fireworks)

<img width="200px" height="200px" src="https://huggingface.co/tilake/China-Chic-illustration/resolve/main/example/3.jpg">

- Prompt: ```a snowman, fireworks in the background, in the style of <guo-chao> illustration, trending on artstation, masterpiece, best quality``` (China-Chic snowman)

<img width="200px" height="200px" src="https://huggingface.co/tilake/China-Chic-illustration/resolve/main/example/4.jpg">

## Usage

```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('tilake/China-Chic-illustration')
# The author notes guidance_scale=8.8 may work best for this model.
image = pipeline("style of <guo-chao> illustration, a rabbit wearing sunglasses", guidance_scale=8.8).images[0]
image
```
0xDEADBEA7/DialoGPT-small-rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2023-01-15T14:27:03Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
1712871/manual_vn_electra_small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-15T14:27:21Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
AAli/wav2vec2-base-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: MRingive/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
AI4Sec/cyner-xlm-roberta-base
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-01-15T15:27:48Z
--- license: mit library_name: sklearn tags: - sklearn - skops - tabular-classification model_file: example.pkl widget: structuredData: area error: - 30.29 - 96.05 - 48.31 compactness error: - 0.01911 - 0.01652 - 0.01484 concave points error: - 0.01037 - 0.0137 - 0.01093 concavity error: - 0.02701 - 0.02269 - 0.02813 fractal dimension error: - 0.003586 - 0.001698 - 0.002461 mean area: - 481.9 - 1130.0 - 748.9 mean compactness: - 0.1058 - 0.1029 - 0.1223 mean concave points: - 0.03821 - 0.07951 - 0.08087 mean concavity: - 0.08005 - 0.108 - 0.1466 mean fractal dimension: - 0.06373 - 0.05461 - 0.05796 mean perimeter: - 81.09 - 123.6 - 101.7 mean radius: - 12.47 - 18.94 - 15.46 mean smoothness: - 0.09965 - 0.09009 - 0.1092 mean symmetry: - 0.1925 - 0.1582 - 0.1931 mean texture: - 18.6 - 21.31 - 19.48 perimeter error: - 2.497 - 5.486 - 3.094 radius error: - 0.3961 - 0.7888 - 0.4743 smoothness error: - 0.006953 - 0.004444 - 0.00624 symmetry error: - 0.01782 - 0.01386 - 0.01397 texture error: - 1.044 - 0.7975 - 0.7859 worst area: - 677.9 - 1866.0 - 1156.0 worst compactness: - 0.2378 - 0.2336 - 0.2394 worst concave points: - 0.1015 - 0.1789 - 0.1514 worst concavity: - 0.2671 - 0.2687 - 0.3791 worst fractal dimension: - 0.0875 - 0.06589 - 0.08019 worst perimeter: - 96.05 - 165.9 - 124.9 worst radius: - 14.97 - 24.86 - 19.26 worst smoothness: - 0.1426 - 0.1193 - 0.1546 worst symmetry: - 0.3014 - 0.2551 - 0.2837 worst texture: - 24.64 - 26.58 - 26.0 --- # Model description [More Information Needed] ## Intended uses & limitations [More Information Needed] ## Training Procedure ### Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameter | Value | |--------------------------|---------| | ccp_alpha | 0.0 | | class_weight | | | criterion | gini | | max_depth | | | max_features | | | max_leaf_nodes | | | min_impurity_decrease | 0.0 | | min_samples_leaf | 1 | | min_samples_split | 2 | | min_weight_fraction_leaf | 0.0 | | random_state | | | splitter | best | </details> ### Model Plot The model plot is below. 
```
DecisionTreeClassifier()
```

## Evaluation Results

The model is evaluated on a test split, using accuracy and F1 score with macro average.

| Metric   | Value   |
|----------|---------|
| accuracy | 0.94152 |
| f1 score | 0.94152 |

![confusion_matrix](confusion_matrix.png)

# How to Get Started with the Model

```python
import pickle

# dtc_pkl_filename is the path to the pickled model file, e.g. "example.pkl" from this repository
with open(dtc_pkl_filename, 'rb') as file:
    clf = pickle.load(file)
```

# Model Card Authors

This model card is written by the following authors:

skops_user

# Model Card Contact

You can contact the model card authors through the following channels:

[More Information Needed]

# Citation

Below you can find information related to citation.

**BibTeX:**

```bibtex
@inproceedings{...,year={2020}}
```

# Additional Information

- **Model description:** This is a DecisionTreeClassifier model trained on the breast cancer dataset.
- **Limitations:** This model is not ready to be used in production.
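As a quick check (a sketch, assuming the pickle above has been loaded as `clf`), the classifier can be applied to features from the scikit-learn breast cancer dataset it was trained on:

```python
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
print(clf.predict(X[:3]))  # predicted classes for the first three records
```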
AI4Sec/cyner-xlm-roberta-large
[ "xlm-roberta", "token-classification", "transformers", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-01-15T15:29:24Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="Beegbrain/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AdapterHub/bert-base-uncased-pf-rte
[ "bert", "en", "arxiv:2104.08247", "adapter-transformers", "text-classification", "adapterhub:nli/rte" ]
text-classification
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake-v1-4x4_slippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4
      type: FrozenLake-v1-4x4
    metrics:
    - type: mean_reward
      value: 0.73 +/- 0.45
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="asubiabre/FrozenLake-v1-4x4_slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_no_lm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-16T06:31:50Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="Bingsu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ba", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "license:apache-2.0", "model-index", "has_space" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
64
null
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-chinese-finetuned-ner-food_requirement results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-finetuned-ner-food_requirement This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0046 - F1: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.169 | 1.0 | 3 | 1.7537 | 0.0 | | 1.598 | 2.0 | 6 | 1.0914 | 0.5169 | | 1.1569 | 3.0 | 9 | 0.6879 | 0.7057 | | 0.6669 | 4.0 | 12 | 0.4194 | 0.8607 | | 0.485 | 5.0 | 15 | 0.2528 | 0.9333 | | 0.2807 | 6.0 | 18 | 0.1476 | 0.9836 | | 0.2015 | 7.0 | 21 | 0.0834 | 0.9873 | | 0.1145 | 8.0 | 24 | 0.0484 | 0.9924 | | 0.0809 | 9.0 | 27 | 0.0283 | 1.0 | | 0.0495 | 10.0 | 30 | 0.0180 | 1.0 | | 0.0377 | 11.0 | 33 | 0.0126 | 1.0 | | 0.0219 | 12.0 | 36 | 0.0095 | 1.0 | | 0.0216 | 13.0 | 39 | 0.0076 | 1.0 | | 0.015 | 14.0 | 42 | 0.0065 | 1.0 | | 0.0176 | 15.0 | 45 | 0.0059 | 1.0 | | 0.0161 | 16.0 | 48 | 0.0053 | 1.0 | | 0.0133 | 17.0 | 51 | 0.0050 | 1.0 | | 0.0119 | 18.0 | 54 | 0.0048 | 1.0 | | 0.0116 | 19.0 | 57 | 0.0047 | 1.0 | | 0.0126 | 20.0 | 60 | 0.0046 | 1.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.0+cu102 - Datasets 1.18.4 - Tokenizers 0.12.1
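## Usage (example)

A minimal token-classification sketch; the hub path is a placeholder, and the entity labels depend on this model's tag set, which the card does not state:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual path of this checkpoint.
ner = pipeline("token-classification", model="<user>/bert-base-chinese-finetuned-ner-food_requirement", aggregation_strategy="simple")
print(ner("我想要一份不加辣的牛肉面"))  # "I'd like a bowl of beef noodles, no chili"
```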
AimB/mT5-en-kr-aihub-netflix
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-16T06:44:55Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Bingsu/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Akashpb13/xlsr_hungarian_new
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hu", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-01-16T07:51:32Z
--- library_name: stable-baselines3 tags: - CartpoleSparseDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartpoleSparseDMC-v0 type: CartpoleSparseDMC-v0 metrics: - type: mean_reward value: 1000.00 +/- 0.00 name: mean_reward verified: false --- # **DDPG** Agent playing **CartpoleSparseDMC-v0** This is a trained model of a **DDPG** agent playing **CartpoleSparseDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleSparseDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleSparseDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleSparseDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleSparseDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env CartpoleSparseDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env CartpoleSparseDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
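As a complement to the zoo commands above, here is a short sketch of loading the downloaded agent directly with stable-baselines3. This is an illustration added to the card, not part of it: the checkpoint path assumes the usual RL Zoo folder layout (`logs/<algo>/<env>_<run-id>/<env>.zip`), and `CartpoleSparseDMC-v0` must already be registered with gym by the DMC wrapper package before `gym.make` will resolve it:

```python
import gym
from stable_baselines3 import DDPG

# Assumed RL Zoo layout -- adjust to wherever the download step placed the file.
model = DDPG.load("logs/ddpg/CartpoleSparseDMC-v0_1/CartpoleSparseDMC-v0.zip")

env = gym.make("CartpoleSparseDMC-v0")  # requires the package that registers this env id
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)  # deterministic policy for evaluation
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```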
Akashpb13/xlsr_kurmanji_kurdish
[ "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "kmr", "ku", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-01-16T07:52:27Z
--- library_name: stable-baselines3 tags: - CartpoleSwingupSparseDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartpoleSwingupSparseDMC-v0 type: CartpoleSwingupSparseDMC-v0 metrics: - type: mean_reward value: 344.20 +/- 1.66 name: mean_reward verified: false --- # **DDPG** Agent playing **CartpoleSwingupSparseDMC-v0** This is a trained model of a **DDPG** agent playing **CartpoleSwingupSparseDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleSwingupSparseDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleSwingupSparseDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleSwingupSparseDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleSwingupSparseDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env CartpoleSwingupSparseDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env CartpoleSwingupSparseDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
Akashpb13/xlsr_maltese_wav2vec2
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "mt", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-01-16T07:53:19Z
--- library_name: stable-baselines3 tags: - CartpoleTwoPolesDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartpoleTwoPolesDMC-v0 type: CartpoleTwoPolesDMC-v0 metrics: - type: mean_reward value: 274.78 +/- 23.12 name: mean_reward verified: false --- # **DDPG** Agent playing **CartpoleTwoPolesDMC-v0** This is a trained model of a **DDPG** agent playing **CartpoleTwoPolesDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleTwoPolesDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleTwoPolesDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleTwoPolesDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleTwoPolesDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env CartpoleTwoPolesDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env CartpoleTwoPolesDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
Akbarariza/Anjar
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-16T07:54:01Z
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.45 +/- 0.50 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Bingsu/q-FrozenLake-v1-8x8", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Akira-Yana/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-16T07:54:10Z
--- library_name: stable-baselines3 tags: - CartpoleThreePolesDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartpoleThreePolesDMC-v0 type: CartpoleThreePolesDMC-v0 metrics: - type: mean_reward value: 161.39 +/- 18.53 name: mean_reward verified: false --- # **DDPG** Agent playing **CartpoleThreePolesDMC-v0** This is a trained model of a **DDPG** agent playing **CartpoleThreePolesDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleThreePolesDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleThreePolesDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env CartpoleThreePolesDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env CartpoleThreePolesDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env CartpoleThreePolesDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env CartpoleThreePolesDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
Akiva/Joke
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-16T07:55:13Z
--- library_name: stable-baselines3 tags: - FingerSpinDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FingerSpinDMC-v0 type: FingerSpinDMC-v0 metrics: - type: mean_reward value: 0.00 +/- 0.00 name: mean_reward verified: false --- # **DDPG** Agent playing **FingerSpinDMC-v0** This is a trained model of a **DDPG** agent playing **FingerSpinDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env FingerSpinDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env FingerSpinDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env FingerSpinDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env FingerSpinDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env FingerSpinDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env FingerSpinDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
AlanDev/DallEMiniButBetter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-16T08:09:15Z
--- library_name: stable-baselines3 tags: - ManipulatorBringPegDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ManipulatorBringPegDMC-v0 type: ManipulatorBringPegDMC-v0 metrics: - type: mean_reward value: 0.15 +/- 0.31 name: mean_reward verified: false --- # **DDPG** Agent playing **ManipulatorBringPegDMC-v0** This is a trained model of a **DDPG** agent playing **ManipulatorBringPegDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env ManipulatorBringPegDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env ManipulatorBringPegDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env ManipulatorBringPegDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env ManipulatorBringPegDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env ManipulatorBringPegDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env ManipulatorBringPegDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
AlanDev/dall-e-better
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-16T08:10:30Z
--- library_name: stable-baselines3 tags: - ManipulatorInsertBallDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ManipulatorInsertBallDMC-v0 type: ManipulatorInsertBallDMC-v0 metrics: - type: mean_reward value: 0.00 +/- 0.00 name: mean_reward verified: false --- # **DDPG** Agent playing **ManipulatorInsertBallDMC-v0** This is a trained model of a **DDPG** agent playing **ManipulatorInsertBallDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env ManipulatorInsertBallDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env ManipulatorInsertBallDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env ManipulatorInsertBallDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env ManipulatorInsertBallDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env ManipulatorInsertBallDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env ManipulatorInsertBallDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
Aleksandar/bert-srb-ner-setimes
[ "pytorch", "bert", "token-classification", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-01-16T08:16:20Z
--- library_name: stable-baselines3 tags: - ReacherHardDMC-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ReacherHardDMC-v0 type: ReacherHardDMC-v0 metrics: - type: mean_reward value: 389.80 +/- 474.34 name: mean_reward verified: false --- # **DDPG** Agent playing **ReacherHardDMC-v0** This is a trained model of a **DDPG** agent playing **ReacherHardDMC-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ddpg --env ReacherHardDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env ReacherHardDMC-v0 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo ddpg --env ReacherHardDMC-v0 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo ddpg --env ReacherHardDMC-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo ddpg --env ReacherHardDMC-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ddpg --env ReacherHardDMC-v0 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('gamma', 0.99), ('learning_rate', 0.0001), ('n_timesteps', 1000000.0), ('noise_std', 0.3), ('noise_type', 'ornstein-uhlenbeck'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=dict(pi=[300, 200], qf=[400, 300]))'), ('normalize', False)]) ```
Aleksandar1932/distilgpt2-rock
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2023-01-16T08:30:02Z
--- language: - id license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small Id - TheRains results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Id - TheRains This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
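Since the card lists only training settings, here is a minimal transcription sketch. It is an addition to the card; the repo id `TheRains/whisper-small-id` is inferred from the model name and is an assumption to be replaced with the real checkpoint location:

```python
from transformers import pipeline

# Hypothetical repo id inferred from the card title -- replace with the real one.
asr = pipeline("automatic-speech-recognition", model="TheRains/whisper-small-id")

# Any reasonably short audio clip; longer recordings can be handled with chunk_length_s.
print(asr("sample_indonesian.wav")["text"])
```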
AlekseyKulnevich/Pegasus-QuestionGeneration
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "PegasusForConditionalGeneration" ], "model_type": "pegasus", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 157.50 +/- 104.22 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Niraya666 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Niraya666 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Niraya666 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
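The same agent can also be driven from Python without the zoo scripts. The sketch below is an added illustration: the checkpoint path assumes the standard RL Zoo layout, and the environment is rebuilt with the AtariWrapper preprocessing and 4-frame stack listed in the hyperparameters above:

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed RL Zoo layout -- adjust to where the model was downloaded.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# make_atari_env applies the same AtariWrapper used during training; stack 4 frames to match.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```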
Alerosae/SocratesGPT-2
[ "pytorch", "gpt2", "feature-extraction", "en", "transformers", "text-generation" ]
text-generation
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit --- ### kukkia on Stable Diffusion This is the `<flowerpaintingstyle>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<flowerpaintingstyle> 0](https://huggingface.co/sd-concepts-library/kukkia/resolve/main/concept_images/1.jpeg) ![<flowerpaintingstyle> 1](https://huggingface.co/sd-concepts-library/kukkia/resolve/main/concept_images/3.jpeg) ![<flowerpaintingstyle> 2](https://huggingface.co/sd-concepts-library/kukkia/resolve/main/concept_images/0.jpeg) ![<flowerpaintingstyle> 3](https://huggingface.co/sd-concepts-library/kukkia/resolve/main/concept_images/2.jpeg)
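Besides the linked notebooks, newer `diffusers` releases can load the embedding directly. This sketch is an addition to the card and assumes a Stable Diffusion v1-style base model and a `diffusers` version recent enough to provide `load_textual_inversion`:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fetch the learned <flowerpaintingstyle> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/kukkia")

image = pipe("a bouquet of tulips, <flowerpaintingstyle>").images[0]
image.save("kukkia_tulips.png")
```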
AlexMaclean/sentence-compression-roberta
[ "pytorch", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: mit tags: - generated_from_trainer datasets: - sst2 model-index: - name: finetuned_gpt2-medium_sst2_negation0.0001_pretrainedTrue_epochs1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_gpt2-medium_sst2_negation0.0001_pretrainedTrue_epochs1 This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 2.8742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3224 | 1.0 | 1322 | 2.8742 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.0 - Datasets 2.8.0 - Tokenizers 0.13.2
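The card records training only; a minimal generation sketch follows. It is an addition, and the repo id below is a hypothetical placeholder built from the model name:

```python
from transformers import pipeline

# Hypothetical repo id -- replace with where this fine-tuned checkpoint actually lives.
generator = pipeline(
    "text-generation",
    model="your-username/finetuned_gpt2-medium_sst2_negation0.0001_pretrainedTrue_epochs1",
)

# The model was tuned on SST-2 movie-review text, so a review-like prompt fits.
print(generator("The movie was", max_new_tokens=20, do_sample=True)[0]["generated_text"])
```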
AlexN/xls-r-300m-fr-0
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-first results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AlexN/xls-r-300m-fr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
2023-01-16T09:31:03Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: tiny-mlm-snli-target-glue-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-snli-target-glue-cola This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7851 - Matthews Correlation: 0.1125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.61 | 1.87 | 500 | 0.6206 | 0.0 | | 0.6008 | 3.73 | 1000 | 0.6153 | 0.0257 | | 0.5837 | 5.6 | 1500 | 0.6237 | 0.0218 | | 0.5551 | 7.46 | 2000 | 0.6447 | 0.0688 | | 0.5306 | 9.33 | 2500 | 0.6594 | 0.0973 | | 0.5103 | 11.19 | 3000 | 0.6779 | 0.0957 | | 0.4842 | 13.06 | 3500 | 0.6971 | 0.1010 | | 0.4648 | 14.93 | 4000 | 0.7289 | 0.1170 | | 0.4467 | 16.79 | 4500 | 0.7530 | 0.0991 | | 0.4266 | 18.66 | 5000 | 0.7851 | 0.1125 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
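To show how the fine-tuned classifier would be used, here is a small sketch added to the card; the repo id is a hypothetical placeholder, since the card names only the run, not where the checkpoint was pushed:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual checkpoint location.
classifier = pipeline(
    "text-classification",
    model="your-username/tiny-mlm-snli-target-glue-cola",
)

# CoLA is a binary acceptability task: the positive label means "grammatically acceptable".
print(classifier("The book was written by the author."))
```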
Alireza1044/albert-base-v2-cola
[ "pytorch", "tensorboard", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- tags: - dreambooth-hackathon pipeline_tag: text-to-image --- An art-style model based on Season 1 of *Zhongguo Qitan* (中国奇谭, "Chinese folktales"). Example usage: ```python import torch from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("sd-dreambooth-library/cnstory", torch_dtype=torch.float16, use_auth_token=True) pipe = pipe.to("cuda") prompt = "a photograph of an astronaut riding a horse, cnstory artstyle" image = pipe(prompt).images[0] image.save(f"astronaut_rides_horse.png") ``` ![下载.png](https://s3.amazonaws.com/moonup/production/uploads/1673866469001-63044d493926de1f7ec709f4.png) ![下载.png](https://s3.amazonaws.com/moonup/production/uploads/1673871174058-63044d493926de1f7ec709f4.png) ![img_v2_7ced34a2-4084-43ba-a02c-439b583eaf1g.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673871184343-63044d493926de1f7ec709f4.jpeg) ![下载 (1).png](https://s3.amazonaws.com/moonup/production/uploads/1673871191196-63044d493926de1f7ec709f4.png)
Alireza1044/albert-base-v2-mnli
[ "pytorch", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
235
null
--- license: mit tags: - generated_from_trainer datasets: - sst2 model-index: - name: finetuned_gpt2-xl_sst2_negation0.001_pretrainedTrue_epochs1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_gpt2-xl_sst2_negation0.001_pretrainedTrue_epochs1 This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 2.9199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8707 | 1.0 | 1322 | 2.9199 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.0 - Datasets 2.8.0 - Tokenizers 0.13.2
Alireza1044/albert-base-v2-qnli
[ "pytorch", "tensorboard", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
41
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: tiny-mlm-snli-target-glue-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-snli-target-glue-mrpc This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1053 - Accuracy: 0.6814 - F1: 0.7601 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5879 | 4.35 | 500 | 0.5553 | 0.7279 | 0.8189 | | 0.4565 | 8.7 | 1000 | 0.5597 | 0.7598 | 0.8388 | | 0.3208 | 13.04 | 1500 | 0.6303 | 0.7426 | 0.8217 | | 0.2133 | 17.39 | 2000 | 0.7777 | 0.7230 | 0.8094 | | 0.137 | 21.74 | 2500 | 1.1053 | 0.6814 | 0.7601 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
Alireza1044/albert-base-v2-qqp
[ "pytorch", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: FBM/ppo-Piramids-Training1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Alireza1044/albert-base-v2-rte
[ "pytorch", "tensorboard", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-mlm-snli-target-glue-qnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-snli-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4710 - Accuracy: 0.7811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6125 | 0.15 | 500 | 0.5374 | 0.7371 | | 0.5442 | 0.31 | 1000 | 0.5321 | 0.7414 | | 0.5223 | 0.46 | 1500 | 0.4991 | 0.7628 | | 0.5165 | 0.61 | 2000 | 0.5155 | 0.7545 | | 0.5118 | 0.76 | 2500 | 0.4795 | 0.7752 | | 0.5052 | 0.92 | 3000 | 0.4663 | 0.7856 | | 0.4916 | 1.07 | 3500 | 0.4500 | 0.7955 | | 0.4818 | 1.22 | 4000 | 0.4669 | 0.7811 | | 0.4685 | 1.37 | 4500 | 0.4774 | 0.7759 | | 0.4761 | 1.53 | 5000 | 0.4710 | 0.7811 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2