| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-31 00:44:42 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (538 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-31 00:42:51 |
| card | string (length) | 11 | 1.01M |
mili7522/Reinforce-CartPole-v1
mili7522
2023-02-11T12:46:23Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-11T12:46:09Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
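The card points to Unit 4 of the course for training code. Purely as an illustration of the algorithm named in the card, here is a minimal REINFORCE sketch for CartPole-v1 in PyTorch; it is not the course's implementation, and it assumes the pre-0.26 gym API where `step()` returns a 4-tuple.

```python
# Minimal REINFORCE sketch (illustrative; not the course's exact code).
# Assumes the pre-0.26 gym API: step() returns (obs, reward, done, info).
import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    state, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(state, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done, _ = env.step(action.item())
        rewards.append(reward)
    # Discounted return G_t for every step, computed backwards over the episode
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()  # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```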
jojoUla/bert-large-cased-sigir-support-refute-no-label-40
jojoUla
2023-02-11T11:59:56Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-11T10:31:43Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-cased-sigir-support-refute-no-label-40 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-sigir-support-refute-no-label-40 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8371 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.4511 | 1.0 | 252 | 2.0790 | | 2.0373 | 2.0 | 504 | 1.8538 | | 1.8052 | 3.0 | 756 | 1.6633 | | 1.6663 | 4.0 | 1008 | 1.5591 | | 1.5556 | 5.0 | 1260 | 1.4441 | | 1.4505 | 6.0 | 1512 | 1.3836 | | 1.3619 | 7.0 | 1764 | 1.3255 | | 1.2968 | 8.0 | 2016 | 1.2505 | | 1.2332 | 9.0 | 2268 | 1.2165 | | 1.1788 | 10.0 | 2520 | 1.1517 | | 1.1408 | 11.0 | 2772 | 1.1446 | | 1.0992 | 12.0 | 3024 | 1.1512 | | 1.0578 | 13.0 | 3276 | 1.1058 | | 1.0277 | 14.0 | 3528 | 1.0662 | | 1.0036 | 15.0 | 3780 | 1.0270 | | 0.9655 | 16.0 | 4032 | 1.0207 | | 0.9364 | 17.0 | 4284 | 1.0220 | | 0.9085 | 18.0 | 4536 | 0.9874 | | 0.8897 | 19.0 | 4788 | 0.9658 | | 0.8661 | 20.0 | 5040 | 0.9603 | | 0.8434 | 21.0 | 5292 | 0.9754 | | 0.8248 | 22.0 | 5544 | 0.9406 | | 0.8052 | 23.0 | 5796 | 0.9154 | | 0.7975 | 24.0 | 6048 | 0.8760 | | 0.7854 | 25.0 | 6300 | 0.8688 | | 0.7673 | 26.0 | 6552 | 0.8536 | | 0.7463 | 27.0 | 6804 | 0.8544 | | 0.7412 | 28.0 | 7056 | 0.8514 | | 0.7319 | 29.0 | 7308 | 0.8356 | | 0.7143 | 30.0 | 7560 | 0.8832 | | 0.7081 | 31.0 | 7812 | 0.8421 | | 0.7026 | 32.0 | 8064 | 0.8295 | | 0.687 | 33.0 | 8316 | 0.8401 | | 0.6882 | 34.0 | 8568 | 0.8053 | | 0.679 | 35.0 | 8820 | 0.8438 | | 0.6672 | 36.0 | 9072 | 0.8450 | | 0.6669 | 37.0 | 9324 | 0.8231 | | 0.6665 | 38.0 | 9576 | 0.8410 | | 0.6596 | 39.0 | 9828 | 0.7909 | | 0.6556 | 40.0 | 10080 | 0.8019 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
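A usage note (not part of the original card): the fine-tuned checkpoint can be queried through the transformers fill-mask pipeline. The example sentence is arbitrary; BERT-style models use the `[MASK]` token.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask",
                     model="jojoUla/bert-large-cased-sigir-support-refute-no-label-40")

# Returns the top candidate fills for the masked position
for pred in fill_mask("The evidence [MASK] the claim."):
    print(pred["token_str"], round(pred["score"], 4))
```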
OliP/a2c-AntBulletEnv-v0
OliP
2023-02-11T11:49:12Z
2
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-11T11:47:54Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1731.38 +/- 167.58 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
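The usage section above is left as a TODO by the author. A plausible completion, assuming the checkpoint follows the usual `<algo>-<env>.zip` naming convention on the Hub (check the repo's file list for the real name):

```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption, not confirmed by the card.
checkpoint = load_from_hub(repo_id="OliP/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```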
Maghrebi/abkhaz
Maghrebi
2023-02-11T11:21:45Z
7
0
transformers
[ "transformers", "t5", "text2text-generation", "art", "ab", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-16T12:53:19Z
--- license: apache-2.0 language: - ab pipeline_tag: text2text-generation tags: - art metrics: - charcut_mt library_name: transformers ---
kubasvehla/distilbert-base-uncased-finetuned-emotion
kubasvehla
2023-02-11T11:18:51Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-11T08:57:13Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9225 - name: F1 type: f1 value: 0.9226248366273136 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2288 - Accuracy: 0.9225 - F1: 0.9226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8577 | 1.0 | 250 | 0.3264 | 0.903 | 0.8992 | | 0.2559 | 2.0 | 500 | 0.2288 | 0.9225 | 0.9226 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
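As a usage sketch (not in the original card), the checkpoint can be called through the text-classification pipeline; the example input is made up, and the exact label strings depend on how the model's config maps ids to names.

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="kubasvehla/distilbert-base-uncased-finetuned-emotion")

# The emotion dataset's labels are sadness/joy/love/anger/fear/surprise
print(classifier("I can't wait to see you again!"))
```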
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_mnli
gokuls
2023-02-11T11:09:27Z
123
0
transformers
[ "transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-03T14:40:39Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: mobilebert_sa_GLUE_Experiment_data_aug_mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.609947111472742 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_data_aug_mnli This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.9046 - Accuracy: 0.6099 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.8429 | 1.0 | 62880 | 0.8755 | 0.6185 | | 0.6713 | 2.0 | 125760 | 0.9512 | 0.6039 | | 0.5387 | 3.0 | 188640 | 1.0796 | 0.5978 | | 0.4297 | 4.0 | 251520 | 1.1877 | 0.5961 | | 0.3405 | 5.0 | 314400 | 1.3154 | 0.5895 | | 0.2693 | 6.0 | 377280 | 1.4320 | 0.5798 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
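A hedged usage sketch for the MNLI checkpoint: natural language inference scores a premise/hypothesis pair, which the text-classification pipeline accepts as a dict. Labels may surface as generic `LABEL_0/1/2` ids unless the config maps them to entailment/neutral/contradiction.

```python
from transformers import pipeline

nli = pipeline("text-classification",
               model="gokuls/mobilebert_sa_GLUE_Experiment_data_aug_mnli")

# A dict with text and text_pair is how the pipeline accepts paired inputs
print(nli({"text": "A man is playing a guitar.",
           "text_pair": "A person is making music."}))
```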
MerlinTK/ppo-Huggy
MerlinTK
2023-02-11T11:04:49Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-11T11:04:39Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: MerlinTK/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ritesh27gole/ppo-LunarLander-v2
ritesh27gole
2023-02-11T10:58:09Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-11T10:57:43Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 275.92 +/- 18.09 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
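As with the A2C card above, the usage section is a TODO. A sketch under the assumption that the checkpoint uses the common `ppo-LunarLander-v2.zip` filename:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is a guess based on the usual push-to-hub naming convention.
checkpoint = load_from_hub(repo_id="ritesh27gole/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```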
taron88/CCCmix
taron88
2023-02-11T10:51:37Z
0
1
null
[ "region:us" ]
null
2023-02-11T09:55:49Z
This is a model made by simply merging publicly available models. Using 7th v3.0 C as the base, I merged in Cinnamonmix and Counterfeit-V2.5: 7th v3.0 C was placed in slot A, with Cinnamon and counterfeit in B and C. As far as I remember, the setting was Weighted sum at 0.5. The goal was to keep 7th C's anime-leaning illustration style while picking up Cinnamon's coloring and atmosphere and counterfeit's background accuracy. https://s3.amazonaws.com/moonup/production/uploads/1676112658952-6315eee0e06cb6c5c424344d.jpeg --- license: other ---
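The card describes a weighted-sum merge done in a GUI. Purely as an illustration of what that operation computes, here is a sketch of a 0.5 weighted sum over two checkpoints' state dicts; the file names are placeholders, and the card's actual three-way merge was performed in the A1111 checkpoint merger, not with this script.

```python
import torch

# Illustrative weighted-sum merge of two Stable Diffusion checkpoints.
# File names are placeholders; the card's merge was done in the A1111 UI.
a = torch.load("7th_v3.0_C.ckpt", map_location="cpu")["state_dict"]
b = torch.load("Cinnamonmix.ckpt", map_location="cpu")["state_dict"]

alpha = 0.5  # the "Weighted sum 0.5" setting mentioned in the card
merged = {k: (1 - alpha) * a[k] + alpha * b[k] for k in a if k in b}

torch.save({"state_dict": merged}, "merged.ckpt")
```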
atorre/poca-SoccerTwos-50M
atorre
2023-02-11T10:49:42Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-11T10:49:28Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: atorre/poca-SoccerTwos-50M 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Sjdan/finetuning12
Sjdan
2023-02-11T10:10:52Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-11T09:01:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: finetuning12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning12 This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00024 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 0.0 | 0.31 | 500 | nan | 1.0 | | 0.0 | 0.61 | 1000 | nan | 1.0 | | 0.0 | 0.92 | 1500 | nan | 1.0 | | 0.0 | 1.23 | 2000 | nan | 1.0 | | 0.0 | 1.54 | 2500 | nan | 1.0 | | 0.0 | 1.84 | 3000 | nan | 1.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
pittawat/a2c-AntBulletEnv-v0
pittawat
2023-02-11T09:47:21Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-11T09:46:03Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1229.25 +/- 82.35 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Seungjun/t5-small-failed
Seungjun
2023-02-11T09:42:57Z
106
1
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-11T04:22:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetuned-t5-Thor4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-t5-Thor4 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5607 - Rouge1: 30.1917 - Rouge2: 17.6334 - Rougel: 26.8513 - Rougelsum: 28.7606 - Gen Len: 18.9881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9251 | 1.0 | 675 | 1.6082 | 29.3372 | 16.9607 | 26.1096 | 27.9357 | 18.9874 | | 1.763 | 2.0 | 1350 | 1.5696 | 30.1869 | 17.5627 | 26.8425 | 28.7413 | 18.9881 | | 1.7139 | 3.0 | 2025 | 1.5607 | 30.1917 | 17.6334 | 26.8513 | 28.7606 | 18.9881 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Rubywong123/q-Taxi-v3
Rubywong123
2023-02-11T08:57:06Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-11T08:56:54Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Rubywong123/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
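The snippet in the card assumes a `load_from_hub` helper defined in the course notebook. A self-contained equivalent, assuming the pickle holds a dict with the Q-table and an `env_id` key (as the card's last line implies):

```python
import pickle

import gym
from huggingface_hub import hf_hub_download

# Download the pickled model from the Hub and load it; the dict layout
# (Q-table plus an "env_id" entry) is an assumption based on the card.
path = hf_hub_download(repo_id="Rubywong123/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
```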
Sjdan/finetuning11
Sjdan
2023-02-11T08:47:00Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-11T08:09:10Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: finetuning11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning11 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00024 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 0.0 | 0.31 | 500 | nan | 1.0 | | 0.0 | 0.61 | 1000 | nan | 1.0 | | 0.0 | 0.92 | 1500 | nan | 1.0 | | 0.0 | 1.23 | 2000 | nan | 1.0 | | 0.0 | 1.54 | 2500 | nan | 1.0 | | 0.0 | 1.84 | 3000 | nan | 1.0 | | 0.0 | 2.15 | 3500 | nan | 1.0 | | 0.0 | 2.46 | 4000 | nan | 1.0 | | 0.0 | 2.77 | 4500 | nan | 1.0 | | 0.0 | 3.07 | 5000 | nan | 1.0 | | 0.0 | 3.38 | 5500 | nan | 1.0 | | 0.0 | 3.69 | 6000 | nan | 1.0 | | 0.0 | 4.0 | 6500 | nan | 1.0 | | 0.0 | 4.3 | 7000 | nan | 1.0 | | 0.0 | 4.61 | 7500 | nan | 1.0 | | 0.0 | 4.92 | 8000 | nan | 1.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
jackshoemaker/bert-finetuned-squad
jackshoemaker
2023-02-11T07:55:29Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-10T23:45:47Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
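A usage sketch (not in the original card): SQuAD-style extractive QA through the transformers question-answering pipeline, with a made-up question/context pair.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="jackshoemaker/bert-finetuned-squad")

result = qa(question="What dataset was the model fine-tuned on?",
            context="This model is a fine-tuned version of bert-base-cased "
                    "on the squad dataset.")
print(result["answer"], round(result["score"], 3))
```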
Patrickrpds/ktspagui
Patrickrpds
2023-02-11T07:09:15Z
10
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-11T06:58:10Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### ktspagui Dreambooth model trained by Patrickrpds with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
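To sample from the DreamBooth concept outside the linked Colabs, something like the following diffusers sketch should work; it assumes the concept's instance token matches the model name, which the card does not confirm.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Patrickrpds/ktspagui",
                                               torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Assumes "ktspagui" is the instance token the concept was trained on
image = pipe("a photo of ktspagui").images[0]
image.save("ktspagui.png")
```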
kaliputra/q-FrozenLake-v1-4x4-noSlippery
kaliputra
2023-02-11T06:44:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-11T06:44:41Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="kaliputra/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
figfig/local_test_model_with_local_dataset
figfig
2023-02-11T06:01:50Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-11T04:34:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: local_test_model_with_local_dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # local_test_model_with_local_dataset This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5566 - Wer: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 10.0 | 10 | 3.4660 | 85.7143 | | No log | 20.0 | 20 | 0.7373 | 10.7143 | | 3.3998 | 30.0 | 30 | 0.5920 | 0.0 | | 3.3998 | 40.0 | 40 | 0.5566 | 0.0 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
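A hedged usage note: the fine-tuned Whisper checkpoint can be called through the transformers ASR pipeline; the audio path below is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="figfig/local_test_model_with_local_dataset")

# "sample.wav" is a placeholder path to a local audio file
print(asr("sample.wav")["text"])
```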
yizhangliu/poca-SoccerTwos-v9
yizhangliu
2023-02-11T05:32:12Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-11T05:32:05Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: yizhangliu/poca-SoccerTwos-v9 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
paulkm/autotrain-lottery_prod_v3-3409393337
paulkm
2023-02-11T05:23:31Z
96
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain", "zh", "dataset:paulkm/autotrain-data-lottery_prod_v3", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-11T05:21:07Z
--- tags: - autotrain - text-classification language: - zh widget: - text: "I love AutoTrain 🤗" datasets: - paulkm/autotrain-data-lottery_prod_v3 co2_eq_emissions: emissions: 3.67386840637788 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 3409393337 - CO2 Emissions (in grams): 3.6739 ## Validation Metrics - Loss: 0.244 - Accuracy: 0.909 - Precision: 0.922 - Recall: 0.875 - AUC: 0.953 - F1: 0.898 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/paulkm/autotrain-lottery_prod_v3-3409393337 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("paulkm/autotrain-lottery_prod_v3-3409393337", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("paulkm/autotrain-lottery_prod_v3-3409393337", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
lancechen/ppo-LunarLander-v2
lancechen
2023-02-11T04:56:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-11T01:27:39Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.71 +/- 15.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AngelUrq/ppo-Huggy
AngelUrq
2023-02-11T04:11:09Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-11T04:10:57Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: AngelUrq/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
5aket/foodia
5aket
2023-02-11T04:02:04Z
11
0
keras
[ "keras", "tf-keras", "image-classification", "en", "dataset:food101", "license:openrail", "region:us" ]
image-classification
2023-02-10T16:52:32Z
--- license: openrail datasets: - food101 language: - en metrics: - accuracy library_name: keras pipeline_tag: image-classification ---
smilingface88/xlm-roberta-base-finetuned-panx-it
smilingface88
2023-02-11T02:32:37Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-11T02:16:12Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8205546492659054 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2467 - F1: 0.8206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7897 | 1.0 | 70 | 0.3096 | 0.7519 | | 0.2819 | 2.0 | 140 | 0.2603 | 0.8093 | | 0.1818 | 3.0 | 210 | 0.2467 | 0.8206 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
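A usage sketch for the Italian PAN-X checkpoint, using the token-classification pipeline with aggregation so word pieces come back merged into entity spans; the example sentence is made up.

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="smilingface88/xlm-roberta-base-finetuned-panx-it",
               aggregation_strategy="simple")  # merge word pieces into spans

for entity in ner("Il Colosseo si trova a Roma."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```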
joe138138/bert-finetuned-squad
joe138138
2023-02-11T02:30:37Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-08T04:58:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
marcowong02/bert-finetuned-squad
marcowong02
2023-02-11T01:42:38Z
103
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-11T00:07:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
smilingface88/xlm-roberta-base-finetuned-panx-de-fr
smilingface88
2023-02-11T01:40:20Z
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-10T23:57:15Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1629 - F1: 0.8584 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2904 | 1.0 | 715 | 0.1823 | 0.8286 | | 0.1446 | 2.0 | 1430 | 0.1626 | 0.8488 | | 0.0941 | 3.0 | 2145 | 0.1629 | 0.8584 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
bmiles/chem-clin-2
bmiles
2023-02-11T00:52:15Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "biology", "chemistry", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-11T00:39:51Z
--- tags: - biology - chemistry ---
gatardochi/ppo-SnowballTarget
gatardochi
2023-02-10T23:46:14Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-02-10T23:46:04Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: gatardochi/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
kmposkid1/dqn-SpaceInvadersNoFrameskip-v4
kmposkid1
2023-02-10T23:30:47Z
7
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T21:46:12Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 407.00 +/- 152.71 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kmposkid1 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kmposkid1 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kmposkid1 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 25000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 10000), ('n_timesteps', 500000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
ahng79/ppo-LunarLander-v2
ahng79
2023-02-10T23:13:30Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T23:12:56Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 267.90 +/- 16.32 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
asuzuki/ppo-Pyramids
asuzuki
2023-02-10T23:06:15Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-02-10T23:02:51Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://singularite.itch.io/pyramids 2. Write your model_id: asuzuki/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
smilingface88/xlm-roberta-base-finetuned-panx-de
smilingface88
2023-02-10T23:03:51Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-10T21:34:53Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8645329998294582 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1355 - F1: 0.8645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2582 | 1.0 | 525 | 0.1612 | 0.8199 | | 0.128 | 2.0 | 1050 | 0.1334 | 0.8484 | | 0.081 | 3.0 | 1575 | 0.1355 | 0.8645 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
petergoldstein/Reinforce-CartPole-v1
petergoldstein
2023-02-10T22:54:18Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T22:54:01Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Nalenczewski/keyword_category_classifier
Nalenczewski
2023-02-10T22:45:28Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-02T19:01:43Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: keyword_category_classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # keyword_category_classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2184 - Accuracy: 0.9333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5646 | 1.0 | 917 | 0.2161 | 0.9298 | | 0.2032 | 2.0 | 1834 | 0.2184 | 0.9333 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Euchale/EuchalesTerribleMergesDump
Euchale
2023-02-10T22:44:29Z
0
1
null
[ "region:us" ]
null
2023-01-15T09:51:00Z
People always ask me, "Hey, can you upload that merge?", so I figured I'd give one central place to upload my merges. Warning: these can be both SFW and NSFW. The names should hopefully be straightforward enough, but I will try to remember to put down the source models in the descriptions of the .ckpt files.
pyf98/tedlium2_transducer_conformer_e12_linear2048
pyf98
2023-02-10T22:29:05Z
1
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:tedlium2", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2023-02-10T22:27:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - tedlium2 license: cc-by-4.0 --- ## ESPnet2 ASR model ### `pyf98/tedlium2_transducer_conformer_e12_linear2048` This model was trained by Yifan Peng using tedlium2 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout e06c0a97425c4d5deb4d3d14922da1f91504052e pip install -e . cd egs2/tedlium2/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_transducer_conformer_e12_linear2048 ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Feb 8 22:07:40 CST 2023` - python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]` - espnet version: `espnet 202301` - pytorch version: `pytorch 1.13.1` - Git hash: `478ba004e114e7862b05fb01112de7f7e1da3996` - Commit date: `Tue Feb 7 00:50:49 2023 +0000` ## asr_train_asr_transducer_conformer_e12_linear2048_raw_en_bpe500_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|14671|93.3|4.5|2.3|1.1|7.8|71.2| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|27500|93.2|4.2|2.6|1.0|7.8|65.6| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|78259|97.0|0.9|2.1|1.0|3.9|71.2| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|145066|96.9|0.9|2.2|0.9|4.0|65.6| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|28296|94.6|3.0|2.4|0.9|6.3|71.2| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|52113|94.8|2.7|2.5|0.9|6.0|65.6| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transducer_conformer_e12_linear2048.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transducer_conformer_e12_linear2048_raw_en_bpe500_sp ngpu: 1 seed: 2022 num_workers: 6 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 37613 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 5 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 10000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe500_sp/train/speech_shape - exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe valid_shape_file: - 
exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - s - ▁the - t - ▁a - ▁and - ▁to - d - e - ▁of - '''' - n - ing - ▁in - ▁i - ▁that - i - a - l - p - m - y - o - ▁it - ▁we - c - u - ▁you - ed - ▁ - r - ▁is - re - ▁this - ar - g - ▁so - al - b - ▁s - or - ▁f - ▁c - in - k - f - ▁for - ic - er - le - ▁be - ▁do - ▁re - ve - ▁e - ▁w - ▁was - es - ▁they - ly - h - ▁on - v - ▁are - ri - ▁have - an - ▁what - ▁with - ▁t - w - ur - it - ent - ▁can - ▁he - ▁but - ra - ce - ▁me - ▁b - ▁ma - ▁p - ll - ▁st - ▁one - 'on' - ▁about - th - ▁de - en - ▁all - ▁not - il - ▁g - ch - at - ▁there - ▁mo - ter - ation - tion - ▁at - ▁my - ro - ▁as - te - ▁le - ▁con - ▁like - ▁people - ▁or - ▁an - el - ▁if - ▁from - ver - ▁su - ▁co - ate - ▁these - ol - ci - ▁now - ▁see - ▁out - ▁our - ion - ▁know - ect - ▁just - as - ▁ex - ▁ch - ▁d - ▁when - ▁very - ▁think - ▁who - ▁because - ▁go - ▁up - ▁us - ▁pa - ▁no - ies - ▁di - ▁ho - om - ive - ▁get - id - ▁o - ▁hi - un - ▁how - ▁by - ir - et - ck - ity - ▁po - ul - ▁which - ▁mi - ▁some - z - ▁sp - ▁un - ▁going - ▁pro - ist - ▁se - ▁look - ▁time - ment - de - ▁more - ▁had - ng - ▁would - ge - la - ▁here - ▁really - x - ▁your - ▁them - us - me - ▁en - ▁two - ▁k - ▁li - ▁world - ne - ow - ▁way - ▁want - ▁work - ▁don - ▁lo - ▁fa - ▁were - ▁their - age - vi - ▁ha - ac - der - est - ▁bo - am - ▁other - able - ▁actually - ▁sh - ▁make - ▁ba - ▁la - ine - ▁into - ▁where - ▁could - ▁comp - ting - ▁has - ▁will - ▁ne - j - ical - ally - ▁vi - ▁things - ▁te - igh - ▁say - ▁years - ers - ▁ra - ther - ▁than - ru - ▁ro - op - ▁did - ▁any - ▁new - ound - ig - ▁well - mo - ▁she - ▁na - ▁been - he - ▁thousand - ▁car - ▁take - ▁right - ▁then - ▁need - ▁start - ▁hundred - ▁something - ▁over - ▁com - ia - ▁kind - um - if - ▁those - ▁first - ▁pre - ta - ▁said - ize - end - ▁even - ▁thing - one - ▁back - ite - ▁every - ▁little - ry - ▁life - ▁much - ke - ▁also - ▁most - ant - per - ▁three - ▁come - ▁lot - ance - ▁got - ▁talk - ▁per - ▁inter - ▁sa - ▁use - ▁mu - ▁part - ish - ence - ▁happen - ▁bi - ▁mean - ough - ▁qu - ▁bu - ▁day - ▁ga - ▁only - ▁many - ▁different - ▁dr - ▁th - ▁show - ful - ▁down - ated - ▁good - ▁tra - ▁around - ▁idea - ▁human - ous - ▁put - ▁through - ▁five - ▁why - ▁change - ▁real - ff - ible - ▁fact - ▁same - ▁jo - ▁live - ▁year - ▁problem - ▁ph - ▁four - ▁give - ▁big - ▁tell - ▁great - ▁try - ▁va - ▁ru - ▁system - ▁six - ▁plan - ▁place - ▁build - ▁called - ▁again - ▁point - ▁twenty - ▁percent - ▁nine - ▁find - ▁app - ▁after - ▁long - ▁eight - ▁imp - ▁gene - ▁design - ▁today - ▁should - ▁made - ious - ▁came - ▁learn - ▁last - ▁own - way - ▁turn - ▁seven - ▁high - ▁question - ▁person - ▁brain - ▁important - ▁another - ▁thought - ▁trans - ▁create - ness - ▁hu - ▁power - ▁act - land - ▁play - 
▁sort - ▁old - ▁before - ▁course - ▁understand - ▁feel - ▁might - ▁each - ▁million - ▁better - ▁together - ▁ago - ▁example - ▁help - ▁story - ▁next - ▁hand - ▁school - ▁water - ▁develop - ▁technology - que - ▁second - ▁grow - ▁still - ▁cell - ▁believe - ▁number - ▁small - ▁between - qui - ▁data - ▁become - ▁america - ▁maybe - ▁space - ▁project - ▁organ - ▁vo - ▁children - ▁book - graph - ▁open - ▁fifty - ▁picture - ▁health - ▁thirty - ▁africa - ▁reason - ▁large - ▁hard - ▁computer - ▁always - ▁sense - ▁money - ▁women - ▁everything - ▁information - ▁country - ▁teach - ▁energy - ▁experience - ▁food - ▁process - qua - ▁interesting - ▁future - ▁science - q - '0' - '5' - '6' - '9' - '3' - '8' - '4' - N - A - '7' - S - G - F - R - L - U - E - T - H - _ - B - D - J - M - ă - ō - ť - '2' - '-' - '1' - C - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: joint_space_size: 320 use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 report_cer: false report_wer: false preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transducer decoder_conf: rnn_type: lstm num_layers: 1 hidden_size: 256 dropout: 0.1 dropout_embed: 0.2 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202301' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Triangles/gpt-Neo_Russell
Triangles
2023-02-10T22:02:47Z
27
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "arxiv:1910.09700", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-12-28T02:40:59Z
--- license: cc-by-nc-sa-4.0 language: - en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This is a gpt_neo (125M) text generation model fine-tuned on a single book: Bertrand Russell's *The Problems of Philosophy* (1912). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed] # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> [More Information Needed] </details>
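Since the "How to Get Started" section above is still a stub, here is a minimal text-generation sketch using the `transformers` pipeline. The model id comes from this repo; the prompt and sampling settings are illustrative assumptions, not values documented in the card.

```python
from transformers import pipeline

# Load the fine-tuned GPT-Neo 125M checkpoint from this repo.
generator = pipeline("text-generation", model="Triangles/gpt-Neo_Russell")

# Prompt and sampling parameters are placeholder choices.
output = generator(
    "The value of philosophy is",
    max_new_tokens=60,
    do_sample=True,
    temperature=0.9,
)
print(output[0]["generated_text"])
```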
pfunk/Pong-v4-DQPN_p50-seed1
pfunk
2023-02-10T21:50:13Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Pong-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T21:49:50Z
--- tags: - Pong-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pong-v4 type: Pong-v4 metrics: - type: mean_reward value: 0.90 +/- 4.37 name: mean_reward verified: false --- # (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p50]" python -m cleanrl_utils.enjoy --exp-name DQPN_p50 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p50 --start-policy-f 50000 --end-policy-f 50000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 50000, 'env_id': 'Pong-v4', 'evaluation_fraction': 1.0, 'exp_name': 'DQPN_p50', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 50000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
StraightFusion/rimuru-tempest
StraightFusion
2023-02-10T21:47:32Z
0
0
null
[ "Rimuru", "Rimuru Tempest", "license:unknown", "region:us" ]
null
2023-02-10T21:45:14Z
--- license: unknown tags: - Rimuru - Rimuru Tempest ---
bonadio/poca-SoccerTwos-v2
bonadio
2023-02-10T21:41:08Z
1
0
ml-agents
[ "ml-agents", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-10T21:40:59Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: bonadio/poca-SoccerTwos-v2 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
yizhangliu/poca-SoccerTwos-v8
yizhangliu
2023-02-10T21:40:57Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-10T21:40:49Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: yizhangliu/poca-SoccerTwos-v8 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
rishabhjain16/whisper_large_to_pf10h
rishabhjain16
2023-02-10T21:38:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-08T15:19:58Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: openai/whisper-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-large This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1412 - Wer: 6.7893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0475 | 2.03 | 500 | 0.1095 | 62.6591 | | 0.0201 | 5.01 | 1000 | 0.1225 | 16.9285 | | 0.0044 | 7.03 | 1500 | 0.1312 | 3.6701 | | 0.0026 | 10.01 | 2000 | 0.1278 | 7.9506 | | 0.0001 | 12.04 | 2500 | 0.1323 | 17.9186 | | 0.0001 | 15.02 | 3000 | 0.1386 | 16.3031 | | 0.0001 | 17.05 | 3500 | 0.1403 | 6.7074 | | 0.0 | 20.02 | 4000 | 0.1412 | 6.7893 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
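The card above carries training details but no inference snippet; a minimal sketch with the `transformers` ASR pipeline follows. The audio path is a placeholder, and the `chunk_length_s` setting is an optional assumption for recordings longer than 30 seconds.

```python
from transformers import pipeline

# Load the fine-tuned Whisper-large checkpoint from this repo.
asr = pipeline(
    "automatic-speech-recognition",
    model="rishabhjain16/whisper_large_to_pf10h",
    chunk_length_s=30,  # assumption: enables chunked inference for long audio
)

# "sample.wav" is a placeholder path to an input audio file.
print(asr("sample.wav")["text"])
```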
Rotyh/platform_tile
Rotyh
2023-02-10T21:16:24Z
11
7
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-22T12:28:35Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion inference: true --- ### assplatform Dreambooth model ![aaa](https://huggingface.co/Rotyh/assplatform/resolve/main/8d7e4a0b-7b59-475e-be07-6eed6fbdfd2d.jpeg) ![bbb](https://huggingface.co/Rotyh/platform_tile/resolve/main/4fb0f749-f70b-4d4a-9584-423f11885855.jpeg) ``` (((assplatform))), style Gardenscapes, tile ``` ``` ((set)),(assplatform), hyper realistic, one style, cinematic, tile, game ```
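For completeness, a minimal `diffusers` loading sketch, assuming the repo loads as a standard `StableDiffusionPipeline` (as its tags indicate); the prompt is taken from the card's own examples, and the dtype/device choices are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: standard Stable Diffusion checkpoint, per the repo's diffusers tags.
pipe = StableDiffusionPipeline.from_pretrained("Rotyh/platform_tile", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Prompt copied from the card's examples.
image = pipe("(((assplatform))), style Gardenscapes, tile").images[0]
image.save("platform_tile_sample.png")
```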
mchalek/distilbert-base-uncased-finetuned-ccnews
mchalek
2023-02-10T21:03:19Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:cc_news", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-10T19:38:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cc_news model-index: - name: distilbert-base-uncased-finetuned-ccnews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ccnews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the cc_news dataset. It achieves the following results on the evaluation set: - Loss: 2.5185 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7553 | 1.0 | 157 | 2.5523 | | 2.6507 | 2.0 | 314 | 2.5219 | | 2.606 | 3.0 | 471 | 2.5416 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.12.1+cu102 - Datasets 2.9.0 - Tokenizers 0.13.2
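Since the card has no usage snippet, here is a minimal fill-mask sketch with the `transformers` pipeline; the example sentence is an illustrative assumption.

```python
from transformers import pipeline

# Load the CC-News-adapted masked language model from this repo.
unmasker = pipeline("fill-mask", model="mchalek/distilbert-base-uncased-finetuned-ccnews")

# Example sentence is illustrative; [MASK] is the DistilBERT mask token.
for pred in unmasker("The stock market [MASK] sharply on Monday."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```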
robinsk8a/a2c-PandaReachDense-v2
robinsk8a
2023-02-10T20:48:12Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T20:45:42Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.92 +/- 0.32 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
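The usage block above is still a TODO; here is a minimal loading sketch with `huggingface_sb3`. The checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption, as is the `panda_gym` import needed to register the environment, and the older gym API where `reset()` returns the observation directly.

```python
import gym
import panda_gym  # assumption: registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub(
    repo_id="robinsk8a/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()  # older gym API assumed
action, _ = model.predict(obs, deterministic=True)
```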
dhairyakapadia/swin-tiny-patch4-window7-224-finetuned-skin-cancer
dhairyakapadia
2023-02-10T20:38:26Z
36
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-10T20:37:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: swin-tiny-patch4-window7-224-finetuned-skin-cancer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-skin-cancer This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
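The card lacks an inference example, so here is a minimal image-classification sketch with the `transformers` pipeline; the image path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned Swin classifier from this repo.
classifier = pipeline(
    "image-classification",
    model="dhairyakapadia/swin-tiny-patch4-window7-224-finetuned-skin-cancer",
)

# "lesion.jpg" is a placeholder path to an input image.
for pred in classifier("lesion.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```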
pfunk/Pong-v4-DQPN_p500_e0.50-seed1
pfunk
2023-02-10T20:29:03Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Pong-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T20:28:36Z
--- tags: - Pong-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pong-v4 type: Pong-v4 metrics: - type: mean_reward value: -1.70 +/- 5.92 name: mean_reward verified: false --- # (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p500_e0.50.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p500_e0.50]" python -m cleanrl_utils.enjoy --exp-name DQPN_p500_e0.50 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.50-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.50-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.50-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p500_e0.50 --start-policy-f 500000 --end-policy-f 1000 --evaluation-fraction 0.50 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 1000, 'env_id': 'Pong-v4', 'evaluation_fraction': 0.5, 'exp_name': 'DQPN_p500_e0.50', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 500000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
ihfaudsip/bert-finetuned-squad
ihfaudsip
2023-02-10T20:22:11Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-10T03:53:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.0+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
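Since the card has no usage snippet, a minimal question-answering sketch with the `transformers` pipeline follows; the question and context are illustrative assumptions.

```python
from transformers import pipeline

# Load the SQuAD-fine-tuned BERT checkpoint from this repo.
qa = pipeline("question-answering", model="ihfaudsip/bert-finetuned-squad")

# Question and context are illustrative examples.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```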
lmqg/flan-t5-small-squad-qag
lmqg
2023-02-10T20:02:23Z
46
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "questions and answers generation", "en", "dataset:lmqg/qag_squad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-10T20:02:05Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qag_squad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/flan-t5-small-squad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_squad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 92.3 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 91.71 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 92.92 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 63.74 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 62.2 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 65.5 --- # Model Card of `lmqg/flan-t5-small-squad-qag` This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) - **Language:** en - **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/flan-t5-small-squad-qag") # model prediction question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/flan-t5-small-squad-qag") output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.3 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedF1Score (MoverScore) | 63.74 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (BERTScore) | 92.92 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedPrecision (MoverScore) | 65.5 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (BERTScore) | 91.71 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | | QAAlignedRecall (MoverScore) | 62.2 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_squad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: ['qag'] - model: google/flan-t5-small - max_length: 512 - max_length_output: 256 - epoch: 14 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-small-squad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
cupertinosam/ppo-LunarLander-v2
cupertinosam
2023-02-10T19:47:35Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T19:47:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 269.71 +/- 20.23 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
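The usage block above is still a TODO; here is a minimal loading sketch with `huggingface_sb3`. The checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption, as is the older gym API where `reset()` returns the observation directly.

```python
import gym  # note: LunarLander-v2 additionally requires gym[box2d]
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the usual <algo>-<env>.zip convention.
checkpoint = load_from_hub(
    repo_id="cupertinosam/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()  # older gym API assumed
action, _ = model.predict(obs, deterministic=True)
```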
MarioLomby/Taxi-v3
MarioLomby
2023-02-10T19:22:47Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T19:22:40Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage (`load_from_hub` below is the helper defined in the Deep RL course notebooks; it downloads and unpickles the saved Q-table and its metadata.) ```python import gym model = load_from_hub(repo_id="MarioLomby/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
kedudzic/roberta-base-cookdial
kedudzic
2023-02-10T19:07:15Z
8
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "endpoints_compatible", "region:us" ]
text-classification
2023-01-22T13:59:45Z
--- language: - en library_name: transformers tags: - text-classification widget: - text: "What ingredients do I need?" --- - Baseline NLU model for the "AMUseBot" cooking taskbot prototype. - ``roberta-base`` model finetuned with default hyperparameters for 10 epochs on intents from the CookDial (https://github.com/YiweiJiang2015/CookDial) dataset with an extra choose_recipe intent added. The ``simpletransformers`` library was used for fine-tuning. - Intent mapping: {"0": "affirm", "1": "choose_recipe", "2": "confirm", "3": "goodbye", "4": "greeting", "5": "negate", "6": "other", "7": "req_amount", "8": "req_duration", "9": "req_ingredient", "10": "req_ingredient_list", "11": "req_ingredient_list_ends", "12": "req_ingredient_list_length", "13": "req_instruction", "14": "req_is_recipe_finished", "15": "req_is_recipe_ongoing", "16": "req_parallel_action", "17": "req_repeat", "18": "req_start", "19": "req_substitute", "20": "req_temperature", "21": "req_title", "22": "req_tool", "23": "req_use_all", "24": "thank"}.
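Since the card lists the intent mapping but no inference snippet, a minimal sketch with the `transformers` pipeline follows. Mapping the returned `LABEL_<id>` names back through the intent table is an assumption about how the checkpoint stores its labels.

```python
from transformers import pipeline

# Intent mapping copied from the card above.
ID2INTENT = {0: "affirm", 1: "choose_recipe", 2: "confirm", 3: "goodbye", 4: "greeting",
             5: "negate", 6: "other", 7: "req_amount", 8: "req_duration", 9: "req_ingredient",
             10: "req_ingredient_list", 11: "req_ingredient_list_ends",
             12: "req_ingredient_list_length", 13: "req_instruction",
             14: "req_is_recipe_finished", 15: "req_is_recipe_ongoing",
             16: "req_parallel_action", 17: "req_repeat", 18: "req_start",
             19: "req_substitute", 20: "req_temperature", 21: "req_title",
             22: "req_tool", 23: "req_use_all", 24: "thank"}

clf = pipeline("text-classification", model="kedudzic/roberta-base-cookdial")
pred = clf("What ingredients do I need?")[0]

# Assumption: the checkpoint returns generic LABEL_<id> label names.
intent_id = int(pred["label"].split("_")[-1])
print(ID2INTENT[intent_id], pred["score"])
```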
fathyshalab/domain_transfer_general-massive_music-roberta-large-v1-5-7
fathyshalab
2023-02-10T19:02:04Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T19:01:37Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_music-roberta-large-v1-5-7 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_music-roberta-large-v1-5-7") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
z4x/ppo-Pyramids
z4x
2023-02-10T18:56:20Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-02-10T18:56:09Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: z4x/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90
fathyshalab
2023-02-10T18:53:31Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T18:53:03Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
MarioLomby/q-FrozenLake-v1-4x4-noSlippery
MarioLomby
2023-02-10T18:51:12Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T18:51:04Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage (`load_from_hub` below is the helper defined in the Deep RL course notebooks; it downloads and unpickles the saved Q-table and its metadata.) ```python import gym model = load_from_hub(repo_id="MarioLomby/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
z4x/ppo-SnowballTarget
z4x
2023-02-10T18:37:55Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-02-10T18:37:43Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: z4x/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73
fathyshalab
2023-02-10T18:36:59Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T18:36:32Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
deprem-ml/Binafarktespit-yolo5x-v1-xview
deprem-ml
2023-02-10T18:23:55Z
0
0
null
[ "object-detection", "computer-vision", "vision", "yolo", "yolov5", "license:gpl-3.0", "region:us" ]
object-detection
2023-02-10T12:38:23Z
--- license: gpl-3.0 inference: false tags: - object-detection - computer-vision - vision - yolo - yolov5 --- ### How to use - Install yolov5: ```bash pip install -U yolov5 ``` - Load model and perform prediction: ```python import yolov5 # load model model = yolov5.load('deprem-ml/Binafarktespit-yolo5x-v1-xview') # set model parameters model.conf = 0.25 # NMS confidence threshold model.iou = 0.45 # NMS IoU threshold model.agnostic = False # NMS class-agnostic model.multi_label = False # NMS multiple labels per box model.max_det = 1000 # maximum number of detections per image # set image img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model(img) # inference with larger input size results = model(img, size=640) # inference with test time augmentation results = model(img, augment=True) # parse results predictions = results.pred[0] boxes = predictions[:, :4] # x1, y1, x2, y2 scores = predictions[:, 4] categories = predictions[:, 5] # show detection bounding boxes on image results.show() # save results into "results/" folder results.save(save_dir='results/') ``` - Finetune the model on your custom dataset: ```bash yolov5 train --img 640 --batch 16 --weights deprem-ml/Binafarktespit-yolo5x-v1-xview --epochs 10 --device cuda:0 ```
henryscheible/roberta-large_stereoset_finetuned
henryscheible
2023-02-10T18:22:32Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:stereoset", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-10T16:42:19Z
--- license: mit tags: - generated_from_trainer datasets: - stereoset metrics: - accuracy model-index: - name: roberta-large_stereoset_finetuned results: - task: name: Text Classification type: text-classification dataset: name: stereoset type: stereoset config: intersentence split: validation args: intersentence metrics: - name: Accuracy type: accuracy value: 0.8335949764521193 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_stereoset_finetuned This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the stereoset dataset. It achieves the following results on the evaluation set: - Loss: 0.7989 - Accuracy: 0.8336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.21 | 5 | 0.6920 | 0.5196 | | No log | 0.42 | 10 | 0.6909 | 0.5290 | | No log | 0.62 | 15 | 0.6899 | 0.5220 | | No log | 0.83 | 20 | 0.6883 | 0.5408 | | No log | 1.04 | 25 | 0.6573 | 0.6609 | | No log | 1.25 | 30 | 0.5892 | 0.7088 | | No log | 1.46 | 35 | 0.6633 | 0.5408 | | No log | 1.67 | 40 | 0.6322 | 0.6852 | | No log | 1.88 | 45 | 0.6393 | 0.7159 | | No log | 2.08 | 50 | 0.5494 | 0.7410 | | No log | 2.29 | 55 | 0.5498 | 0.7386 | | No log | 2.5 | 60 | 0.5069 | 0.7692 | | No log | 2.71 | 65 | 0.4930 | 0.7630 | | No log | 2.92 | 70 | 0.4939 | 0.7614 | | No log | 3.12 | 75 | 0.5379 | 0.7724 | | No log | 3.33 | 80 | 0.5981 | 0.7732 | | No log | 3.54 | 85 | 0.5842 | 0.7716 | | No log | 3.75 | 90 | 0.4405 | 0.8030 | | No log | 3.96 | 95 | 0.4970 | 0.7951 | | No log | 4.17 | 100 | 0.5172 | 0.8093 | | No log | 4.38 | 105 | 0.5052 | 0.8108 | | No log | 4.58 | 110 | 0.4685 | 0.8085 | | No log | 4.79 | 115 | 0.4663 | 0.8218 | | No log | 5.0 | 120 | 0.5086 | 0.8218 | | No log | 5.21 | 125 | 0.5096 | 0.8179 | | No log | 5.42 | 130 | 0.5705 | 0.8203 | | No log | 5.62 | 135 | 0.5294 | 0.8312 | | No log | 5.83 | 140 | 0.4377 | 0.8375 | | No log | 6.04 | 145 | 0.5699 | 0.8100 | | No log | 6.25 | 150 | 0.6062 | 0.8265 | | No log | 6.46 | 155 | 0.7237 | 0.8218 | | No log | 6.67 | 160 | 0.6816 | 0.8210 | | No log | 6.88 | 165 | 0.6413 | 0.8124 | | No log | 7.08 | 170 | 0.5931 | 0.8359 | | No log | 7.29 | 175 | 0.6149 | 0.8399 | | No log | 7.5 | 180 | 0.7190 | 0.8195 | | No log | 7.71 | 185 | 0.7339 | 0.8352 | | No log | 7.92 | 190 | 0.7244 | 0.8352 | | No log | 8.12 | 195 | 0.7722 | 0.8203 | | No log | 8.33 | 200 | 0.6890 | 0.8344 | | No log | 8.54 | 205 | 0.6938 | 0.8336 | | No log | 8.75 | 210 | 0.7234 | 0.8320 | | No log | 8.96 | 215 | 0.7517 | 0.8391 | | No log | 9.17 | 220 | 0.7713 | 0.8383 | | No log | 9.38 | 225 | 0.7745 | 0.8375 | | No log | 9.58 | 230 | 0.8006 | 0.8375 | | No log | 9.79 | 235 | 0.8003 | 0.8367 | | No log | 10.0 | 240 | 0.7989 | 0.8336 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
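No usage snippet accompanies the card, so a minimal `transformers` sketch follows. Because the model was trained on StereoSet's intersentence split, it presumably scores a context/continuation sentence pair, but the exact input format and the meaning of the two labels are assumptions.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="henryscheible/roberta-large_stereoset_finetuned")

# Assumption: the classifier takes a (context, continuation) sentence pair,
# mirroring StereoSet's intersentence task; label semantics are not documented here.
print(clf({"text": "Many people live in Ethiopia.", "text_pair": "The people are very thin."}))
```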
fathyshalab/domain_transfer_general-massive_audio-roberta-large-v1-5-0
fathyshalab
2023-02-10T18:19:32Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T18:19:05Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_audio-roberta-large-v1-5-0 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_audio-roberta-large-v1-5-0") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
ilahazs/rokashibasakiv1
ilahazs
2023-02-10T18:14:15Z
0
0
null
[ "art", "code", "en", "id", "region:us" ]
null
2023-02-10T18:11:00Z
--- language: - en - id tags: - art - code --- Hi. This is a model for Shibasaki Roka from D-Frag. I am still trying to make her look better, stay tuned. Update: 1. 11 February 2023 2. .... 3. .... 4. ....
fathyshalab/domain_transfer_general-massive_general-roberta-large-v1-5-95
fathyshalab
2023-02-10T18:11:02Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T18:10:35Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_general-roberta-large-v1-5-95 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_general-roberta-large-v1-5-95") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
2023-02-10T17:59:27Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "UpNDown-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T17:59:21Z
--- tags: - UpNDown-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: UpNDown-v5 type: UpNDown-v5 metrics: - type: mean_reward value: 364740.00 +/- 7456.33 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **UpNDown-v5** This is a trained model of a PPO agent playing UpNDown-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id UpNDown-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'UpNDown-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
fathyshalab/domain_transfer_general-massive_email-roberta-large-v1-5-38
fathyshalab
2023-02-10T17:52:55Z
4
1
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T17:52:30Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_email-roberta-large-v1-5-38 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_email-roberta-large-v1-5-38") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
fathyshalab/domain_transfer_general-massive_recommendation-roberta-large-v1-5-17
fathyshalab
2023-02-10T17:44:13Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T17:43:46Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_recommendation-roberta-large-v1-5-17 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_recommendation-roberta-large-v1-5-17") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
zuxi/Anterkiar
zuxi
2023-02-10T17:44:08Z
13
4
diffusers
[ "diffusers", "arxiv:1910.09700", "license:openrail", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-06T13:21:38Z
--- license: openrail --- # Model Card for Model ID This model is used for generating images. This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description This is a merged (fusion) model. - **Developed by:** yushui - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses Just put it into the WebUI and it is ready to use. ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
arshandalili/autotrain-news-summarization-3366493102
arshandalili
2023-02-10T17:42:27Z
8
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain", "summarization", "unk", "dataset:arshandalili/autotrain-data-news-summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-02-10T16:59:12Z
--- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - arshandalili/autotrain-data-news-summarization co2_eq_emissions: emissions: 74.35447565387557 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 3366493102 - CO2 Emissions (in grams): 74.3545 ## Validation Metrics - Loss: 1.405 - Rouge1: 0.800 - Rouge2: 0.200 - RougeL: 0.800 - RougeLsum: 0.800 - Gen Len: 47.134 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/arshandalili/autotrain-news-summarization-3366493102 ```
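From Python, a minimal sketch with the `transformers` summarization pipeline should also work (the article text below is a placeholder, not from the card):

```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint as a summarization pipeline
summarizer = pipeline(
    "summarization",
    model="arshandalili/autotrain-news-summarization-3366493102",
)

article = "..."  # replace with the news article you want to summarize
print(summarizer(article)[0]["summary_text"])
```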
henryscheible/roberta-base_stereoset_finetuned
henryscheible
2023-02-10T17:41:25Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:stereoset", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-10T16:32:27Z
--- license: mit tags: - generated_from_trainer datasets: - stereoset metrics: - accuracy model-index: - name: roberta-base_stereoset_finetuned results: - task: name: Text Classification type: text-classification dataset: name: stereoset type: stereoset config: intersentence split: validation args: intersentence metrics: - name: Accuracy type: accuracy value: 0.7904238618524333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_stereoset_finetuned This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the stereoset dataset. It achieves the following results on the evaluation set: - Loss: 0.8461 - Accuracy: 0.7904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.21 | 5 | 0.6915 | 0.5149 | | No log | 0.42 | 10 | 0.6945 | 0.4914 | | No log | 0.62 | 15 | 0.6931 | 0.4945 | | No log | 0.83 | 20 | 0.6814 | 0.5086 | | No log | 1.04 | 25 | 0.6454 | 0.6978 | | No log | 1.25 | 30 | 0.5807 | 0.7088 | | No log | 1.46 | 35 | 0.5620 | 0.7284 | | No log | 1.67 | 40 | 0.5410 | 0.7331 | | No log | 1.88 | 45 | 0.4965 | 0.7630 | | No log | 2.08 | 50 | 0.4924 | 0.7614 | | No log | 2.29 | 55 | 0.4906 | 0.7661 | | No log | 2.5 | 60 | 0.5141 | 0.7661 | | No log | 2.71 | 65 | 0.4826 | 0.7700 | | No log | 2.92 | 70 | 0.4977 | 0.7630 | | No log | 3.12 | 75 | 0.4890 | 0.7802 | | No log | 3.33 | 80 | 0.4819 | 0.7857 | | No log | 3.54 | 85 | 0.4840 | 0.7834 | | No log | 3.75 | 90 | 0.5189 | 0.7794 | | No log | 3.96 | 95 | 0.5000 | 0.7912 | | No log | 4.17 | 100 | 0.4958 | 0.7865 | | No log | 4.38 | 105 | 0.5149 | 0.7896 | | No log | 4.58 | 110 | 0.5515 | 0.7975 | | No log | 4.79 | 115 | 0.5766 | 0.7873 | | No log | 5.0 | 120 | 0.5867 | 0.7873 | | No log | 5.21 | 125 | 0.6143 | 0.7936 | | No log | 5.42 | 130 | 0.6226 | 0.7881 | | No log | 5.62 | 135 | 0.6374 | 0.7865 | | No log | 5.83 | 140 | 0.6405 | 0.7983 | | No log | 6.04 | 145 | 0.6116 | 0.8006 | | No log | 6.25 | 150 | 0.6372 | 0.7983 | | No log | 6.46 | 155 | 0.6804 | 0.7881 | | No log | 6.67 | 160 | 0.7237 | 0.7857 | | No log | 6.88 | 165 | 0.7038 | 0.7904 | | No log | 7.08 | 170 | 0.7100 | 0.7991 | | No log | 7.29 | 175 | 0.6837 | 0.7920 | | No log | 7.5 | 180 | 0.7203 | 0.8046 | | No log | 7.71 | 185 | 0.7478 | 0.7959 | | No log | 7.92 | 190 | 0.7667 | 0.7920 | | No log | 8.12 | 195 | 0.7792 | 0.7959 | | No log | 8.33 | 200 | 0.8014 | 0.7943 | | No log | 8.54 | 205 | 0.8193 | 0.7959 | | No log | 8.75 | 210 | 0.8316 | 0.7967 | | No log | 8.96 | 215 | 0.8411 | 0.7896 | | No log | 9.17 | 220 | 0.8652 | 0.7936 | | No log | 9.38 | 225 | 0.8553 | 0.7841 | | No log | 9.58 | 230 | 0.8458 | 0.7881 | | No log | 9.79 | 235 | 0.8456 | 0.7912 | | No log | 10.0 | 240 | 0.8461 | 0.7904 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
fathyshalab/domain_transfer_general-massive_datetime-roberta-large-v1-5-94
fathyshalab
2023-02-10T17:35:25Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T17:34:57Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_datetime-roberta-large-v1-5-94 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_datetime-roberta-large-v1-5-94") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
akhooli/xlm-r-large-arabic-sent
akhooli
2023-02-10T17:24:49Z
101
8
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "ar", "en", "multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - ar - en - multilingual license: mit --- ### xlm-r-large-arabic-sent Multilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews, obtained by fine-tuning XLM-RoBERTa-Large. The model also supports zero-shot classification in other languages and works on mixed-language input (e.g., Arabic & English). The mixed category is not accurate and may be confused with the other classes (it was derived from reviews rated 3 out of 5). Usage: see the last section of this [Colab notebook](https://lnkd.in/d3bCFyZ)
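As a minimal sketch, the model can also be called through the `transformers` text-classification pipeline (the example sentences are illustrative assumptions; interpret the returned labels with the mapping above):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="akhooli/xlm-r-large-arabic-sent")

# Works on Arabic, English, or mixed-language reviews (zero-shot beyond Arabic)
print(classifier("الخدمة كانت ممتازة والتوصيل سريع"))  # expect a positive label (Label_2)
print(classifier("Terrible product, خيبة أمل كبيرة"))  # expect a negative label (Label_1)
```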
lolo503/elireyes
lolo503
2023-02-10T17:18:04Z
32
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-10T17:07:31Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### elireyes Dreambooth model trained by lolo503 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
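To try the concept with `diffusers` directly, here is a minimal sketch (using `elireyes` as the instance token in the prompt is an assumption based on the concept name):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint in half precision on GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "lolo503/elireyes", torch_dtype=torch.float16
).to("cuda")

image = pipe("photo of elireyes person, portrait, highly detailed").images[0]
image.save("elireyes.png")
```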
fpuentes/bert-galician
fpuentes
2023-02-10T17:17:44Z
105
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "fill-mask", "gl", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-10T13:37:41Z
--- license: apache-2.0 language: - gl library_name: transformers pipeline_tag: fill-mask --- TO BE COMPLETED! A 110M-parameter model, trained and fine-tuned from a pretrained model (GPT2-Spanish) on a 525 MB Galician dataset obtained from the Galician Wikipedia. Developed in the context of the Resolution of 22 December 2021 of the Secretaría Xeral de Educación e Formación Profesional, which awards prizes for the development of technological or scientific innovation projects and didactic innovation projects in vocational training at public schools under the Consellería de Cultura, Educación e Universidade, under the title "Creación dun modelo de linguaxe adestrado previamente mediante técnicas de autoatención para explorar arquitecturas que permitan o seu uso en solucións de procesamento da linguaxe natural en galego tanto na docencia como na contorna empresarial" (Creation of a pre-trained language model using self-attention techniques to explore architectures that enable its use in Galician natural-language-processing solutions, both in teaching and in business settings)
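A minimal fill-mask sketch (the example sentence is an illustrative assumption; as a BERT-style model it should use the standard `[MASK]` token):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="fpuentes/bert-galician")

# Galician example sentence, assumed for illustration
for pred in unmasker("A lingua galega fálase en [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```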
cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
cleanrl
2023-02-10T17:15:01Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Enduro-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-05T22:59:02Z
--- tags: - Enduro-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Enduro-v5 type: Enduro-v5 metrics: - type: mean_reward value: 2299.60 +/- 114.86 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Enduro-v5** This is a trained model of a PPO agent playing Enduro-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Enduro-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Enduro-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
fpuentes/bert-fromscratch-galician-large
fpuentes
2023-02-10T17:11:32Z
30
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-17T09:25:41Z
--- tags: - generated_from_trainer model-index: - name: bert-fromscratch-galician-large results: [] --- ## Model description A model of ~125M parameters, trained and fine-tuned from scratch on a 305 MB Galician dataset obtained from the Galician Wikipedia. Developed in the context of the Resolution of 22 December 2021 of the Secretaría Xeral de Educación e Formación Profesional, which awards prizes for the development of technological or scientific innovation projects and didactic innovation projects in vocational training at public schools under the Consellería de Cultura, Educación e Universidade, under the title "Creación dun modelo de linguaxe adestrado previamente mediante técnicas de autoatención para explorar arquitecturas que permitan o seu uso en solucións de procesamento da linguaxe natural en galego tanto na docencia como na contorna empresarial" (Creation of a pre-trained language model using self-attention techniques to explore architectures that enable its use in Galician natural-language-processing solutions, both in teaching and in business settings) ## Uses and limitations This model was created for teaching and research purposes. ### Training hyperparameters - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.1,0.9) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.6976 | 0.22 | 1500 | 2.2866 | | 2.3057 | 0.43 | 3000 | 1.9276 | | ... | ... | ... | ... | | 1.1982 | 14.25 | 99000 | 1.0601 | | 1.196 | 14.47 | 100500 | 1.0554 | | 1.1971 | 14.69 | 102000 | 1.0538 | | 1.1954 | 14.9 | 103500 | 1.0613 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.1 - Datasets 2.6.1 - Tokenizers 0.11.0
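A minimal fill-mask sketch (this checkpoint is RoBERTa-style, so the mask token should be `<mask>`; the example sentence is an illustrative assumption):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="fpuentes/bert-fromscratch-galician-large")

for pred in unmasker("Santiago de Compostela é a capital de <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```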
thanat/codeparrot-ds
thanat
2023-02-10T17:07:45Z
61
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-02-09T23:37:23Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: thanat/codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # thanat/codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [codeparrot](https://huggingface.co/datasets/huggingface-course/codeparrot-ds-train) dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5316 - Validation Loss: 1.1714 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 520939, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.5316 | 1.1714 | 0 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
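A minimal generation sketch (the prompt is an illustrative assumption; since the repo ships TensorFlow weights, the pipeline needs TensorFlow installed to load them):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="thanat/codeparrot-ds")

# Code-completion style prompt, assumed for illustration
prompt = "# create some toy data\nimport numpy as np\n"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```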
sh0xb0x/ff21images
sh0xb0x
2023-02-10T17:02:48Z
7
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-10T17:01:28Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: ff21images --- ### ff21images Dreambooth model trained by sh0xb0x with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: ff21images (use that on your prompt) ![ff21images 0](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%281%29.jpg)![ff21images 1](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%282%29.jpg)![ff21images 2](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%283%29.jpg)![ff21images 3](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%284%29.jpg)![ff21images 4](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%285%29.jpg)![ff21images 5](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%286%29.jpg)![ff21images 6](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%287%29.jpg)![ff21images 7](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%288%29.jpg)![ff21images 8](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%289%29.jpg)![ff21images 9](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2810%29.jpg)![ff21images 10](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2811%29.jpg)![ff21images 11](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2812%29.jpg)![ff21images 12](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2813%29.jpg)![ff21images 13](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2814%29.jpg)![ff21images 14](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2815%29.jpg)![ff21images 15](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2816%29.jpg)![ff21images 16](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2817%29.jpg)![ff21images 17](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2818%29.jpg)![ff21images 18](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2819%29.jpg)![ff21images 19](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2820%29.jpg)![ff21images 20](https://huggingface.co/sh0xb0x/ff21images/resolve/main/concept_images/ff21images_%2821%29.jpg)
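Alternatively, a minimal `diffusers` sketch (the prompt simply uses the `ff21images` token as instructed above):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint in half precision on GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "sh0xb0x/ff21images", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait in the style of ff21images").images[0]
image.save("ff21images-sample.png")
```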
fathyshalab/domain_transfer_general-massive_social-roberta-large-v1-5-7
fathyshalab
2023-02-10T17:00:37Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-02-10T17:00:17Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_social-roberta-large-v1-5-7 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_social-roberta-large-v1-5-7") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
abigailp/vaccinated
abigailp
2023-02-10T16:59:02Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-10T16:44:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - recall - precision model-index: - name: vaccinated results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vaccinated This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6907 - Accuracy: 0.9036 - F1: 0.9048 - Recall: 0.8636 - Precision: 0.95 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
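A minimal inference sketch (the example sentence is an illustrative assumption; the card does not document the label mapping, so interpret the returned label accordingly):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="abigailp/vaccinated")
print(classifier("I got my second dose of the vaccine last week."))
```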
varevshatyan/ppo-LunarLander-v2
varevshatyan
2023-02-10T16:52:37Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:52:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.73 +/- 13.15 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
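Until the author fills in the snippet above, here is a minimal loading sketch with `huggingface_sb3` (the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption; check the repo's file list):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="varevshatyan/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```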
cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
2023-02-10T16:50:01Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "ChopperCommand-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:49:54Z
--- tags: - ChopperCommand-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ChopperCommand-v5 type: ChopperCommand-v5 metrics: - type: mean_reward value: 38660.00 +/- 32345.67 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **ChopperCommand-v5** This is a trained model of a PPO agent playing ChopperCommand-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id ChopperCommand-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'ChopperCommand-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
pomp/ppo-LunarLander-v2
pomp
2023-02-10T16:48:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:47:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.50 +/- 13.34 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
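Until the author fills in the snippet above, here is a minimal loading sketch with `huggingface_sb3` (the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption; check the repo's file list):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="pomp/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```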
bigscience/bloomz-petals
bigscience
2023-02-10T16:34:22Z
21
12
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-16T06:49:10Z
# BLOOMZ, a version for Petals This model is a version of [bigscience/bloomz](https://huggingface.co/bigscience/bloomz) post-processed to be run at home using the [Petals](https://github.com/bigscience-workshop/petals#readme) swarm. Please check out: - The [original model card](https://huggingface.co/bigscience/bloomz) to learn about the model's capabilities, specifications, and terms of use. - The [Petals repository](https://github.com/bigscience-workshop/petals#readme) to learn how to install Petals and run this model over the Petals swarm. We provide minimal code examples below. ## Using the model ```python from transformers import BloomTokenizerFast from petals import DistributedBloomForCausalLM tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloomz-petals") model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloomz-petals") # Embeddings & prompts are on your device, BLOOM blocks are distributed across the Internet inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"] outputs = model.generate(inputs, max_new_tokens=5) print(tokenizer.decode(outputs[0])) # A cat sat on a mat... ``` ## Serving the model blocks ```bash python -m petals.cli.run_server bigscience/bloomz-petals ```
cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
2023-02-10T16:31:16Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Zaxxon-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:31:12Z
--- tags: - Zaxxon-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Zaxxon-v5 type: Zaxxon-v5 metrics: - type: mean_reward value: 31160.00 +/- 4376.12 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Zaxxon-v5** This is a trained model of a PPO agent playing Zaxxon-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Zaxxon-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Zaxxon-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Zaxxon-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
cleanrl
2023-02-10T16:22:16Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Zaxxon-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-05T23:00:51Z
--- tags: - Zaxxon-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Zaxxon-v5 type: Zaxxon-v5 metrics: - type: mean_reward value: 30280.00 +/- 3305.69 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Zaxxon-v5** This is a trained model of a PPO agent playing Zaxxon-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Zaxxon-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Zaxxon-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Zaxxon-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
2023-02-10T16:21:49Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Zaxxon-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:21:43Z
--- tags: - Zaxxon-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Zaxxon-v5 type: Zaxxon-v5 metrics: - type: mean_reward value: 41460.00 +/- 7284.26 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Zaxxon-v5** This is a trained model of a PPO agent playing Zaxxon-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Zaxxon-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Zaxxon-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Zaxxon-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
2023-02-10T16:17:01Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "VideoPinball-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:16:55Z
--- tags: - VideoPinball-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: VideoPinball-v5 type: VideoPinball-v5 metrics: - type: mean_reward value: 488010.20 +/- 14386.77 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **VideoPinball-v5** This is a trained model of a PPO agent playing VideoPinball-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id VideoPinball-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id VideoPinball-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'VideoPinball-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
2023-02-10T16:16:18Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "VideoPinball-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:16:12Z
--- tags: - VideoPinball-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: VideoPinball-v5 type: VideoPinball-v5 metrics: - type: mean_reward value: 632621.70 +/- 124746.78 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **VideoPinball-v5** This is a trained model of a PPO agent playing VideoPinball-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id VideoPinball-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id VideoPinball-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'VideoPinball-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
mchalek/distilbert-base-uncased-finetuned-imdb
mchalek
2023-02-10T16:12:26Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-10T14:14:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4642 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6835 | 1.0 | 157 | 2.5426 | | 2.5874 | 2.0 | 314 | 2.4668 | | 2.5288 | 3.0 | 471 | 2.4689 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
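A minimal fill-mask sketch (the example sentence is an illustrative assumption):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="mchalek/distilbert-base-uncased-finetuned-imdb")

for pred in unmasker("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```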
cleanrl/YarsRevenge-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
2023-02-10T16:11:50Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "YarsRevenge-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:11:46Z
--- tags: - YarsRevenge-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: YarsRevenge-v5 type: YarsRevenge-v5 metrics: - type: mean_reward value: 127249.00 +/- 16395.25 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **YarsRevenge-v5** This is a trained model of a PPO agent playing YarsRevenge-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id YarsRevenge-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/YarsRevenge-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/YarsRevenge-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/YarsRevenge-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id YarsRevenge-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'YarsRevenge-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
SeNSiTivE/RL-Course-Unit_2-q-FrozenLake-v1-4x4-Slippery
SeNSiTivE
2023-02-10T16:10:18Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:10:09Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: RL-Course-Unit_2-q-FrozenLake-v1-4x4-Slippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.73 +/- 0.44 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="SeNSiTivE/RL-Course-Unit_2-q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
2023-02-10T16:04:57Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "UpNDown-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-09T04:07:31Z
--- tags: - UpNDown-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: UpNDown-v5 type: UpNDown-v5 metrics: - type: mean_reward value: 370396.00 +/- 3505.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **UpNDown-v5** This is a trained model of a PPO agent playing UpNDown-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id UpNDown-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'UpNDown-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/WizardOfWor-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
2023-02-10T16:04:44Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "WizardOfWor-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:04:38Z
--- tags: - WizardOfWor-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: WizardOfWor-v5 type: WizardOfWor-v5 metrics: - type: mean_reward value: 21120.00 +/- 9534.65 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **WizardOfWor-v5** This is a trained model of a PPO agent playing WizardOfWor-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id WizardOfWor-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id WizardOfWor-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'WizardOfWor-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
cleanrl
2023-02-10T16:04:28Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "UpNDown-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:04:21Z
--- tags: - UpNDown-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: UpNDown-v5 type: UpNDown-v5 metrics: - type: mean_reward value: 363445.00 +/- 9342.48 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **UpNDown-v5** This is a trained model of a PPO agent playing UpNDown-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id UpNDown-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'UpNDown-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
2023-02-10T16:01:08Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Venture-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-10T16:01:02Z
---
tags:
- Venture-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Venture-v5
      type: Venture-v5
    metrics:
    - type: mean_reward
      value: 0.00 +/- 0.00
      name: mean_reward
      verified: false
---

# (CleanRL) **PPO** Agent Playing **Venture-v5**

This is a trained model of a PPO agent playing Venture-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).

## Get Started

To use this model, please install the `cleanrl` package with the following command:

```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Venture-v5
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.

## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Venture-v5 --seed 3
```

# Hyperparameters

```python
{'actor_device_ids': [0],
 'anneal_lr': True,
 'async_batch_size': 20,
 'async_update': 3,
 'batch_size': 7680,
 'capture_video': False,
 'clip_coef': 0.1,
 'cuda': True,
 'ent_coef': 0.01,
 'env_id': 'Venture-v5',
 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
 'gae_lambda': 0.95,
 'gamma': 0.99,
 'hf_entity': 'cleanrl',
 'learner_device_ids': [1, 2, 3, 4, 5, 6],
 'learning_rate': 0.00025,
 'max_grad_norm': 0.5,
 'minibatch_size': 1920,
 'norm_adv': True,
 'num_actor_threads': 1,
 'num_envs': 60,
 'num_minibatches': 4,
 'num_steps': 128,
 'num_updates': 6510,
 'profile': False,
 'save_model': True,
 'seed': 3,
 'target_kl': None,
 'test_actor_learner_throughput': False,
 'torch_deterministic': True,
 'total_timesteps': 50000000,
 'track': True,
 'update_epochs': 4,
 'upload_model': True,
 'vf_coef': 0.5,
 'wandb_entity': None,
 'wandb_project_name': 'cleanRL'}
```
joelniklaus/legal-irish-roberta-base
joelniklaus
2023-02-10T15:53:23Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-06T02:37:27Z
---
tags:
- generated_from_trainer
model-index:
- name: legal-irish-roberta-base
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer
had access to. You should probably proofread and complete it, then remove this comment. -->

# legal-irish-roberta-base

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7328

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 200000

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.5892        | 228.0 | 50000  | 0.7659          |
| 0.4497        | 456.0 | 100000 | 0.7421          |
| 0.3906        | 684.0 | 150000 | 0.7443          |
| 0.3906        | 913.0 | 200000 | 0.7328          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.9.0
- Tokenizers 0.12.0
cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
cleanrl
2023-02-10T15:53:07Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Venture-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-05T22:58:00Z
---
tags:
- Venture-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Venture-v5
      type: Venture-v5
    metrics:
    - type: mean_reward
      value: 0.00 +/- 0.00
      name: mean_reward
      verified: false
---

# (CleanRL) **PPO** Agent Playing **Venture-v5**

This is a trained model of a PPO agent playing Venture-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).

## Get Started

To use this model, please install the `cleanrl` package with the following command:

```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Venture-v5
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.

## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Venture-v5 --seed 1
```

# Hyperparameters

```python
{'actor_device_ids': [0],
 'anneal_lr': True,
 'async_batch_size': 20,
 'async_update': 3,
 'batch_size': 7680,
 'capture_video': False,
 'clip_coef': 0.1,
 'cuda': True,
 'ent_coef': 0.01,
 'env_id': 'Venture-v5',
 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
 'gae_lambda': 0.95,
 'gamma': 0.99,
 'hf_entity': 'cleanrl',
 'learner_device_ids': [1, 2, 3, 4, 5, 6],
 'learning_rate': 0.00025,
 'max_grad_norm': 0.5,
 'minibatch_size': 1920,
 'norm_adv': True,
 'num_actor_threads': 1,
 'num_envs': 60,
 'num_minibatches': 4,
 'num_steps': 128,
 'num_updates': 6510,
 'profile': False,
 'save_model': True,
 'seed': 1,
 'target_kl': None,
 'test_actor_learner_throughput': False,
 'torch_deterministic': True,
 'total_timesteps': 50000000,
 'track': True,
 'update_epochs': 4,
 'upload_model': True,
 'vf_coef': 0.5,
 'wandb_entity': None,
 'wandb_project_name': 'cleanRL'}
```
LarryAIDraw/aliceNikke_v10
LarryAIDraw
2023-02-10T15:41:35Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-10T15:39:41Z
---
license: creativeml-openrail-m
---