repo_id
stringlengths 4
122
| author
stringlengths 2
38
⌀ | model_type
stringlengths 2
33
⌀ | files_per_repo
int64 2
39k
| downloads_30d
int64 0
33.7M
| library
stringlengths 2
37
⌀ | likes
int64 0
4.87k
| pipeline
stringlengths 5
30
⌀ | pytorch
bool 2
classes | tensorflow
bool 2
classes | jax
bool 2
classes | license
stringlengths 2
33
⌀ | languages
stringlengths 2
1.63k
⌀ | datasets
stringlengths 2
2.58k
⌀ | co2
stringlengths 6
258
⌀ | prs_count
int64 0
125
| prs_open
int64 0
120
| prs_merged
int64 0
46
| prs_closed
int64 0
34
| discussions_count
int64 0
218
| discussions_open
int64 0
148
| discussions_closed
int64 0
70
| tags
stringlengths 2
513
| has_model_index
bool 2
classes | has_metadata
bool 2
classes | has_text
bool 1
class | text_length
int64 201
598k
| readme
stringlengths 0
598k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cleanrl/Assault-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Assault-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Assault-v5**
This is a trained model of a PPO agent playing Assault-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Assault-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Assault-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Assault-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Krull-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Krull-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,144 |
# (CleanRL) **PPO** Agent Playing **Krull-v5**
This is a trained model of a PPO agent playing Krull-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Krull-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Krull-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Krull-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Defender-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Defender-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **Defender-v5**
This is a trained model of a PPO agent playing Defender-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Defender-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Defender-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Defender-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Pitfall-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pitfall-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Pitfall-v5**
This is a trained model of a PPO agent playing Pitfall-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Pitfall-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pitfall-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Pitfall-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pitfall-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Pitfall-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Pitfall-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/CrazyClimber-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CrazyClimber-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,200 |
# (CleanRL) **PPO** Agent Playing **CrazyClimber-v5**
This is a trained model of a PPO agent playing CrazyClimber-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id CrazyClimber-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id CrazyClimber-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'CrazyClimber-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Freeway-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Freeway-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Qbert-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Qbert-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,144 |
# (CleanRL) **PPO** Agent Playing **Qbert-v5**
This is a trained model of a PPO agent playing Qbert-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Qbert-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Qbert-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Qbert-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Qbert-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Qbert-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Qbert-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/MontezumaRevenge-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['MontezumaRevenge-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,232 |
# (CleanRL) **PPO** Agent Playing **MontezumaRevenge-v5**
This is a trained model of a PPO agent playing MontezumaRevenge-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id MontezumaRevenge-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id MontezumaRevenge-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'MontezumaRevenge-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Amidar-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Amidar-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,152 |
# (CleanRL) **PPO** Agent Playing **Amidar-v5**
This is a trained model of a PPO agent playing Amidar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Amidar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Amidar-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Amidar-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/NameThisGame-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['NameThisGame-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,200 |
# (CleanRL) **PPO** Agent Playing **NameThisGame-v5**
This is a trained model of a PPO agent playing NameThisGame-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id NameThisGame-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id NameThisGame-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'NameThisGame-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/RoadRunner-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['RoadRunner-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,184 |
# (CleanRL) **PPO** Agent Playing **RoadRunner-v5**
This is a trained model of a PPO agent playing RoadRunner-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id RoadRunner-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id RoadRunner-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'RoadRunner-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Zaxxon-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Zaxxon-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,152 |
# (CleanRL) **PPO** Agent Playing **Zaxxon-v5**
This is a trained model of a PPO agent playing Zaxxon-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Zaxxon-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Zaxxon-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Zaxxon-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Zaxxon-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Berzerk-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Berzerk-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/MsPacman-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['MsPacman-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **MsPacman-v5**
This is a trained model of a PPO agent playing MsPacman-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id MsPacman-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id MsPacman-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'MsPacman-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Bowling-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Bowling-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Bowling-v5**
This is a trained model of a PPO agent playing Bowling-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Bowling-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Bowling-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Bowling-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/KungFuMaster-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['KungFuMaster-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,200 |
# (CleanRL) **PPO** Agent Playing **KungFuMaster-v5**
This is a trained model of a PPO agent playing KungFuMaster-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id KungFuMaster-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'KungFuMaster-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['BeamRider-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **BeamRider-v5**
This is a trained model of a PPO agent playing BeamRider-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id BeamRider-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'BeamRider-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Breakout-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Breakout-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **Breakout-v5**
This is a trained model of a PPO agent playing Breakout-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Breakout-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Breakout-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Breakout-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Venture-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Venture-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Venture-v5**
This is a trained model of a PPO agent playing Venture-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Venture-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Venture-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Venture-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Skiing-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Skiing-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,152 |
# (CleanRL) **PPO** Agent Playing **Skiing-v5**
This is a trained model of a PPO agent playing Skiing-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Skiing-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Skiing-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Skiing-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Skiing-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Skiing-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Skiing-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/StarGunner-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['StarGunner-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,184 |
# (CleanRL) **PPO** Agent Playing **StarGunner-v5**
This is a trained model of a PPO agent playing StarGunner-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id StarGunner-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id StarGunner-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'StarGunner-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/TimePilot-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['TimePilot-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **TimePilot-v5**
This is a trained model of a PPO agent playing TimePilot-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id TimePilot-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/TimePilot-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/TimePilot-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/TimePilot-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id TimePilot-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'TimePilot-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/IceHockey-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['IceHockey-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **IceHockey-v5**
This is a trained model of a PPO agent playing IceHockey-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id IceHockey-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id IceHockey-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'IceHockey-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Phoenix-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Phoenix-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Phoenix-v5**
This is a trained model of a PPO agent playing Phoenix-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Phoenix-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Phoenix-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Phoenix-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Phoenix-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Phoenix-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Phoenix-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sd-concepts-library/kamon-style
|
sd-concepts-library
| null | 494 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,157 |
### kamon style on Stable Diffusion
This is the `<kamon-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
cleanrl/Seaquest-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Seaquest-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **Seaquest-v5**
This is a trained model of a PPO agent playing Seaquest-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Seaquest-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Seaquest-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Seaquest-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Seaquest-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Seaquest-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Seaquest-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Robotank-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Robotank-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **Robotank-v5**
This is a trained model of a PPO agent playing Robotank-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Robotank-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Robotank-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Robotank-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Robotank-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Robotank-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Robotank-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Hero-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Hero-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,136 |
# (CleanRL) **PPO** Agent Playing **Hero-v5**
This is a trained model of a PPO agent playing Hero-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Hero-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Hero-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Hero-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Gopher-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Gopher-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,152 |
# (CleanRL) **PPO** Agent Playing **Gopher-v5**
This is a trained model of a PPO agent playing Gopher-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Gopher-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Gopher-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Gopher-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Surround-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Surround-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **Surround-v5**
This is a trained model of a PPO agent playing Surround-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Surround-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Surround-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Surround-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Surround-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Surround-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Surround-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Asterix-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Asterix-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Asterix-v5**
This is a trained model of a PPO agent playing Asterix-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Asterix-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Asterix-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Asterix-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Asterix-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Asterix-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Asterix-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
mitra-mir/setfit-model-Ireland_4labels_unbalanced_data
|
mitra-mir
|
mpnet
| 13 | 5 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
| false | true | true | 2,138 |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 941 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 941,
"warmup_steps": 95,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
cleanrl/SpaceInvaders-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvaders-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,208 |
# (CleanRL) **PPO** Agent Playing **SpaceInvaders-v5**
This is a trained model of a PPO agent playing SpaceInvaders-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id SpaceInvaders-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id SpaceInvaders-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'SpaceInvaders-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
nachshonc/poca-SoccerTwos
|
nachshonc
| null | 22 | 630 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 843 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: nachshonc/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cleanrl/Tennis-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Tennis-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,152 |
# (CleanRL) **PPO** Agent Playing **Tennis-v5**
This is a trained model of a PPO agent playing Tennis-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Tennis-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Tennis-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Tennis-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
pfunk/Pong-v4-DQPN_p2_e0.50-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,979 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p2_e0.50.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p2_e0.50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p2_e0.50 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.50-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.50-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.50-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p2_e0.50 --start-policy-f 2000 --end-policy-f 1000 --evaluation-fraction 0.50 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.5,
'exp_name': 'DQPN_p2_e0.50',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 2000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
habanoz/poca-SoccerTwos
|
habanoz
| null | 20 | 624 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: habanoz/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HuyenNguyen/TTS0123
|
HuyenNguyen
|
whisper
| 16 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,254 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TTS0123
This model is a fine-tuned version of [HuyenNguyen/FPT_Viettel](https://huggingface.co/HuyenNguyen/FPT_Viettel) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0923
- eval_wer: 5.0598
- eval_runtime: 2394.8178
- eval_samples_per_second: 0.835
- eval_steps_per_second: 0.052
- epoch: 2.93
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQN_baseline-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,761 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_baseline]"
python -m cleanrl_utils.enjoy --exp-name DQN_baseline --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQN_baseline-seed1/raw/main/dqn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQN_baseline-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQN_baseline-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn_atari.py --exp-name DQN_baseline --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'env_id': 'Pong-v4',
'exp_name': 'DQN_baseline',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 1000,
'tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
cleanrl/VideoPinball-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['VideoPinball-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,200 |
# (CleanRL) **PPO** Agent Playing **VideoPinball-v5**
This is a trained model of a PPO agent playing VideoPinball-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id VideoPinball-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/VideoPinball-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id VideoPinball-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'VideoPinball-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
racro/sentiment-analysis-browser-extension
|
racro
|
distilbert
| 45 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,054 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis-browser-extension
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4233
- Accuracy: 0.8539
- F1: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
austinmw/ppo-LunarLander-v2
|
austinmw
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
eldraco/dqn-SpaceInvadersNoFrameskip-v4-v3
|
eldraco
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,215 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eldraco -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eldraco -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga eldraco
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 200000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
mitra-mir/setfit-model-Ireland_3labels_balanced_data
|
mitra-mir
|
mpnet
| 13 | 7 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
| false | true | true | 2,135 |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 53,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
cleanrl/DemonAttack-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['DemonAttack-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,192 |
# (CleanRL) **PPO** Agent Playing **DemonAttack-v5**
This is a trained model of a PPO agent playing DemonAttack-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id DemonAttack-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'DemonAttack-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
esoria3/clasificador-amazonreviews-en
|
esoria3
|
distilbert
| 10 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,389 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-amazonreviews-en
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2642
- Accuracy: 0.516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2472 | 1.0 | 500 | 1.1511 | 0.463 |
| 0.9416 | 2.0 | 1000 | 1.1698 | 0.502 |
| 0.7039 | 3.0 | 1500 | 1.2642 | 0.516 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sd-dreambooth-library/tame
|
sd-dreambooth-library
| null | 19 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 423 |
### tame Dreambooth model trained by valentinaw1sa4ajh with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
SRKConsulting/ppo-Huggy
|
SRKConsulting
| null | 32 | 3 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 824 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: SRKConsulting/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Buseak/model_from_berturk_Feb_5_TrainTestSplit
|
Buseak
|
bert
| 12 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,669 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_from_berturk_Feb_5_TrainTestSplit
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3125
- Precision: 0.9120
- Recall: 0.9126
- F1: 0.9123
- Accuracy: 0.9376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 185 | 0.2333 | 0.9065 | 0.9066 | 0.9066 | 0.9343 |
| No log | 2.0 | 370 | 0.2115 | 0.9122 | 0.9143 | 0.9133 | 0.9389 |
| 0.3861 | 3.0 | 555 | 0.2049 | 0.9185 | 0.9175 | 0.9180 | 0.9423 |
| 0.3861 | 4.0 | 740 | 0.2073 | 0.9183 | 0.9185 | 0.9184 | 0.9420 |
| 0.3861 | 5.0 | 925 | 0.2174 | 0.9150 | 0.9155 | 0.9153 | 0.9397 |
| 0.1487 | 6.0 | 1110 | 0.2227 | 0.9177 | 0.9185 | 0.9181 | 0.9415 |
| 0.1487 | 7.0 | 1295 | 0.2399 | 0.9149 | 0.9160 | 0.9155 | 0.9396 |
| 0.1487 | 8.0 | 1480 | 0.2504 | 0.9158 | 0.9163 | 0.9160 | 0.9400 |
| 0.0942 | 9.0 | 1665 | 0.2692 | 0.9141 | 0.9152 | 0.9146 | 0.9392 |
| 0.0942 | 10.0 | 1850 | 0.2782 | 0.9130 | 0.9153 | 0.9141 | 0.9388 |
| 0.0589 | 11.0 | 2035 | 0.2908 | 0.9131 | 0.9144 | 0.9138 | 0.9388 |
| 0.0589 | 12.0 | 2220 | 0.2940 | 0.9121 | 0.9136 | 0.9128 | 0.9377 |
| 0.0589 | 13.0 | 2405 | 0.3068 | 0.9117 | 0.9130 | 0.9123 | 0.9376 |
| 0.0407 | 14.0 | 2590 | 0.3107 | 0.9132 | 0.9148 | 0.9140 | 0.9387 |
| 0.0407 | 15.0 | 2775 | 0.3125 | 0.9120 | 0.9126 | 0.9123 | 0.9376 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
huggingtweets/f3ralfluid
|
huggingtweets
|
gpt2
| 11 | 0 |
transformers
| 0 |
text-generation
| true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['huggingtweets']
| false | true | true | 3,313 |
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590925174068711428/4PWe_NrY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">gross</div>
<div style="text-align: center; font-size: 14px;">@f3ralfluid</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from gross.
| Data | gross |
| --- | --- |
| Tweets downloaded | 236 |
| Retweets | 28 |
| Short tweets | 66 |
| Tweets kept | 142 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kjdh98mi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @f3ralfluid's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/d3ukvm2v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/d3ukvm2v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/f3ralfluid')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd99/poca-SoccerTwos
|
sd99
| null | 22 | 612 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 838 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: sd99/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ScrappyCoco666/a2c-PandaReachDense-v2-3
|
ScrappyCoco666
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ScrappyCoco666/a2c-PandaReachDense-v2-4
|
ScrappyCoco666
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dptrsa/ec_model
|
dptrsa
|
roberta
| 20 | 53 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,235 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 497 | 1.1985 |
| 1.578 | 2.0 | 994 | 1.0032 |
| 1.187 | 3.0 | 1491 | 0.9479 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
adielsa/swin-tiny-patch4-window7-224-finetuned-eurosat
|
adielsa
|
swin
| 14 | 0 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,492 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1627
- Accuracy: 0.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2486 | 0.98 | 36 | 0.2120 | 0.9100 |
| 0.1844 | 1.98 | 72 | 0.3417 | 0.8563 |
| 0.1646 | 2.98 | 108 | 0.1627 | 0.9464 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mantury/q-FrozenLake-v1-4x4-noSlippery
|
mantury
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 396 |
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mantury/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mantury/taxi-v3
|
mantury
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 361 |
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mantury/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mantury/q-taxi-v3
|
mantury
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 363 |
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mantury/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
z4x/Reinforce-CartPole
|
z4x
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
austinmw/ppo-Huggy
|
austinmw
| null | 32 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 819 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: austinmw/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
moschouChry/ppo-LunarLander-v2
|
moschouChry
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
NickKolok/ari-20230205-2130-dlpr2-4800-steps_1
|
NickKolok
| null | 16 | 17 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 8,548 |
### Ari_20230205_2130_DLPR2_4800_steps on Stable Diffusion via Dreambooth
#### model by NickKolok
This your the Stable Diffusion model fine-tuned the Ari_20230205_2130_DLPR2_4800_steps concept taught to Stable Diffusion with Dreambooth.
#It can be used by modifying the `instance_prompt`: **ari**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




























































|
jrauch4/a2c-AntBulletEnv-v0
|
jrauch4
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
adielsa/vit-base-patch16-224-finetuned-chest
|
adielsa
|
vit
| 24 | 5 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,465 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-chest
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0318
- Accuracy: 0.9900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0947 | 0.98 | 36 | 0.0785 | 0.9732 |
| 0.048 | 1.98 | 72 | 0.0678 | 0.9732 |
| 0.0352 | 2.98 | 108 | 0.0329 | 0.9887 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jmallioras/ppo-LunarLander-v2
|
jmallioras
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
z4x/Reinforce-Pixelcopter
|
z4x
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SRobbins/dqn-SpaceInvadersNoFrameskip-v4
|
SRobbins
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,215 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SRobbins -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SRobbins -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SRobbins
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
marmolpen3/bert-finetuned-sla
|
marmolpen3
|
bert
| 27 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,823 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sla
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3274
- F1: 0.6555
- Roc Auc: 0.7660
- Accuracy: 0.5294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 30 | 0.4994 | 0.0 | 0.5 | 0.0 |
| No log | 2.0 | 60 | 0.4408 | 0.0 | 0.5 | 0.0 |
| No log | 3.0 | 90 | 0.3761 | 0.4444 | 0.6462 | 0.1961 |
| No log | 4.0 | 120 | 0.3438 | 0.6496 | 0.7604 | 0.4706 |
| No log | 5.0 | 150 | 0.3274 | 0.6555 | 0.7660 | 0.5294 |
| No log | 6.0 | 180 | 0.3093 | 0.6557 | 0.7699 | 0.4706 |
| No log | 7.0 | 210 | 0.3083 | 0.6560 | 0.7738 | 0.5098 |
| No log | 8.0 | 240 | 0.3030 | 0.6457 | 0.7703 | 0.4706 |
| No log | 9.0 | 270 | 0.3096 | 0.6667 | 0.7811 | 0.4902 |
| No log | 10.0 | 300 | 0.2976 | 0.6718 | 0.7907 | 0.5098 |
| No log | 11.0 | 330 | 0.2986 | 0.6769 | 0.7924 | 0.5294 |
| No log | 12.0 | 360 | 0.3046 | 0.6562 | 0.7777 | 0.5098 |
| No log | 13.0 | 390 | 0.2988 | 0.6870 | 0.7997 | 0.4902 |
| No log | 14.0 | 420 | 0.3026 | 0.6769 | 0.7924 | 0.5098 |
| No log | 15.0 | 450 | 0.3005 | 0.6870 | 0.7997 | 0.5098 |
| No log | 16.0 | 480 | 0.3012 | 0.6822 | 0.7941 | 0.5098 |
| 0.2216 | 17.0 | 510 | 0.3013 | 0.6977 | 0.8032 | 0.5294 |
| 0.2216 | 18.0 | 540 | 0.3033 | 0.6977 | 0.8032 | 0.5294 |
| 0.2216 | 19.0 | 570 | 0.3024 | 0.6977 | 0.8032 | 0.5294 |
| 0.2216 | 20.0 | 600 | 0.3027 | 0.6923 | 0.8015 | 0.5098 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kmposkid1/ppo-LunarLander-v2
|
kmposkid1
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Pearson/q-FrozenLake-v1-4x4-noSlippery
|
Pearson
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 396 |
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Pearson/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
coreml/coreml-vintedois-diffusion
|
coreml
| null | 4 | 0 | null | 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coreml', 'stable-diffusion', 'text-to-image']
| false | true | true | 4,871 |
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# Vintedois (22h) Diffusion:
Source(s): [Hugging Face](https://huggingface.co/22h/vintedois-diffusion-v0-1) - [CivitAI](https://civitai.com/models/2781/vintedois-diffusion-v0-1)
### Vintedois (22h) Diffusion model trained by [Predogl](https://twitter.com/Predogl) and [piEsposito](https://twitter.com/piesposi_to) with open weights, configs and prompts (as it should be)
This model was trained on a large amount of high quality images with simple prompts to generate beautiful images without a lot of prompt engineering.
You can enforce style by prepending your prompt with `estilovintedois` if it is not good enough.
It should also be very dreamboothable, being able to generate high fidelity faces with a little amount of steps.
**You can use this model commercially or whatever, but we are not liable if you do messed up stuff with it.**
### Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run vintedois-diffusion-v0-1 :
[](https://huggingface.co/spaces/22h/vintedois-diffusion-v0-1)
### Model card
Everything from [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), plus the fact that this is being built by two indie devs, so it was not extensively tested for new biases.
You can run this concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
### Sample results
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/joined.png" width=1024/>
### Example prompts
- Prompt: photo of an old man in a jungle, looking at the camera
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-photo%20of%20an%20old%20man%20in%20a%20jungle%2C%20looking%20at%C2%A0the%C2%A0camera.png" width=512/>
- Prompt: kneeling cat knight, portrait, finely detailed armor, intricate design, silver, silk, cinematic lighting, 4k
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-kneeling%20cat%20knight%2C%20portrait%2C%20finely%20detailed%20armor%2C%20intricate%20design%2C%20silver%2C%20silk%2C%20cinematic%20lighting%2C%204k.png" width=512/>
- Prompt: a beautiful girl In front of the cabin, the country, by Artgerm Lau and Krenz Cushart,hyperdetailed, trending on artstation, trending on deviantart
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-a%20beautiful%20girl%20In%20front%20of%20the%20cabin%2C%20the%20country%2C%20by%20Artgerm%20Lau%20and%20Krenz%20Cushart%EF%BC%8Chyperdetailed%2C%20trending%20on%20artstation%2C%20tre.png" width=512/>
- Prompt: destroyed city
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-destroyed%20city.png" width=512/>
- Prompt: victorian city landscape
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-victorian%20city%20landscape.png" width=512/>
- Prompt: prehistoric native living room
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-prehistoric%20native%20living%20room.png" width=512/>
Thanks for the Google Developer Expert program for providing us with a GCP credits grant.
|
FBM/poca-SoccerTwos
|
FBM
| null | 20 | 601 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 837 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: FBM/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hungtrv/distilbert-base-uncased-finetuned-emotion
|
hungtrv
|
distilbert
| 14 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- Accuracy: 0.9365
- F1: 0.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1757 | 1.0 | 250 | 0.1762 | 0.928 | 0.9282 |
| 0.1096 | 2.0 | 500 | 0.1658 | 0.9365 | 0.9368 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ftassara/ppo-LunarLander-v2
|
Ftassara
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
anishchada12/distilgpt2-finetuned-PanoAI2
|
anishchada12
|
gpt2
| 12 | 3 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,235 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-PanoAI2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 4.2481 |
| No log | 2.0 | 4 | 4.1813 |
| No log | 3.0 | 6 | 4.1537 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
hatemestinbejaia/MARBERT-adept
|
hatemestinbejaia
|
bert
| 13 | 7 |
transformers
| 0 |
fill-mask
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,057 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MARBERT-adept
This model is a fine-tuned version of [UBC-NLP/MARBERT](https://huggingface.co/UBC-NLP/MARBERT) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.6149
- eval_runtime: 323.5555
- eval_samples_per_second: 34.615
- eval_steps_per_second: 4.327
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
UtopiansRareTruth/ppo-SnowballTarget
|
UtopiansRareTruth
| null | 20 | 2 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 864 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: UtopiansRareTruth/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jrauch4/a2c-PandaReachDense-v2
|
jrauch4
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
muhtasham/santacoder-finetuned-the-stack-assembly
|
muhtasham
|
gpt2
| 17 | 0 |
transformers
| 1 |
text-generation
| true | false | false |
openrail
|
['code']
|
['bigcode/the-stack-dedup']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'code', 'codegen', 'assembly']
| true | true | true | 2,726 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-assembly
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an on The Stack [assembly](https://huggingface.co/datasets/bigcode/the-stack-dedup) dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7423
- eval_runtime: 14042.2321
- eval_samples_per_second: 6.116
- eval_steps_per_second: 3.058
- epoch: 0.3
- step: 1500
## Model description
The [SantaCoder](https://huggingface.co/bigcode/santacoder) models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
In addition, there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
## Intended uses & limitations
The predominant language in source is English although other languages are also present. As such the model is capable to generate code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits.
## Training and evaluation data
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as other from code snippets. **This is the near-deduplicated version with 3TB data.**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
huggingtweets/aygo__
|
huggingtweets
|
gpt2
| 11 | 1 |
transformers
| 0 |
text-generation
| true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['huggingtweets']
| false | true | true | 3,300 |
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1621655536767827976/vu1Kjv3P_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Igor</div>
<div style="text-align: center; font-size: 14px;">@aygo__</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Igor.
| Data | Igor |
| --- | --- |
| Tweets downloaded | 599 |
| Retweets | 309 |
| Short tweets | 107 |
| Tweets kept | 183 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/lgj439wu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aygo__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9c88kjcx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9c88kjcx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aygo__')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Galiess/Reinforce-CartPole8
|
Galiess
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
huggingtweets/ahmadaldujayli
|
huggingtweets
|
gpt2
| 11 | 0 |
transformers
| 0 |
text-generation
| true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['huggingtweets']
| false | true | true | 3,337 |
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468356727447986179/dBXjtgNb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ahmad H.</div>
<div style="text-align: center; font-size: 14px;">@ahmadaldujayli</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ahmad H..
| Data | Ahmad H. |
| --- | --- |
| Tweets downloaded | 1223 |
| Retweets | 403 |
| Short tweets | 126 |
| Tweets kept | 694 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9pn1p7zo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ahmadaldujayli's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1atccf47) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1atccf47/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ahmadaldujayli')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
njrosati/q-FrozenLake-v1-4x4-noSlippery
|
njrosati
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="njrosati/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
njrosati/q-Taxi-v3
|
njrosati
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 364 |
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="njrosati/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SRobbins/Reinforce-CartPole-v1
|
SRobbins
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
johnhudzinatr/a2c-AntBulletEnv-v0
|
johnhudzinatr
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Alien-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,263 |
# (CleanRL) **PPO** Agent Playing **Alien-v5**
This is a trained model of a PPO agent playing Alien-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Alien-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Alien-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Alien-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Amidar-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,271 |
# (CleanRL) **PPO** Agent Playing **Amidar-v5**
This is a trained model of a PPO agent playing Amidar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Amidar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Amidar-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Amidar-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Bowling-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,279 |
# (CleanRL) **PPO** Agent Playing **Bowling-v5**
This is a trained model of a PPO agent playing Bowling-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Bowling-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Bowling-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Bowling-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Freeway-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,279 |
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Riverraid-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,295 |
# (CleanRL) **PPO** Agent Playing **Riverraid-v5**
This is a trained model of a PPO agent playing Riverraid-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Riverraid-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Riverraid-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Riverraid-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Hero-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,255 |
# (CleanRL) **PPO** Agent Playing **Hero-v5**
This is a trained model of a PPO agent playing Hero-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Hero-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Hero-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Hero-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FishingDerby-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,319 |
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5**
This is a trained model of a PPO agent playing FishingDerby-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id FishingDerby-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'FishingDerby-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/RoadRunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['RoadRunner-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,303 |
# (CleanRL) **PPO** Agent Playing **RoadRunner-v5**
This is a trained model of a PPO agent playing RoadRunner-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id RoadRunner-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/RoadRunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id RoadRunner-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'RoadRunner-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Robotank-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Robotank-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,287 |
# (CleanRL) **PPO** Agent Playing **Robotank-v5**
This is a trained model of a PPO agent playing Robotank-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Robotank-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Robotank-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Robotank-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Robotank-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Robotank-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Robotank-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Assault-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,279 |
# (CleanRL) **PPO** Agent Playing **Assault-v5**
This is a trained model of a PPO agent playing Assault-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Assault-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Assault-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Assault-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['ChopperCommand-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,335 |
# (CleanRL) **PPO** Agent Playing **ChopperCommand-v5**
This is a trained model of a PPO agent playing ChopperCommand-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id ChopperCommand-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'ChopperCommand-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Gravitar-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,287 |
# (CleanRL) **PPO** Agent Playing **Gravitar-v5**
This is a trained model of a PPO agent playing Gravitar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Gravitar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Gravitar-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Gravitar-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Krull-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Krull-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,263 |
# (CleanRL) **PPO** Agent Playing **Krull-v5**
This is a trained model of a PPO agent playing Krull-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Krull-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Krull-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Krull-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Pong-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,255 |
# (CleanRL) **PPO** Agent Playing **Pong-v5**
This is a trained model of a PPO agent playing Pong-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Pong-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pong-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Pong-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pong-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Pong-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Pong-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.