repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
eshwarprasadS/FrozenLake_PPO
|
eshwarprasadS
| null | 11 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 348 |
# **PPO** Agent playing **FrozenLake-v1**
This is a trained model of a **PPO** agent playing **FrozenLake-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
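The stub above is the unfilled template. Below is a minimal loading sketch with `huggingface_sb3`; the checkpoint filename is an assumption, so check the repo's file list for the actual `.zip` name.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename -- verify against the files actually uploaded to the repo.
checkpoint = load_from_hub(repo_id="eshwarprasadS/FrozenLake_PPO", filename="ppo-FrozenLake-v1.zip")
model = PPO.load(checkpoint)
```
Once loaded, the policy can be queried with `model.predict(observation)` inside a normal Gym loop.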
|
Gaivoronsky/Dogge
|
Gaivoronsky
| null | 32 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 818 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: Gaivoronsky/Dogge
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
LouisHernandez/PandaReach_A2C
|
LouisHernandez
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachJointsDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 370 |
# **a2c** Agent playing **PandaReachJointsDense-v2**
This is a trained model of an **a2c** agent playing **PandaReachJointsDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
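As above, the usage stub is unfilled. A minimal sketch, assuming the checkpoint is a standard SB3 `.zip` (the filename below is a guess and should be checked against the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename -- check the repo's file list.
checkpoint = load_from_hub(repo_id="LouisHernandez/PandaReach_A2C", filename="a2c-PandaReachJointsDense-v2.zip")
model = A2C.load(checkpoint)
```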
|
Sebbock/xtremedistil-l12-h384-uncased-trained-squad
|
Sebbock
|
bert
| 15 | 10 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 958 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
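Since the usage sections above are placeholders, here is a minimal inference sketch with the `transformers` question-answering pipeline; the question and context are purely illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sebbock/xtremedistil-l12-h384-uncased-trained-squad")
result = qa(
    question="What does an extractive QA model return?",
    context="An extractive question-answering model returns the span of the context that best answers the question.",
)
print(result["answer"], result["score"])
```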
|
tayfen/poca-SoccerTwos
|
tayfen
| null | 20 | 440 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 840 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: tayfen/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
hamdan07/UltraSound-Lung
|
hamdan07
|
vit
| 5 | 3 |
transformers
| 0 |
image-classification
| true | false | false | null | null |
['hamdan07/autotrain-data-lungultrasound']
|
{'emissions': 1.3971381846584354}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['autotrain', 'vision', 'image-classification']
| false | true | true | 394 |
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 3310291874
- CO2 Emissions (in grams): 1.3971
## Validation Metrics
- Loss: 0.001
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
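The card reports metrics only; below is a minimal inference sketch with the `transformers` image-classification pipeline. The image path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hamdan07/UltraSound-Lung")
# Replace with the path or URL of a lung ultrasound image.
predictions = classifier("example_ultrasound.png")
print(predictions)
```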
|
rlucasz93/FrozenLake-v1
|
rlucasz93
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 381 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the pickle-loading helper from the Deep RL Course notebooks (see the sketch below).
model = load_from_hub(repo_id="rlucasz93/FrozenLake-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
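`load_from_hub` itself is not shown in the card. A minimal sketch of such a helper, assuming the Q-table was uploaded as a pickle file (as in the Deep RL Course notebooks):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled model dict from the Hub and deserialize it.
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```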
|
coreml/coreml-Counterfeit
|
coreml
| null | 9 | 0 | null | 3 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coreml', 'stable-diffusion', 'text-to-image']
| false | true | true | 4,764 |
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with the CPU & GPU option.<br>
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# Note #2: "V2.5-VAE" versions have the Counterfeit v2.5 VAE embedded.
# Counterfeit:
Source(s): [Hugging Face](https://huggingface.co/gsdf) - [CivitAI](https://civitai.com/models/4468/counterfeit-v25)
Counterfeit is an anime-style Stable Diffusion model.
DreamBooth + Merge Block Weights + Merge LoRA
# Counterfeit-V2.5 e.g.

```
((masterpiece,best quality)),1girl, solo, animal ears, rabbit, barefoot, knees up, dress, sitting, rabbit ears, short sleeves, looking at viewer, grass, short hair, smile, white hair, puffy sleeves, outdoors, puffy short sleeves, bangs, on ground, full body, animal, white dress, sunlight, brown eyes, dappled sunlight, day, depth of field
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 448x768, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)),1girl, from below, solo, school uniform, serafuku, sky, cloud, black hair, skirt, sailor collar, looking at viewer, short hair, building, bangs, neckerchief, long sleeves, cloudy sky, power lines, shirt, cityscape, pleated skirt, scenery, blunt bangs, city, night, black sailor collar, closed mouth, black skirt, medium hair, school bag , holding bag
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 832x512, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)),2girls, black kimono, black legwear, black ribbon, black hair, cherry blossoms, day, flower, hair bun, hair ribbon, japanese clothes, kimono, long hair, looking at viewer, looking back, multiple girls, obi, outdoors, red eyes, red hair, ribbon, sandals, single hair bun, stairs, standing, statue, torii, tree, white kimono, yellow eyes
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 640x960, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)),1girl, bangs, blue eyes, blurry background, branch, brown hair, dappled sunlight, flower, from side, hair flower, hair ornament, japanese clothes, kimono, leaf, (maple leaf:1.9), obi, outdoors, sash, solo, sunlight, upper body
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality))1girl, solo, black skirt, blue eyes, electric guitar, guitar, headphones, holding, holding plectrum, instrument, long hair, , music, one side up, pink hair, playing guiter, pleated skirt, black shirt, indoors
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

```
((masterpiece,best quality)), 1girl, food, fruit, solo, skirt, shop, indoors, jacket, shopping, basket, jewelry, shirt, shelf, short hair, black hair, plaid skirt, black jacket, dutch angle, yellow eyes, looking at viewer
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```
|
rlucasz93/taxi-v3
|
rlucasz93
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 363 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="rlucasz93/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_192
|
gokuls
|
distilbert
| 19 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,066 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7029
- Accuracy: 0.6540
- F1: 0.1254
- Combined Score: 0.3897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.8495 | 1.0 | 29671 | 0.7150 | 0.6333 | 0.0086 | 0.3210 |
| 0.7654 | 2.0 | 59342 | 0.7273 | 0.6339 | 0.0121 | 0.3230 |
| 0.7305 | 3.0 | 89013 | 0.7241 | 0.6400 | 0.0479 | 0.3440 |
| 0.7108 | 4.0 | 118684 | 0.7147 | 0.6381 | 0.0380 | 0.3381 |
| 0.698 | 5.0 | 148355 | 0.7192 | 0.6414 | 0.0564 | 0.3489 |
| 0.6891 | 6.0 | 178026 | 0.7239 | 0.6357 | 0.0232 | 0.3295 |
| 0.6823 | 7.0 | 207697 | 0.7141 | 0.6442 | 0.0723 | 0.3583 |
| 0.6771 | 8.0 | 237368 | 0.7112 | 0.6491 | 0.1004 | 0.3748 |
| 0.6729 | 9.0 | 267039 | 0.7156 | 0.6494 | 0.1022 | 0.3758 |
| 0.6694 | 10.0 | 296710 | 0.7185 | 0.6502 | 0.1053 | 0.3777 |
| 0.6664 | 11.0 | 326381 | 0.7129 | 0.6508 | 0.1085 | 0.3796 |
| 0.6639 | 12.0 | 356052 | 0.7112 | 0.6508 | 0.1080 | 0.3794 |
| 0.6617 | 13.0 | 385723 | 0.7105 | 0.6542 | 0.1260 | 0.3901 |
| 0.6597 | 14.0 | 415394 | 0.7029 | 0.6540 | 0.1254 | 0.3897 |
| 0.658 | 15.0 | 445065 | 0.7094 | 0.6486 | 0.0964 | 0.3725 |
| 0.6564 | 16.0 | 474736 | 0.7072 | 0.6510 | 0.1084 | 0.3797 |
| 0.655 | 17.0 | 504407 | 0.7049 | 0.6557 | 0.1333 | 0.3945 |
| 0.6537 | 18.0 | 534078 | 0.7051 | 0.6542 | 0.1269 | 0.3905 |
| 0.6526 | 19.0 | 563749 | 0.7096 | 0.6601 | 0.1573 | 0.4087 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
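The usage sections above are placeholders. A minimal inference sketch with the `transformers` text-classification pipeline, assuming the checkpoint keeps the standard two-label QQP head (duplicate vs. not duplicate); the question pair is illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_192",
)
# QQP-style input: a pair of questions to be checked for duplication.
print(classifier({"text": "How do I learn Python?", "text_pair": "What is the best way to learn Python?"}))
```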
|
HuggingFaceM4/opt-1.3b-fp16-8b-samples
|
HuggingFaceM4
|
opt
| 11 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,345 |
This model is the outcome of an experiment: training the https://huggingface.co/facebook/opt-1.3b
architecture from scratch for just 8B tokens in fp16, fp32 and bf16, so that the resulting models
can be compared when used to train a multimodal model. It can, of course, be used for any other
purpose; just be aware that these models are very undertrained. Most language models are trained
for about 300B tokens, whereas this one saw only 8B.
The 3 repositories are:
- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp16-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp32-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-bf16-8b-samples
## The training
get transformers:
```
git clone https://github.com/huggingface/transformers
cd transformers
```
Prepare an initialized opt-1.3b model:
```
cat << EOT > prep-fp16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-1.3b"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-1.3b-fp16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
```
Run:
```
python prep-fp16.py
```
Train from scratch on a single 8x 80GB A100 node on `realnewslike` subset of https://huggingface.co/datasets/c4:
```
git clone https://github.com/huggingface/transformers
cd transformers
PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=8 \
--nnode=1 \
--node_rank=0 \
--master_addr=127.0.0.1 \
--master_port=9901 \
examples/pytorch/language-modeling/run_clm.py \
--fp16 \
--tf32 1 \
--seed 42 \
--dataset_name c4 \
--dataset_config_name realnewslike \
--model_name_or_path opt-1.3b-fp16 \
--per_device_train_batch_size 6 \
--per_device_eval_batch_size 6 \
--gradient_accumulation_steps 2 \
--do_train \
--logging_steps 5 \
--save_steps 1000 \
--eval_steps 1000 \
--weight_decay 0.1 \
--num_train_epochs 1 \
--adam_beta1 0.9 \
--adam_beta2 0.95 \
--learning_rate 0.0002 \
--lr_scheduler_type linear \
--warmup_steps 1000 \
--report_to tensorboard \
--output_dir saved \
--logging_dir tb \
--log_level warning \
--preprocessing_num_workers 32
```
The training took about 40h.
|
HuggingFaceM4/opt-1.3b-fp32-8b-samples
|
HuggingFaceM4
|
opt
| 10 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,332 |
This model is the outcome of an experiment: training the https://huggingface.co/facebook/opt-1.3b
architecture from scratch for just 8B tokens in fp16, fp32 and bf16, so that the resulting models
can be compared when used to train a multimodal model. It can, of course, be used for any other
purpose; just be aware that these models are very undertrained. Most language models are trained
for about 300B tokens, whereas this one saw only 8B.
The 3 repositories are:
- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp16-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp32-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-bf16-8b-samples
## The training
get transformers:
```
git clone https://github.com/huggingface/transformers
cd transformers
```
Prepare an initialized opt-1.3b model:
```
cat << EOT > prep-fp32.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-1.3b"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-1.3b-fp32"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
```
Run:
```
python prep-fp32.py
```
Train from scratch on a single 8x 80GB A100 node on `realnewslike` subset of https://huggingface.co/datasets/c4:
```
git clone https://github.com/huggingface/transformers
cd transformers
PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=8 \
--nnode=1 \
--node_rank=0 \
--master_addr=127.0.0.1 \
--master_port=9901 \
examples/pytorch/language-modeling/run_clm.py \
--tf32 1 \
--seed 42 \
--dataset_name c4 \
--dataset_config_name realnewslike \
--model_name_or_path opt-1.3b-fp32 \
--per_device_train_batch_size 6 \
--per_device_eval_batch_size 6 \
--gradient_accumulation_steps 2 \
--do_train \
--logging_steps 5 \
--save_steps 1000 \
--eval_steps 1000 \
--weight_decay 0.1 \
--num_train_epochs 1 \
--adam_beta1 0.9 \
--adam_beta2 0.95 \
--learning_rate 0.0002 \
--lr_scheduler_type linear \
--warmup_steps 1000 \
--report_to tensorboard \
--output_dir saved \
--logging_dir tb \
--log_level warning \
--preprocessing_num_workers 32
```
The training took about 40h.
|
HuggingFaceM4/opt-1.3b-bf16-8b-samples
|
HuggingFaceM4
|
opt
| 11 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,345 |
This model is the outcome of an experiment: training the https://huggingface.co/facebook/opt-1.3b
architecture from scratch for just 8B tokens in fp16, fp32 and bf16, so that the resulting models
can be compared when used to train a multimodal model. It can, of course, be used for any other
purpose; just be aware that these models are very undertrained. Most language models are trained
for about 300B tokens, whereas this one saw only 8B.
The 3 repositories are:
- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp16-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-fp32-8b-samples
- https://huggingface.co/HuggingFaceM4/opt-1.3b-bf16-8b-samples
## The training
get transformers:
```
git clone https://github.com/huggingface/transformers
cd transformers
```
Prepare an initialized opt-1.3b model:
```
cat << EOT > prep-bf16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-1.3b"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-1.3b-bf16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
```
Run:
```
python prep-bf16.py
```
Train from scratch on a single 8x 80GB A100 node on `realnewslike` subset of https://huggingface.co/datasets/c4:
```
git clone https://github.com/huggingface/transformers
cd transformers
PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=8 \
--nnode=1 \
--node_rank=0 \
--master_addr=127.0.0.1 \
--master_port=9901 \
examples/pytorch/language-modeling/run_clm.py \
--bf16 \
--tf32 1 \
--seed 42 \
--dataset_name c4 \
--dataset_config_name realnewslike \
--model_name_or_path opt-1.3b-bf16 \
--per_device_train_batch_size 6 \
--per_device_eval_batch_size 6 \
--gradient_accumulation_steps 2 \
--do_train \
--logging_steps 5 \
--save_steps 1000 \
--eval_steps 1000 \
--weight_decay 0.1 \
--num_train_epochs 1 \
--adam_beta1 0.9 \
--adam_beta2 0.95 \
--learning_rate 0.0002 \
--lr_scheduler_type linear \
--warmup_steps 1000 \
--report_to tensorboard \
--output_dir saved \
--logging_dir tb \
--log_level warning \
--preprocessing_num_workers 32
```
The training took about 40h.
|
Rolo/Reinforce-Pixelcopter
|
Rolo
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
alex-uv2/ddpm-celebahq-finetuned-butterflies-2epochs
|
alex-uv2
| null | 6 | 0 |
diffusers
| 0 |
unconditional-image-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
| false | true | true | 346 |
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alex-uv2/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
MultiversexPeeps/the_pink_spider
|
MultiversexPeeps
| null | 21 | 7 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,371 |
[](https://huggingface.co/spaces/MultiversexPeeps/the_pink_spider)
### The Pink Spider Dreambooth model, trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
duskspider (use that in your prompt)
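A minimal local-inference sketch with `diffusers`, assuming the repo hosts a standard Stable Diffusion checkpoint; the prompt is only an illustration of using the concept token.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("MultiversexPeeps/the_pink_spider", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# "duskspider" is the concept token named in the card.
image = pipe("duskspider, a pink spider resting on a neon web, digital art").images[0]
image.save("duskspider.png")
```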
|
lucataco/pokemon-lora
|
lucataco
| null | 13 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 377 |
# LoRA text2image fine-tuning - https://huggingface.co/lucataco/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
tthabibe/t5-small-finetuned-xsum
|
tthabibe
|
t5
| 25 | 2 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,527 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tthabibe/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8595
- Validation Loss: 2.9809
- Train Rouge1: 25.7762
- Train Rouge2: 6.4553
- Train Rougel: 20.7431
- Train Rougelsum: 20.7595
- Train Gen Len: 18.6696
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.002, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.8595 | 2.9809 | 25.7762 | 6.4553 | 20.7431 | 20.7595 | 18.6696 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.10.0
- Datasets 2.9.0
- Tokenizers 0.11.0
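The usage sections are placeholders; below is a minimal summarization sketch. Because the checkpoint was trained with Keras, the TensorFlow weights are loaded explicitly; the article text is a placeholder.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="tthabibe/t5-small-finetuned-xsum", framework="tf")
article = "Replace this with the article you want summarized. XSum-style models aim for a single-sentence summary."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```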
|
ben-yu/Reinforce-Pixelcopter_v2
|
ben-yu
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mocker/KaBoom
|
mocker
| null | 14 | 0 | null | 18 | null | false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['art']
| false | true | true | 3,883 |
# In short,
- FaceBomb : Covers from anime to 2.5D style. Suited for general use.
- recipe : 0.5((0.5(AbyssOrangeMix2_hard) + 0.5(pastelmix-better-vae-fp32)) + 0.5(CounterfeitV25_25)) + 0.5(dalcefoV3Painting_dalcefoV3Painting)
- ColorBomb : FaceBomb + vivid color and lighting. A bit picky about prompts.
- recipe : dalcefoV3Painting_dalcefoV3Painting + 0.5(ultracolorV4_ultracolorV4 - CounterfeitV25_25)
- HyperBomb : Strong anime style w/ highly saturated color.
- recipe : 0.5((0.5(AbyssOrangeMix2_hard) + 0.5(pastelmix-better-vae-fp32)) + 0.5(CounterfeitV25_25)) + 0.5(dalcefoV3Painting_dalcefoV3Painting) + 0.3(0.8(pastelMixStylizedAnime_pastelMixPrunedFP16) + 0.2(CounterfeitV25_25) - f222)
# Recommended Setting
## VAE
- If the color appears dull or washed out, try applying a VAE. I used `kl-f8-anime2`:
- https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt
## Sampling method and Hires.fix
1. `DPM++ SDE Karras: 24~32 steps` / `R-ESRGAN 4x+ Anime6B: 2x, 14 steps` / `Denoising strength:0.45 ~ 0.55`
2. `DDIM: 24~32 steps` / `Latent: 2x 14 steps` / `Denoising Strength:0.45 ~ 0.7`
- The first option yields better results in general. Recommended.
- The second option was 1.5 ~ 2 times faster on my system, but the output was questionable, especially for ColorBomb.
## FaceBomb
- Positive : `(masterpiece, sidelighting, finely detailed beautiful eyes: 1.2), masterpiece*portrait, realistic, 3d face, lustrous skin, `
- Negative : `(worst quality, low quality:1.4), watermark, logo,`
## ColorBomb
- Positive : `(masterpiece, sidelighting, finely detailed beautiful eyes: 1.2), (ultra-detailed, high-resolution: 1.2), beautiful girl, { color } { color } theme, `
- e.g. black gold theme
- Negative : `(worst quality, low quality:1.4), watermark, logo,`
## HyperBomb
- Positive : `(masterpiece, sidelighting, finely detailed beautiful eyes: 1.2),`
- Negative : `(worst quality, low quality:1.4), watermark, logo,`
# Example
- More pictures in folder.
- Below are the ideal/intended outputs.

### FaceBomb
(masterpiece, sidelighting, finely detailed beautiful eyes: 1.2), masterpiece*portrait, realistic, 3d face, glowing eyes, shiny hair, lustrous skin, solo, embarassed
Negative prompt: (worst quality, low quality:1.4), watermark, logo,
Steps: 32, Sampler: DPM++ SDE Karras, CFG scale: 9, Seed: 3624413002, Size: 512x768, Model hash: aad629159b, Model: __Custom_FaceBombMix-fp16-no-ema, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires steps: 14, Hires upscaler: R-ESRGAN 4x+ Anime6B
---

### ColorBomb
((masterpiece, best quality, ultra-detailed, high-resolution)), solo, beautiful girl, gleaming eye, perfect eye, age 15, black white gold theme,
Negative prompt: (worst quality, low quality:1.4), (depth of field, blurry:1.2), (greyscale, monochrome:1.1), 3D face, cropped, lowres, text, jpeg artifacts, signature, watermark, username, blurry, artist name, trademark, watermark, title, (tan, muscular, sd character:1.1), multiple view, Reference sheet, non-linear background, blurred background, bad anatomy, cropped hands, extra digit, fewer digit,
Steps: 24, Sampler: DDIM, CFG scale: 7, Seed: 3050494714, Size: 512x768, Model hash: 627f50eea8, Model: __Custom_ColorBomb-fp16-no-ema, Denoising strength: 0.7, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires steps: 14, Hires upscaler: Latent
---

### HyperBomb
(masterpiece, sidelighting, finely detailed beautiful eyes: 1.2),
Negative prompt: (worst quality, low quality:1.4), watermark, logo,
Steps: 32, Sampler: DDIM, CFG scale: 9, Seed: 2411870881, Size: 768x512, Model hash: 16c6ca45b1, Model: __Custom_HyperBombMix-fp16-no-ema, Denoising strength: 0.7, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires steps: 14, Hires upscaler: Latent
|
kejian/weird_combo
|
kejian
| null | 2 | 0 | null | 0 | null | false | false | false |
mit
|
['en']
|
['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 8,207 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/weird_combo
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1661599744},
'generation': {'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 4096}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '38f6dddb98c264d97f85ef4bcdb0ac6c6c88aeeb'},
'path_or_name': 'tomekkorbak/hungry_saha'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/weird_combo',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 10,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 6294,
'save_strategy': 'no',
'seed': 42,
'tokens_already_seen': 1661599744,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1rlo75qw
|
sgoodfriend/ppo-Acrobot-v1
|
sgoodfriend
| null | 64 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Acrobot-v1', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 4,715 |
# **PPO** Agent playing **Acrobot-v1**
This is a trained model of a **PPO** agent playing **Acrobot-v1** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model comes from 3 training runs of **PPO** agents using different initial seeds. The agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each run. This submission loads the best model from each run, reevaluates it, and selects the best of these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:-----------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | Acrobot-v1 | 4 | -72.5 | 7.68115 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/bzab0jtv) |
| ppo | Acrobot-v1 | 5 | -71.875 | 9.55167 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/zqord0fg) |
| ppo | Acrobot-v1 | 6 | -74.375 | 14.5081 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/y1w2hqhu) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be different enough that similar results cannot be reproduced.
You might need to check out the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/bzab0jtv
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance of reproducing these results, you'll want to check out the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env Acrobot-v1 --seed 4
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
ent_coef: 0
gae_lambda: 0.94
gamma: 0.99
n_epochs: 4
n_steps: 256
env: Acrobot-v1
env_hyperparams:
n_envs: 16
normalize: true
n_timesteps: 1000000
seed: 4
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_128
|
gokuls
|
mobilebert
| 17 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,514 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4602
- Pearson: 0.1596
- Spearmanr: 0.1582
- Combined Score: 0.1589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|:--------------:|
| 0.5444 | 1.0 | 2518 | 1.4965 | 0.1589 | 0.1763 | 0.1676 |
| 0.3254 | 2.0 | 5036 | 1.5276 | 0.1502 | 0.1674 | 0.1588 |
| 0.2847 | 3.0 | 7554 | 1.5430 | 0.1587 | 0.1680 | 0.1634 |
| 0.2376 | 4.0 | 10072 | 1.6906 | 0.1669 | 0.1786 | 0.1728 |
| 0.1741 | 5.0 | 12590 | 1.4788 | 0.1662 | 0.1725 | 0.1694 |
| 0.1315 | 6.0 | 15108 | 1.5662 | 0.1640 | 0.1700 | 0.1670 |
| 0.1055 | 7.0 | 17626 | 1.5100 | 0.1663 | 0.1698 | 0.1680 |
| 0.0879 | 8.0 | 20144 | 1.4602 | 0.1596 | 0.1582 | 0.1589 |
| 0.0739 | 9.0 | 22662 | 1.6612 | 0.1584 | 0.1621 | 0.1603 |
| 0.0632 | 10.0 | 25180 | 1.5825 | 0.1489 | 0.1547 | 0.1518 |
| 0.0548 | 11.0 | 27698 | 1.5946 | 0.1421 | 0.1461 | 0.1441 |
| 0.0473 | 12.0 | 30216 | 1.6515 | 0.1526 | 0.1548 | 0.1537 |
| 0.0415 | 13.0 | 32734 | 1.6544 | 0.1506 | 0.1478 | 0.1492 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
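The usage sections are placeholders. A minimal sketch of scoring a sentence pair directly with the model; STS-B is a regression task, so the single logit is a similarity score (roughly on a 0-5 scale). The example sentences are illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_128"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(score)
```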
|
GBaker/clinical-longformer-medqa-usmle-nocontext
|
GBaker
|
longformer
| 13 | 0 |
transformers
| 0 |
multiple-choice
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,425 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical-longformer-medqa-usmle-nocontext
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3962
- Accuracy: 0.2930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 159 | 1.3778 | 0.2962 |
| No log | 2.0 | 318 | 1.3731 | 0.2922 |
| No log | 3.0 | 477 | 1.3962 | 0.2930 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
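The usage sections are placeholders. A minimal sketch of question-only (no context) multiple-choice inference; the question, options, and input format are assumptions that illustrate the standard `AutoModelForMultipleChoice` pattern, not necessarily the exact preprocessing used in fine-tuning.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "GBaker/clinical-longformer-medqa-usmle-nocontext"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

question = "Deficiency of which vitamin classically causes scurvy?"
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# One (question, option) pair per candidate answer; the model scores all candidates jointly.
enc = tokenizer([question] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # add the num_choices dimension
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(-1).item()])
```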
|
pryjuli/ppo-LunarLander-v2
|
pryjuli
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
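The stub above is the unfilled template. A fuller sketch that downloads the checkpoint and evaluates it; the filename is an assumption, so check the repo's file list.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename -- verify against the files uploaded to the repo.
checkpoint = load_from_hub(repo_id="pryjuli/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```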
|
zlicastro/zl-ppo-SnowballTarget
|
zlicastro
| null | 24 | 2 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 859 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: zlicastro/zl-ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
rlucasz93/atari
|
rlucasz93
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,220 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rlucasz93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rlucasz93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rlucasz93
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
dosukebewitch/WhiteMixs
|
dosukebewitch
| null | 6 | 0 | null | 8 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 1,486 |
## Examples

```
(masterpiece, best quality:1.2), 1girl,
NP: (worst quality, low quality, medium quality:1.4), (depth of field, blurry:1.2),
Steps: 35, Sampler: DPM++ SDE Karras, CFG scale: 7,
Seed: 2405394752, Size: 512x832, Model hash: 3dd41b2474,
Denoising strength: 0.3, Clip skip: 2, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```

```
(masterpiece:1.2), 1girl, undertaker,
NP: (worst quality, low quality, medium quality:1.4), (depth of field, blurry:1.2), bad anatomy, bad hands, text, missing fingers, extra digit, fewer digits,
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7,
Seed: 3862355089, Size: 512x832, Model hash: 3dd41b2474,
Denoising strength: 0.3, Clip skip: 2, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```

```
(masterpiece:1.2), 1girl, off-shoulder sweater,
NP: (worst quality, low quality, medium quality:1.4), (depth of field, blurry:1.2), bad anatomy, bad hands, text, missing fingers, extra digit, fewer digits,
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7,
Seed: 3862355089, Size: 512x832, Model hash: 3dd41b2474,
Denoising strength: 0.3, Clip skip: 2, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
|
ryanaspen/q-FrozenLake-v1-4x4-noSlippery
|
ryanaspen
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 398 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ryanaspen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ryanaspen/q-Taxi-v3
|
ryanaspen
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 365 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ryanaspen/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zlicastro/zl-ppo-PyramidsRND
|
zlicastro
| null | 16 | 2 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 838 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: zlicastro/zl-ppo-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sgoodfriend/ppo-CartPole-v1
|
sgoodfriend
| null | 63 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 4,886 |
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | CartPole-v1 | 4 | 500 | 0 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/09gea50g) |
| ppo | CartPole-v1 | 5 | 500 | 0 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/p1m90ddv) |
| ppo | CartPole-v1 | 6 | 500 | 0 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/vg3gskmq) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/vg3gskmq
```
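As a rough illustration of what resolving `--wandb-run-path` involves (this is not necessarily how enjoy.py does it), the WandB public API can fetch the run's config and uploaded files:
```python
# Hedged sketch: pull the run config and artifacts with the WandB public API.
# Illustrative only; enjoy.py in the repo may resolve the run differently.
import wandb

api = wandb.Api()
run = api.run("sgoodfriend/rl-algo-impls-benchmarks/vg3gskmq")

print(run.config["algo"], run.config["env"])  # hyperparameters live in the run config
for f in run.files():
    f.download(replace=True)                  # downloads uploaded artifacts, including model weights
```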
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env CartPole-v1 --seed 6
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 256
clip_range: 0.2
clip_range_decay: linear
ent_coef: 0
gae_lambda: 0.8
gamma: 0.98
learning_rate: 0.001
learning_rate_decay: linear
n_epochs: 20
n_steps: 32
env: CartPole-v1
env_hyperparams:
n_envs: 8
eval_params:
n_episodes: 10
save_best: true
step_freq: 25000
n_timesteps: 100000
seed: 6
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
hectorjelly/Mespil_Rangers
|
hectorjelly
| null | 24 | 423 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 844 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: hectorjelly/Mespil_Rangers
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sgoodfriend/ppo-QbertNoFrameskip-v4
|
sgoodfriend
| null | 63 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['QbertNoFrameskip-v4', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,061 |
# **PPO** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:--------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | QbertNoFrameskip-v4 | 4 | 15671.9 | 2059.52 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/tnh81mhn) |
| ppo | QbertNoFrameskip-v4 | 5 | 8981.25 | 3948.57 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/rj8cs0lf) |
| ppo | QbertNoFrameskip-v4 | 6 | 15210.9 | 834.04 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/5309kt00) |
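The starred row above follows the mean - std rule described in the Training Results paragraph; a tiny sketch with the table's numbers shows why seed 6 is selected:
```python
# Sketch of the "mean - std" selection described above, using the table's values.
evals = {4: (15671.9, 2059.52), 5: (8981.25, 3948.57), 6: (15210.9, 834.04)}

def score(seed):
    mean, std = evals[seed]
    return mean - std  # penalize high-variance results

best_seed = max(evals, key=score)
print(best_seed)  # -> 6, matching the starred row
```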
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/5309kt00
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env QbertNoFrameskip-v4 --seed 6
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 256
clip_range: 0.1
clip_range_decay: linear
ent_coef: 0.01
learning_rate: 0.00025
learning_rate_decay: linear
n_epochs: 4
n_steps: 128
vf_coef: 0.5
env: QbertNoFrameskip-v4
env_hyperparams:
frame_stack: 4
n_envs: 8
no_reward_fire_steps: 500
no_reward_timeout_steps: 1000
vec_env_class: subproc
eval_params:
deterministic: false
n_timesteps: 10000000
policy_hyperparams:
activation_fn: relu
seed: 6
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-SpaceInvadersNoFrameskip-v4
|
sgoodfriend
| null | 63 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,133 |
# **PPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:----------------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | SpaceInvadersNoFrameskip-v4 | 4 | 1117.19 | 391.34 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/4t397ws6) |
| ppo | SpaceInvadersNoFrameskip-v4 | 5 | 917.5 | 415 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/782d9sj0) |
| ppo | SpaceInvadersNoFrameskip-v4 | 6 | 1012.5 | 441.51 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/x5ea6llt) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/4t397ws6
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env SpaceInvadersNoFrameskip-v4 --seed 4
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 256
clip_range: 0.1
clip_range_decay: linear
ent_coef: 0.01
learning_rate: 0.00025
learning_rate_decay: linear
n_epochs: 4
n_steps: 128
vf_coef: 0.5
env: SpaceInvadersNoFrameskip-v4
env_hyperparams:
frame_stack: 4
n_envs: 8
no_reward_fire_steps: 500
no_reward_timeout_steps: 1000
vec_env_class: subproc
eval_params:
deterministic: false
n_timesteps: 10000000
policy_hyperparams:
activation_fn: relu
seed: 4
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-BreakoutNoFrameskip-v4
|
sgoodfriend
| null | 63 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['BreakoutNoFrameskip-v4', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,088 |
# **PPO** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **BreakoutNoFrameskip-v4** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:-----------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | BreakoutNoFrameskip-v4 | 4 | 379.625 | 64.1511 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/yfu2ozl6) |
| ppo | BreakoutNoFrameskip-v4 | 5 | 378.312 | 89.5145 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/rmweqidh) |
| ppo | BreakoutNoFrameskip-v4 | 6 | 391.188 | 31.2734 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/a2z0gt43) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/a2z0gt43
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env BreakoutNoFrameskip-v4 --seed 6
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 256
clip_range: 0.1
clip_range_decay: linear
ent_coef: 0.01
learning_rate: 0.00025
learning_rate_decay: linear
n_epochs: 4
n_steps: 128
vf_coef: 0.5
env: BreakoutNoFrameskip-v4
env_hyperparams:
frame_stack: 4
n_envs: 8
no_reward_fire_steps: 500
no_reward_timeout_steps: 1000
vec_env_class: subproc
eval_params:
deterministic: false
n_timesteps: 10000000
policy_hyperparams:
activation_fn: relu
seed: 6
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-HopperBulletEnv-v0
|
sgoodfriend
| null | 64 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['HopperBulletEnv-v0', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,043 |
# **PPO** Agent playing **HopperBulletEnv-v0**
This is a trained model of a **PPO** agent playing **HopperBulletEnv-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:-------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | HopperBulletEnv-v0 | 4 | 2669.16 | 94.015 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/ifzrbo4m) |
| ppo | HopperBulletEnv-v0 | 5 | 2410.88 | 26.3848 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/1omw4ne1) |
| ppo | HopperBulletEnv-v0 | 6 | 2266.49 | 8.16531 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/zzgv6lmy) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/ifzrbo4m
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env HopperBulletEnv-v0 --seed 4
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 128
clip_range: 0.4
clip_range_decay: linear
ent_coef: 0
gae_lambda: 0.9
gamma: 0.99
learning_rate: 3.0e-05
max_grad_norm: 0.5
n_epochs: 20
n_steps: 512
sde_sample_freq: 4
vf_coef: 0.5
env: HopperBulletEnv-v0
env_hyperparams:
n_envs: 16
normalize: true
n_timesteps: 2000000
policy_hyperparams:
activation_fn: relu
pi_hidden_sizes:
- 256
- 256
v_hidden_sizes:
- 256
- 256
seed: 4
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-Walker2DBulletEnv-v0
|
sgoodfriend
| null | 64 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Walker2DBulletEnv-v0', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,061 |
# **PPO** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **PPO** agent playing **Walker2DBulletEnv-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:---------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | Walker2DBulletEnv-v0 | 4 | 2158.99 | 19.2602 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/wlnz70hu) |
| ppo | Walker2DBulletEnv-v0 | 5 | 942.96 | 214.811 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/yp7nqxt1) |
| ppo | Walker2DBulletEnv-v0 | 6 | 1972.53 | 5.25752 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/nq3mlpmz) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/wlnz70hu
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env Walker2DBulletEnv-v0 --seed 4
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 128
clip_range: 0.4
clip_range_decay: linear
ent_coef: 0
gae_lambda: 0.9
gamma: 0.99
learning_rate: 3.0e-05
max_grad_norm: 0.5
n_epochs: 20
n_steps: 512
sde_sample_freq: 4
vf_coef: 0.5
env: Walker2DBulletEnv-v0
env_hyperparams:
n_envs: 16
normalize: true
n_timesteps: 2000000
policy_hyperparams:
activation_fn: relu
pi_hidden_sizes:
- 256
- 256
v_hidden_sizes:
- 256
- 256
seed: 4
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-AntBulletEnv-v0
|
sgoodfriend
| null | 64 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 4,989 |
# **PPO** Agent playing **AntBulletEnv-v0**
This is a trained model of a **PPO** agent playing **AntBulletEnv-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:----------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | AntBulletEnv-v0 | 4 | 2669.98 | 65.5195 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/vemwk5yn) |
| ppo | AntBulletEnv-v0 | 5 | 884.068 | 1.61404 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/iu48fxl2) |
| ppo | AntBulletEnv-v0 | 6 | 2487.6 | 47.859 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/af73b738) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/vemwk5yn
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env AntBulletEnv-v0 --seed 4
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 128
clip_range: 0.4
ent_coef: 0
gae_lambda: 0.9
gamma: 0.99
learning_rate: 3.0e-05
max_grad_norm: 0.5
n_epochs: 20
n_steps: 512
sde_sample_freq: 4
vf_coef: 0.5
env: AntBulletEnv-v0
env_hyperparams:
n_envs: 16
normalize: true
n_timesteps: 2000000
policy_hyperparams:
activation_fn: relu
pi_hidden_sizes:
- 256
- 256
v_hidden_sizes:
- 256
- 256
seed: 4
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-HalfCheetahBulletEnv-v0
|
sgoodfriend
| null | 64 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['HalfCheetahBulletEnv-v0', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,061 |
# **PPO** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of a **PPO** agent playing **HalfCheetahBulletEnv-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:------------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | HalfCheetahBulletEnv-v0 | 4 | 3258.99 | 24.787 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/59kxlygx) |
| ppo | HalfCheetahBulletEnv-v0 | 5 | 2331.99 | 12.5512 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/bnwhwv6i) |
| ppo | HalfCheetahBulletEnv-v0 | 6 | 3012.23 | 40.0919 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/0ikul7ms) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/59kxlygx
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env HalfCheetahBulletEnv-v0 --seed 4
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 128
clip_range: 0.4
ent_coef: 0
gae_lambda: 0.9
gamma: 0.99
learning_rate: 3.0e-05
max_grad_norm: 0.5
n_epochs: 20
n_steps: 512
sde_sample_freq: 4
vf_coef: 0.5
env: HalfCheetahBulletEnv-v0
env_hyperparams:
n_envs: 16
normalize: true
n_timesteps: 2000000
policy_hyperparams:
activation_fn: relu
pi_hidden_sizes:
- 256
- 256
v_hidden_sizes:
- 256
- 256
seed: 4
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-PongNoFrameskip-v4
|
sgoodfriend
| null | 63 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PongNoFrameskip-v4', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,052 |
# **PPO** Agent playing **PongNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **PongNoFrameskip-v4** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:-------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | PongNoFrameskip-v4 | 4 | 20.875 | 0.330719 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/t7vsbb85) |
| ppo | PongNoFrameskip-v4 | 5 | 20.9375 | 0.242061 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/u5p2urji) |
| ppo | PongNoFrameskip-v4 | 6 | 18.9375 | 3.63092 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/gp689pd4) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/u5p2urji
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env PongNoFrameskip-v4 --seed 5
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 256
clip_range: 0.1
clip_range_decay: linear
ent_coef: 0.01
learning_rate: 0.00025
learning_rate_decay: linear
n_epochs: 4
n_steps: 128
vf_coef: 0.5
env: PongNoFrameskip-v4
env_hyperparams:
frame_stack: 4
n_envs: 8
no_reward_fire_steps: 500
no_reward_timeout_steps: 1000
vec_env_class: subproc
eval_params:
deterministic: false
n_timesteps: 10000000
policy_hyperparams:
activation_fn: relu
seed: 5
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-MountainCarContinuous-v0
|
sgoodfriend
| null | 68 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['MountainCarContinuous-v0', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 4,983 |
# **PPO** Agent playing **MountainCarContinuous-v0**
This is a trained model of a **PPO** agent playing **MountainCarContinuous-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/448odm37.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [fbc943f](https://github.com/sgoodfriend/rl-algo-impls/tree/fbc943f151b95afc4905a67a3835fb6b18c6a5e4). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:-------------------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | MountainCarContinuous-v0 | 1 | 90.822 | 27.475 | 12 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/p26x7fha) |
| ppo | MountainCarContinuous-v0 | 2 | 98.7517 | 0.140611 | 12 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/chyhmond) |
| ppo | MountainCarContinuous-v0 | 3 | 82.5505 | 37.0988 | 12 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/jrg9oy85) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[fbc943f](https://github.com/sgoodfriend/rl-algo-impls/tree/fbc943f151b95afc4905a67a3835fb6b18c6a5e4).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/chyhmond
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [fbc943f](https://github.com/sgoodfriend/rl-algo-impls/tree/fbc943f151b95afc4905a67a3835fb6b18c6a5e4). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env MountainCarContinuous-v0 --seed 2
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/448odm37 were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 256
clip_range: 0.1
ent_coef: 0.01
ent_coef_decay: linear
gae_lambda: 0.9
learning_rate: 7.77e-05
max_grad_norm: 5
n_epochs: 10
n_steps: 512
vf_coef: 0.19
env: MountainCarContinuous-v0
env_hyperparams:
n_envs: 4
normalize: true
eval_params:
step_freq: 5000
n_timesteps: 100000
seed: 2
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_fbc943f
- host_150-230-44-105
```
|
sgoodfriend/ppo-MountainCar-v0
|
sgoodfriend
| null | 64 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['MountainCar-v0', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 4,750 |
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/6p2sjqtn.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:---------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | MountainCar-v0 | 4 | -113.75 | 1.47902 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/8yurshm7) |
| ppo | MountainCar-v0 | 5 | -111.688 | 10.4146 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/0s0kikzh) |
| ppo | MountainCar-v0 | 6 | -112.938 | 1.51941 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/jvwz1vhg) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/jvwz1vhg
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [5598ebc](https://github.com/sgoodfriend/rl-algo-impls/tree/5598ebc4b03054f16eebe76792486ba7bcacfc5c). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env MountainCar-v0 --seed 6
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/6p2sjqtn were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
ent_coef: 0
gae_lambda: 0.98
gamma: 0.99
n_epochs: 4
n_steps: 16
env: MountainCar-v0
env_hyperparams:
n_envs: 16
normalize: true
n_timesteps: 1000000
seed: 6
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_5598ebc
- host_192-9-145-26
```
|
sgoodfriend/ppo-CarRacing-v0
|
sgoodfriend
| null | 66 | 0 |
rl-algo-impls
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CarRacing-v0', 'ppo', 'deep-reinforcement-learning', 'reinforcement-learning']
| true | true | true | 5,071 |
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/448odm37.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [fbc943f](https://github.com/sgoodfriend/rl-algo-impls/tree/fbc943f151b95afc4905a67a3835fb6b18c6a5e4). The best and last models were kept from each training. This submission has loaded the best models from each training, re-evaluated them, and selected the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:-------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | CarRacing-v0 | 1 | 865.725 | 58.1454 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/8vyb0q44) |
| ppo | CarRacing-v0 | 2 | 693.464 | 236.712 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/a3ld38qf) |
| ppo | CarRacing-v0 | 3 | 815.26 | 141.502 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/zah43or2) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation may be sufficiently different that it cannot reproduce similar
results. You might need to checkout the commit the agent was trained on:
[fbc943f](https://github.com/sgoodfriend/rl-algo-impls/tree/fbc943f151b95afc4905a67a3835fb6b18c6a5e4).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/8vyb0q44
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [fbc943f](https://github.com/sgoodfriend/rl-algo-impls/tree/fbc943f151b95afc4905a67a3835fb6b18c6a5e4). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env CarRacing-v0 --seed 1
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/448odm37 were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 128
clip_range: 0.2
ent_coef: 0
gae_lambda: 0.95
gamma: 0.99
learning_rate: 0.0001
learning_rate_decay: linear
max_grad_norm: 0.5
n_epochs: 10
n_steps: 512
sde_sample_freq: 4
vf_coef: 0.5
env: CarRacing-v0
env_hyperparams:
frame_stack: 4
n_envs: 8
n_timesteps: 4000000
policy_hyperparams:
activation_fn: relu
cnn_feature_dim: 256
hidden_sizes:
- 256
init_layers_orthogonal: false
log_std_init: -2
share_features_extractor: false
use_sde: true
seed: 1
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_fbc943f
- host_150-230-44-105
```
|
jpopham91/poca-SoccerTwos
|
jpopham91
| null | 20 | 419 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 843 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: jpopham91/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zlicastro/zl-a2c-AntBulletEnv-v0
|
zlicastro
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's file list for the exact `.zip` name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust to the actual .zip stored in this repo
checkpoint = load_from_hub("zlicastro/zl-a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
HuyenNguyen/TTS11991
|
HuyenNguyen
|
whisper
| 16 | 22 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,252 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TTS11991
This model is a fine-tuned version of [HuyenNguyen/TTS456789](https://huggingface.co/HuyenNguyen/TTS456789) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0819
- eval_wer: 4.5933
- eval_runtime: 2572.2826
- eval_samples_per_second: 0.778
- eval_steps_per_second: 0.049
- epoch: 3.25
- step: 1500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 26
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 104
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
JessicaHsu/ppo-Huggy
|
JessicaHsu
| null | 32 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 821 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: JessicaHsu/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
robertcowher/ppo-LunarLander-v2-TEST
|
robertcowher
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's file list for the exact `.zip` name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust to the actual .zip stored in this repo
checkpoint = load_from_hub("robertcowher/ppo-LunarLander-v2-TEST", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
juanmi1234/Reinforce-CartPole8
|
juanmi1234
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Victarry/poca-SoccerTwos-v2
|
Victarry
| null | 19 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 845 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Victarry/poca-SoccerTwos-v2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sayakpaul/vit-base-patch16-224-in21k-finetuned-lora-food101
|
sayakpaul
|
vit
| 19 | 5 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['food101']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,611 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-lora-food101
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1448
- Accuracy: 0.96
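The card doesn't include an inference snippet, so here is a hedged loading sketch with 🤗 PEFT. It assumes the repo stores a LoRA adapter trained with PEFT whose fine-tuned 101-class classifier head was saved via `modules_to_save`; the image path and label handling are placeholders.
```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base_id = "google/vit-base-patch16-224-in21k"
adapter_id = "sayakpaul/vit-base-patch16-224-in21k-finetuned-lora-food101"

# food101 has 101 classes; the freshly initialized head below is assumed to be
# overwritten by the fine-tuned head stored alongside the adapter.
base = AutoModelForImageClassification.from_pretrained(base_id, num_labels=101)
model = PeftModel.from_pretrained(base, adapter_id).eval()
processor = AutoImageProcessor.from_pretrained(base_id)

inputs = processor(Image.open("pizza.jpg"), return_tensors="pt")  # placeholder image
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1).item())  # map this index to a name via the food101 label list
```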
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 9 | 0.5069 | 0.896 |
| 2.1627 | 2.0 | 18 | 0.1891 | 0.946 |
| 0.3451 | 3.0 | 27 | 0.1448 | 0.96 |
| 0.2116 | 4.0 | 36 | 0.1509 | 0.958 |
| 0.1711 | 5.0 | 45 | 0.1498 | 0.958 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
misaki1301/Maka-Fujiki-Dreambooth
|
misaki1301
| null | 4 | 0 | null | 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['art']
| false | true | true | 2,656 |
# Maka Fujiki Dreambooth + AnythingV4
This is a model trained with DreamBooth (using TheLastBen's fast DreamBooth notebook) on top of Anything V4 by andite (https://huggingface.co/andite/anything-v4.0).
The model is based on the artist [Oryou](https://www.pixiv.net/en/users/24392)'s popular character Maka Fujiki from the manga Boku no Kanojo Sensei. The current model was tuned with a learning rate of 2.0e-6 on 63 images of the artist's work collected from Danbooru and Pixiv. It supports Danbooru tags.
## Usage/Examples
These are some results you can get with the model.
e.g.
seed: 903184092
prompt: maka, outdoors, field, hat_flower, necklace, white_dress, sunflower, lying, on_grass, cleavage
negative prompt: ugly, tiling, poorly drawn hands, poorly drawn arms, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, bad anatomy, watermark, signature, low contrast, bad art, beginner, amateur, distorted face, underage, distorted tights, hands on floor



## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full [license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
## Thanks to
- TheLastBen for his fast DreamBooth training notebook
- Andite for his mix of models
- Zerocomes for testing the model and generating the examples
|
thanat/bert-finetuned-ner
|
thanat
|
bert
| 8 | 15 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,473 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# thanat/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [CoNLL-2003](https://huggingface.co/datasets/conll2003) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0280
- Validation Loss: 0.0513
- Epoch: 2
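The card has no usage example; a minimal inference sketch is below (assumptions: the repo ships TensorFlow weights only, per the Keras provenance above, and the label set follows CoNLL-2003; the example sentence is arbitrary).
```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_id = "thanat/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)  # TensorFlow checkpoint

# Group sub-word predictions into whole entities
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```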
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1691 | 0.0630 | 0 |
| 0.0484 | 0.0529 | 1 |
| 0.0280 | 0.0513 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LowGI/STT_Model_9
|
LowGI
|
wav2vec2
| 16 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,123 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STT_Model_9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2506
- Wer: 0.1718
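There is no usage snippet in the card, so here is a minimal transcription sketch (assumptions: the checkpoint loads into the standard CTC ASR pipeline and the audio file is 16 kHz mono; the path is a placeholder).
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="LowGI/STT_Model_9")
result = asr("sample.wav")  # placeholder path to a 16 kHz mono recording
print(result["text"])
```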
## Model description
More information needed
## Intended uses & limitations
More information needed
## Dataset info
- Name: LJSpeech
- Source: https://www.kaggle.com/datasets/mathurinache/the-lj-speech-dataset
- Total audio files (in Google Drive): 1420
- Total transcripts (in Google Drive): 13100
- No. of rows selected: 500
- Train-test ratio: 70:30
- No. of training samples: 350
- No. of test samples: 150
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 4.55 | 200 | 2.9217 | 0.9846 |
| No log | 9.09 | 400 | 1.2293 | 0.7093 |
| 2.3111 | 13.64 | 600 | 0.3885 | 0.3602 |
| 2.3111 | 18.18 | 800 | 0.3123 | 0.3097 |
| 0.2471 | 22.73 | 1000 | 0.3094 | 0.2737 |
| 0.2471 | 27.27 | 1200 | 0.3007 | 0.2537 |
| 0.2471 | 31.82 | 1400 | 0.2650 | 0.2008 |
| 0.0853 | 36.36 | 1600 | 0.2599 | 0.1884 |
| 0.0853 | 40.91 | 1800 | 0.2462 | 0.1734 |
| 0.0344 | 45.45 | 2000 | 0.2663 | 0.1730 |
| 0.0344 | 50.0 | 2200 | 0.2506 | 0.1718 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
zlicastro/zl-a2c-PandaReachDense-v2
|
zlicastro
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's file list for the exact `.zip` name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust to the actual .zip stored in this repo
checkpoint = load_from_hub("zlicastro/zl-a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Tune-A-Video-library/redshift-man-skiing
|
Tune-A-Video-library
| null | 17 | 0 |
diffusers
| 2 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tune-a-video', 'text-to-video', 'diffusers']
| false | true | true | 1,575 |
# Tune-A-Video - Redshift
## Model Description
- Base model: [nitrosocke/redshift-diffusion](https://huggingface.co/nitrosocke/redshift-diffusion)
- Training prompt: a man is skiing.

## Samples

Test prompt: (redshift style) [spider man/black widow/batman/hulk] is skiing.
## Usage
Clone the [github repo](https://github.com/showlab/Tune-A-Video)
```bash
git clone https://github.com/showlab/Tune-A-Video.git
```
Run inference code
```python
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
from tuneavideo.models.unet import UNet3DConditionModel
from tuneavideo.util import save_videos_grid
import torch
pretrained_model_path = "nitrosocke/redshift-diffusion"
unet_model_path = "Tune-A-Video-library/redshift-man-skiing"
unet = UNet3DConditionModel.from_pretrained(unet_model_path, subfolder='unet', torch_dtype=torch.float16).to('cuda')
pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
prompt = "(redshift style) spider man is skiing"
video = pipe(prompt, video_length=8, height=512, width=512, num_inference_steps=50, guidance_scale=7.5).videos
save_videos_grid(video, f"./{prompt}.gif")
```
## Related Papers:
- [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
- [Stable Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models
|
AdonaiHS/q-FrozenLake-v1-4x4-noSlippery
|
AdonaiHS
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # `load_from_hub` is the helper from the course's Unit 2 notebook (see the sketch below)

model = load_from_hub(repo_id="AdonaiHS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
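A self-contained greedy-rollout sketch (assumptions: the pickled dict stores the table under `"qtable"` as in the course notebook, and the classic `gym` 4-tuple step API is in use — adapt for `gymnasium`):
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id, filename):
    """Minimal stand-in for the course helper: download and unpickle the model dict."""
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="AdonaiHS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # repo name indicates the non-slippery map

state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```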
|
njrosati/dqn-SpaceInvadersNoFrameskip-v4
|
njrosati
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,216 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga njrosati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga njrosati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga njrosati
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
AdonaiHS/Taxi-v3-Unit2-part2
|
AdonaiHS
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 374 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AdonaiHS/Taxi-v3-Unit2-part2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Pearson/dqn-SpaceInvadersNoFrameskip-v4
|
Pearson
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,214 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Pearson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Pearson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Pearson
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
AdonaiHS/2-Taxi-v3-Unit2-part2
|
AdonaiHS
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 376 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AdonaiHS/2-Taxi-v3-Unit2-part2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
maskip/pretrained-m-bert-1
|
maskip
|
bert
| 8 | 15 |
transformers
| 0 | null | false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 5,165 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-m-bert-1
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.3662
- Validation Loss: 14.3784
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2573 | 10.9396 | 0 |
| 7.8912 | 10.9597 | 1 |
| 6.8354 | 11.2198 | 2 |
| 6.3659 | 11.5487 | 3 |
| 6.0787 | 11.1321 | 4 |
| 5.8528 | 11.1589 | 5 |
| 5.7689 | 11.3702 | 6 |
| 5.3858 | 11.8196 | 7 |
| 5.1194 | 12.0623 | 8 |
| 5.2013 | 11.8812 | 9 |
| 5.0820 | 11.7143 | 10 |
| 4.9735 | 12.6034 | 11 |
| 5.0704 | 11.9522 | 12 |
| 5.0098 | 12.1079 | 13 |
| 5.0869 | 12.2301 | 14 |
| 4.8156 | 12.2691 | 15 |
| 4.8344 | 12.6859 | 16 |
| 4.8416 | 12.5824 | 17 |
| 4.5567 | 13.1279 | 18 |
| 4.6677 | 12.5301 | 19 |
| 4.7248 | 12.5819 | 20 |
| 4.5435 | 12.5466 | 21 |
| 4.6501 | 12.4965 | 22 |
| 4.7348 | 12.7858 | 23 |
| 4.4649 | 12.5727 | 24 |
| 4.3665 | 12.2411 | 25 |
| 4.6434 | 12.6519 | 26 |
| 4.4239 | 13.6048 | 27 |
| 4.3893 | 13.3377 | 28 |
| 4.6514 | 12.3628 | 29 |
| 4.4743 | 12.6613 | 30 |
| 4.2777 | 12.3419 | 31 |
| 4.5667 | 13.5223 | 32 |
| 4.0886 | 13.2224 | 33 |
| 4.3032 | 13.2787 | 34 |
| 4.3670 | 13.1041 | 35 |
| 4.1487 | 12.6903 | 36 |
| 4.3331 | 13.4880 | 37 |
| 4.2907 | 13.2240 | 38 |
| 4.3252 | 12.6439 | 39 |
| 4.1121 | 13.5487 | 40 |
| 4.3273 | 14.4200 | 41 |
| 4.2030 | 14.4691 | 42 |
| 4.1795 | 13.1436 | 43 |
| 4.0424 | 14.0504 | 44 |
| 4.0158 | 12.5468 | 45 |
| 4.0108 | 13.6426 | 46 |
| 3.9515 | 13.4965 | 47 |
| 3.9743 | 13.2319 | 48 |
| 4.1075 | 13.2999 | 49 |
| 4.0501 | 12.7201 | 50 |
| 3.8606 | 13.1704 | 51 |
| 3.8056 | 13.7504 | 52 |
| 3.7682 | 13.5004 | 53 |
| 4.0676 | 13.6444 | 54 |
| 4.0957 | 13.4160 | 55 |
| 4.9373 | 14.2742 | 56 |
| 4.5111 | 13.9469 | 57 |
| 4.1604 | 13.4773 | 58 |
| 3.9956 | 12.9802 | 59 |
| 4.1232 | 14.1715 | 60 |
| 3.9857 | 12.2465 | 61 |
| 4.1082 | 13.8947 | 62 |
| 3.8659 | 13.6370 | 63 |
| 3.8396 | 13.5898 | 64 |
| 3.8220 | 13.2523 | 65 |
| 3.6864 | 13.9323 | 66 |
| 3.7541 | 13.6081 | 67 |
| 3.8218 | 12.9945 | 68 |
| 3.7251 | 13.7039 | 69 |
| 3.5017 | 12.9811 | 70 |
| 3.5342 | 12.5702 | 71 |
| 3.9520 | 12.4899 | 72 |
| 3.7465 | 13.2309 | 73 |
| 3.6003 | 14.0988 | 74 |
| 3.7954 | 13.0785 | 75 |
| 3.5654 | 13.7277 | 76 |
| 3.5591 | 13.7914 | 77 |
| 3.5355 | 13.6749 | 78 |
| 3.5903 | 13.6141 | 79 |
| 3.5371 | 13.4166 | 80 |
| 3.4502 | 12.6523 | 81 |
| 3.3372 | 13.8609 | 82 |
| 3.3071 | 14.3441 | 83 |
| 3.6932 | 13.9718 | 84 |
| 3.5619 | 13.3749 | 85 |
| 3.5016 | 13.1467 | 86 |
| 3.4279 | 14.0124 | 87 |
| 3.3140 | 13.6681 | 88 |
| 3.3575 | 12.9451 | 89 |
| 3.2268 | 12.4299 | 90 |
| 3.2001 | 14.5106 | 91 |
| 3.1390 | 13.9366 | 92 |
| 3.1230 | 13.6865 | 93 |
| 3.2337 | 13.5835 | 94 |
| 3.1397 | 13.8130 | 95 |
| 3.2095 | 13.8431 | 96 |
| 2.9553 | 14.5159 | 97 |
| 3.3319 | 12.9168 | 98 |
| 3.3662 | 14.3784 | 99 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
juanmi1234/Reinforce-PixelCopter
|
juanmi1234
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aioe/AB4.5_AC0.2
|
aioe
| null | 4 | 0 | null | 14 |
text-to-image
| false | false | false |
other
|
['en', 'ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image', 'art']
| true | true | true | 2,068 |
<br>
# 【概要(Outline)】
コンセプトは<strong>「手や指の描写が上手い3Dイラスト」</strong>です。
<br>
AIイラストは手や指の描写が下手なことが多く、せっかく良い構図のイラストが生成されても、手や指のせいで没にしなければならない時が多くありました。
<br>
その問題を解消するため、私は現存するモデルを大量に試し、手や指の描写が上手いモデルをマージすることで、完成度の高いモデルを構築することに成功しました。
<br>
<br>
The concept is <strong>"3D illustration models that are good at drawing hands and fingers."</strong>
<br>
AI illustration is often poor at drawing hands and fingers, so I wasted a lot of otherwise good generations because of them.
<br>
I tried many existing models to solve this problem and, by merging the ones that draw hands and fingers well, built higher-quality models.
<br>
<br>
# 【モデル紹介とマージ素材(Models introduction and merged materials)】
<strong>*■AB4.5-v1.0*</strong>
<br>
・anything-v4.5
<br>
・Basil_mix
<br>
→リアルな質感の人物描写が特徴的です。
<br>
(Its hallmark is characters rendered with realistic textures.)
<br>
<br>
<strong>*■AC0.2-v1.0*</strong>
<br>
・anything-v4.5
<br>
・Counterfeit-V2.5
<br>
→服装と背景の繊細な描き込みが特徴的です。
<br>
(Its hallmark is finely detailed clothing and backgrounds.)
<br>
<br>
# 【推奨設定(Recommended settings)】
・Steps:30
<br>
・CFG Scale:13
<br>
・Clip Skip:2
<br>
・Negative:(worst quality, low quality:1.2),
<br>
<br>
# 【作例(Examples)】
Positive:one girl,
<br>
<br>
Negative:(worst quality, low quality:1.2),
<br>
<br>
<strong>*■AB4.5-v1.0*</strong>
<img src="https://imgur.com/u0mjPNX.png" width="1152" height="768">
<br>
<strong>*■AC0.2-v1.0*</strong>
<img src="https://imgur.com/nwzGYK3.png" width="1152" height="768">
<br>
<br>
Positive:(one girl at the center:1.2), (masterpiece, best quality), (solo:1.3), the girl wearing (casual clothes), the background is city, building, shops, windows, (best quality),
<br>
<br>
Negative:(worst quality, low quality:1.2),
<br>
<br>
<strong>*■AB4.5-v1.0*</strong>
<img src="https://imgur.com/AHCcfzS.png" width="1152" height="768">
<br>
<strong>*■AC0.2-v1.0*</strong>
<img src="https://imgur.com/WFVqD2q.png" width="1152" height="768">
|
hectorjelly/Mespil_Rangers2
|
hectorjelly
| null | 25 | 409 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 845 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: hectorjelly/Mespil_Rangers2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
underactuated/opt-350m_rl1_v5
|
underactuated
|
opt
| 12 | 6 |
transformers
| 0 |
text-generation
| true | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 911 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_rl1_v5
This model is a fine-tuned version of [underactuated/opt-350m_mle_v3](https://huggingface.co/underactuated/opt-350m_mle_v3) on an unknown dataset.
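Since no usage example is provided, here is a minimal generation sketch (assumptions: the checkpoint loads like a standard OPT causal LM; the prompt and sampling settings are placeholders).
```python
from transformers import pipeline

generator = pipeline("text-generation", model="underactuated/opt-350m_rl1_v5")
print(generator("The meaning of life is", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```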
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
HealthTeam/mt5-small-finetuned-MultiHead-230207
|
HealthTeam
|
mt5
| 13 | 46 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,337 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-MultiHead-230207
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2185
- Bleu: 14.3905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:------:|:---------------:|:-------:|
| 3.0155 | 1.0 | 67222 | 2.3749 | 11.2986 |
| 2.7777 | 2.0 | 134444 | 2.2518 | 13.5854 |
| 2.7531 | 3.0 | 201666 | 2.2185 | 14.3905 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
polandball/GPT-Polen
|
polandball
|
gpt2
| 12 | 5 |
transformers
| 0 |
conversational
| true | false | false | null |
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational']
| false | true | true | 331 |

### Hellos, I ams of Polenball!
So you of probablys randomly findings this, and clicked on it.
## **Random things you can do while you're here.**
**Trade with me.**
**Make fun of me because i cant into space**
~~ **Invade me.** ~~
|
pfunk/Pong-v4-DQPN_p10_e0.10-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,989 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p10_e0.10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p10_e0.10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p10_e0.10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p10_e0.10 --start-policy-f 10000 --end-policy-f 1000 --evaluation-fraction 0.10 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.1,
'exp_name': 'DQPN_p10_e0.10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 10000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
burnerbaby/tensorrt-sds
|
burnerbaby
| null | 7 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['TensorRT', 'Text2Image', 'Stable Diffusion', 'Image2Image', 'SDA']
| false | true | true | 742 |
# burnerbaby/sds converted into TensorRT
<img src="https://i.imgur.com/fQS926g.png">
Model converted from diffusers into TensorRT for accelerated inference, up to 4x faster.
Originally from: https://github.com/chavinlo/sda-node
This model was automatically converted by SDA-node.
Compilation configuration:
```json
{
"_class_name": "StableDiffusionAccelerated_Base",
"_sda_version": "0.1.2",
"_trt_version": "8.5.3",
"_cuda_version": "none",
"_cudnn_version": "none",
"_onnx2trt_version": "8.5.3",
"unet": {
"precision": "fp16",
"path": "engine/unet.plan"
},
"clip": {
"path": "engine/clip.plan"
},
"de_vae": {
"path": "engine/de_vae.plan"
}
}
```
|
amoselberg/dqn-SpaceInvadersNoFrameskip-v4
|
amoselberg
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,223 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amoselberg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amoselberg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga amoselberg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ernie-ai/document-language-class-ar-en-zh
|
ernie-ai
|
vit
| 10 | 2 |
transformers
| 0 |
image-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'pytorch', 'huggingpics']
| false | true | true | 660 |
# document-language-class-ar-en-zh
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
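A minimal inference sketch, since the card has none (assumption: the image path is a placeholder pointing at a scanned document page).
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ernie-ai/document-language-class-ar-en-zh")
print(classifier("scanned_page.png"))  # returns the predicted labels with scores
```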
## Example Images
#### abstract art lines

#### arabic document

#### chinese document

#### english document

|
gayatrividhate/sentiment_analysis_SetFit
|
gayatrividhate
|
albert
| 13 | 313 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,416 |
# sentiment_analysis_SetFit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gayatrividhate/sentiment_analysis_SetFit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
achang/donut-plotqa-trained
|
achang
|
vision-encoder-decoder
| 29 | 1 |
transformers
| 0 | null | true | false | false |
mit
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 985 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-plotqa-trained
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.5.2
- Tokenizers 0.12.1
|
CoreyMorris/poca-SoccerTwos-football-is-life
|
CoreyMorris
| null | 20 | 405 |
ml-agents
| 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 862 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: CoreyMorris/poca-SoccerTwos-football-is-life
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mio/tokiwa_midori
|
mio
| null | 29 | 148 |
espnet
| 1 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['jp']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 10,528 |
## ESPnet2 TTS model
### `mio/tokiwa_midori`

This model was trained by mio using the amadeus recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 0232f540a98ece921477b961db8ae019211da9af
pip install -e .
cd egs2/amadeus/tts1
./run.sh --skip_data_prep false --skip_train true --download_model mio/tokiwa_midori
```
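A Python-level inference sketch (assumptions: `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and `Text2Speech.from_pretrained` can resolve the Hub model tag; the input sentence is a placeholder):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Downloads and caches the checkpoint from the Hub
tts = Text2Speech.from_pretrained("mio/tokiwa_midori")
out = tts("こんにちは。")  # placeholder Japanese input text
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```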
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/finetune_vits.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_midori_vits_finetune_from_jsut_32_sentence
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: false
create_graph_in_tensorboard: false
use_wandb: true
wandb_project: midori
wandb_id: null
wandb_entity: null
wandb_name: vits_finetune_midori_from_jsut
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- downloads/f3698edf589206588f58f5ec837fa516/exp/tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause/train.total_count.ave_10best.pth:tts:tts
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 5000000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/22k/raw/train/text
- text
- text
- - dump/22k/raw/train/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/22k/raw/dev/text
- text
- text
- - dump/22k/raw/dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
token_list:
- <blank>
- <unk>
- '1'
- '2'
- '0'
- '3'
- '4'
- '-1'
- '5'
- a
- o
- '-2'
- i
- '-3'
- u
- e
- k
- n
- t
- '6'
- r
- '-4'
- s
- N
- m
- pau
- '7'
- sh
- d
- g
- w
- '8'
- U
- '-5'
- I
- cl
- h
- y
- b
- '9'
- j
- ts
- ch
- '-6'
- z
- p
- '-7'
- f
- ky
- ry
- '-8'
- gy
- '-9'
- hy
- ny
- '-10'
- by
- my
- '-11'
- '-12'
- '-13'
- py
- '-14'
- '-15'
- v
- '10'
- '-16'
- '-17'
- '11'
- '-21'
- '-20'
- '12'
- '-19'
- '13'
- '-18'
- '14'
- dy
- '15'
- ty
- '-22'
- '16'
- '18'
- '19'
- '17'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: jaconv
g2p: pyopenjtalk_accent_with_pause
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
global_channels: -1
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 85
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202207'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
shaoyu17/my_awesome_model
|
shaoyu17
|
distilbert
| 56 | 49 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,527 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8597
- F1: 0.5171
- Precision: 0.5205
- Recall: 0.52
- Accuracy: 0.52
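A minimal usage sketch (hedged: the repo id is taken from this card, and the label names depend on the training config, which is not documented here):
```python
from transformers import pipeline

# Hedged sketch; label names (e.g. LABEL_0 / LABEL_1) depend on the undocumented training config
classifier = pipeline("text-classification", model="shaoyu17/my_awesome_model")
print(classifier("I really enjoyed this!"))
```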
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 0.6451 | 1.0 | 752 | 0.7708 | 0.4699 | 0.5047 | 0.5035 | 0.5035 |
| 0.5828 | 2.0 | 1504 | 0.7702 | 0.5101 | 0.5106 | 0.5106 | 0.5106 |
| 0.5139 | 3.0 | 2256 | 0.8597 | 0.5171 | 0.5205 | 0.52 | 0.52 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQPN_p10_e0.25-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,990 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p10_e0.25.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p10_e0.25]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p10_e0.25 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.25-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.25-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.25-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p10_e0.25 --start-policy-f 10000 --end-policy-f 1000 --evaluation-fraction 0.25 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.25,
'exp_name': 'DQPN_p10_e0.25',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 10000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Someman/gpt2-nepali
|
Someman
|
gpt2
| 14 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,247 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-nepali
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5058
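A minimal generation sketch (hedged: the repo id is taken from this card; the prompt and sampling settings below are arbitrary illustrative choices, and the training corpus is not documented):
```python
from transformers import pipeline

# Hedged sketch; prompt and sampling settings are illustrative only
generator = pipeline("text-generation", model="Someman/gpt2-nepali")
print(generator("नेपाल", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```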
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9488 | 0.68 | 5000 | 1.5058 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384
|
timm
| null | 4 | 40 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k', 'laion-2b']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 24,605 |
# Model card for convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384
A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-1k in `timm` by Ross Wightman.
Please see the related OpenCLIP model cards for more details on the pretraining:
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 200.1
- GMACs: 101.1
- Activations (M): 126.7
- Image size: 384 x 384
- **Papers:**
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for convnext_base:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384)|87.870|98.452|384 |200.13 |101.11 |126.74 |197.92 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
### By Throughput (samples / sec)
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384)|87.870 |98.452 |384 |200.13 |101.11 |126.74 |197.92 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
Sygil/Sygil-Muse
|
Sygil
| null | 6 | 0 | null | 6 |
text-to-image
| false | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'sygil-devs', 'Muse', 'Sygil-Muse']
| false | true | true | 2,569 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is based on [Muse](https://muse-model.github.io/) and trained using [`lucidrains/muse-maskgit-pytorch`](https://github.com/lucidrains/muse-maskgit-pytorch).
# Model Details
This is a new model, based on [Muse](https://muse-model.github.io/), trained from scratch on the [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset), with the big advantage of allowing the use of multiple namespaces (labeled tags) to control various parts of the final generation.
The use of namespaces (e.g. “species:seal” or “studio:dc”) stops the model from misinterpreting a seal as the singer Seal, or DC Comics as Washington DC.
Note: As of right now, only the first VAE has been trained; we still need to train the Base and Super Resolution VAE for the model to be usable.
If you find our work useful, please consider supporting us on [OpenCollective](https://opencollective.com/sygil_dev)!
This model is still in its infancy and it's meant to be constantly updated and trained with more and more data as time goes by, so feel free to give us feedback on our [Discord Server](https://discord.gg/UjXFsf6mTu) or on the discussions section on huggingface. We plan to improve it with more, better tags in the future, so any help is always welcome.
[](https://discord.gg/UjXFsf6mTu)
## Available Checkpoints:
- #### Stable:
- No stable version available right now.
- #### Beta:
- [vae.612000.pt](https://huggingface.co/Sygil/Sygil-Muse/blob/main/vae.612000.pt): Trained from scratch for 612K steps
Note: Checkpoints under the Beta section are updated daily, or at least 3-4 times a week; each update is usually the equivalent of 1-2 training sessions.
They stay in Beta until they are stable enough to be moved into a proper release, usually every 1-2 weeks.
While the beta checkpoints can be used as they are, only the latest version is kept in the repo; older checkpoints are removed when a new one is uploaded to keep the repo clean.
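A minimal sketch for fetching the current beta checkpoint with `huggingface_hub` (the filename is taken from the list above and will change as old checkpoints are replaced):
```python
from huggingface_hub import hf_hub_download

# Downloads the beta VAE checkpoint listed above; the filename changes between updates
ckpt_path = hf_hub_download(repo_id="Sygil/Sygil-Muse", filename="vae.612000.pt")
print(ckpt_path)
```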
## Training
**Training Data**:
The model was trained on the following dataset:
- [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset) dataset.
**Hardware and others**
- **Hardware:** 1 x Nvidia RTX 3050 8GB GPU
- **Hours Trained:** NaN.
- **Gradient Accumulations**: 1
- **Batch:** 1
- **Learning Rate:** 3e-4
- **Resolution**: 512 pixels
- **Total Training Steps:** 612,000
|
sayakpaul/segformer-b0-scene-parse-150-lora
|
sayakpaul
|
segformer
| 24 | 0 |
transformers
| 0 | null | true | false | false |
other
| null |
['scene_parse_150']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 975 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-lora
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
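A hedged loading sketch with PEFT; it assumes the adapter was saved in PEFT format and that the head was configured for the 150 SceneParse classes, neither of which is documented in this card:
```python
from transformers import SegformerForSemanticSegmentation
from peft import PeftModel

# Assumptions: base checkpoint and label count are taken from this card's description
base = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=150)
model = PeftModel.from_pretrained(base, "sayakpaul/segformer-b0-scene-parse-150-lora")
model.eval()
```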
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Dipl0/pepe-diffuser
|
Dipl0
| null | 24 | 404 |
diffusers
| 3 |
text-to-image
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pepe']
| false | true | true | 435 |
# How to use
***To generate an image, you can use the following code:***
```python
import torch
from diffusers import StableDiffusionPipeline
model_path = "Dipl0/pepe-diffuser"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.to("cuda")
prompt = "pepe surfing on the moon"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save(prompt.replace(" ","_") + ".png")
```
|
Sandy317/distilbert-base-uncased-finetuned-squad
|
Sandy317
|
distilbert
| 12 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,262 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1427
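A minimal extractive question-answering sketch (hedged: the repo id is taken from this card; the example question and context are illustrative only):
```python
from transformers import pipeline

# Hedged sketch; any SQuAD-style question/context pair works here
qa = pipeline("question-answering", model="Sandy317/distilbert-base-uncased-finetuned-squad")
result = qa(question="What was the model fine-tuned from?",
            context="The model is a fine-tuned version of distilbert-base-uncased.")
print(result["answer"], result["score"])
```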
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2926 | 1.0 | 2767 | 1.1970 |
| 1.0128 | 2.0 | 5534 | 1.1330 |
| 0.8562 | 3.0 | 8301 | 1.1427 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.9.0+cu111
- Tokenizers 0.13.2
|
lucataco/pokemon-lora-small
|
lucataco
| null | 12 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 364 |
# LoRA text2image fine-tuning - https://huggingface.co/lucataco/pokemon-lora-supersmall
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.



|
antoooooine/poca-SoccerTwos-v2
|
antoooooine
| null | 24 | 399 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 848 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
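To resume training you first need this repo's checkpoint and configuration files locally; a minimal download sketch with `huggingface_hub` (an assumption, since the card itself does not describe the download step):
```python
from huggingface_hub import snapshot_download

# Downloads the .onnx checkpoint and config files from this repo into a local folder
local_dir = snapshot_download(repo_id="antoooooine/poca-SoccerTwos-v2")
print(local_dir)
```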
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: antoooooine/poca-SoccerTwos-v2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
xpmir/monobert
|
xpmir
| null | 6 | 2 |
xpmir
| 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,110 |
# monoBERT trained on MS-Marco
Passage Re-ranking with BERT (Rodrigo Nogueira, Kyunghyun Cho). 2019.
https://arxiv.org/abs/1901.04085
## Using the model
The model can be loaded with [experimaestro IR](https://experimaestro-ir.readthedocs.io/en/latest/)
```py
from xpmir.models import AutoModel
# Model that can be re-used in experiments
model = AutoModel.load_from_hf_hub("xpmir/monobert")
# Use this if you want to actually use the model
model = AutoModel.load_from_hf_hub("xpmir/monobert", as_instance=True)
model.initialize(None)
model.rsv("walgreens store sales average", "The average Walgreens salary ranges from approximately $15,000 per year for Customer Service Associate / Cashier to $179,900 per year for District Manager...")
```
## Results
| Dataset | AP | P@20 | RR | RR@10 | nDCG | nDCG@10 | nDCG@20 |
|----| ---|------|------|------|------|------|------|
| msmarco_dev | 0.3574 | 0.0371 | 0.3624 | 0.3529 | 0.4640 | 0.4147 | 0.4370 |
| trec2019 | 0.4908 | 0.7233 | 0.9368 | 0.9368 | 0.6871 | 0.7046 | 0.6813 |
| trec2020 | 0.4803 | 0.6120 | 0.9380 | 0.9367 | 0.6865 | 0.6963 | 0.6626 |
|
pfunk/Pong-v4-DQPN_p10_pt0.1-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,990 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p10_pt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p10_pt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p10_pt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_pt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_pt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_pt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p10_pt0.1 --start-policy-f 10000 --end-policy-f 10000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 10000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p10_pt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 10000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Woonn/distilbert-base-uncased-finetuned-clinc
|
Woonn
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['clinc_oos']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,481 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 |
| 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 |
| 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_128
|
gokuls
|
mobilebert
| 17 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,601 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5913
- Accuracy: 0.1408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3404 | 1.0 | 435 | 0.5913 | 0.1408 |
| 0.3027 | 2.0 | 870 | 0.5985 | 0.1127 |
| 0.2935 | 3.0 | 1305 | 0.6351 | 0.1127 |
| 0.2884 | 4.0 | 1740 | 0.6013 | 0.0986 |
| 0.2838 | 5.0 | 2175 | 0.6154 | 0.0986 |
| 0.2788 | 6.0 | 2610 | 0.6608 | 0.0845 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ipqhjjybj/alex
|
ipqhjjybj
| null | 19 | 1 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 415 |
### alex Dreambooth model trained by ipqhjjybj with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
pfunk/Pong-v4-DQPN_p30_pt0.1_tt0.1-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,038 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p30_pt0.1_tt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p30_pt0.1_tt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p30_pt0.1_tt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1_tt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1_tt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1_tt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p30_pt0.1_tt0.1 --start-policy-f 30000 --end-policy-f 30000 --evaluation-fraction 1.00 --target-tau 0.1 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 30000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p30_pt0.1_tt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 30000,
'target_network_frequency': 1000,
'target_tau': 0.1,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
lukee/a2c-AntBulletEnv-v0
|
lukee
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
eshwarprasadS/ppo-Huggy
|
eshwarprasadS
| null | 32 | 8 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 824 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: eshwarprasadS/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Hazzzardous/RWKV-8Bit
|
Hazzzardous
| null | 4 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 703 |
## Example usage
```
from rwkvstic.load import RWKV
# Load the model (supports full path, relative path, and remote paths)
model = RWKV(
"https://huggingface.co/Hazzzardous/RWKV-8Bit/resolve/main/RWKV-4-Pile-7B-Instruct.pqth"
)
model.loadContext(newctx=f"Q: who is Jim Butcher?\n\nA:")
output = model.forward(number=100)["output"]
print(output)
# Q: who is Jim Butcher?
# A: Jim Butcher is a very popular American author of fantasy novels. He’s known for the Dresden Files series of novels.<|endoftext|>
```
## More details here
https://pypi.org/project/rwkvstic/
## Run example notebook
https://colab.research.google.com/github/harrisonvanderbyl/rwkvstic/blob/master/notebooks/chatbot.ipynb
|
UtopiansRareTruth/a2c-AntBulletEnv-v0
|
UtopiansRareTruth
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
rim0/dreambox-mix
|
rim0
| null | 31 | 0 | null | 15 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en', 'ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Stable Diffusion', 'text-to-image']
| false | true | true | 4,238 |
# dreamboxmix
dreamboxmix is a model merged from [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs), [pastel-mix](https://huggingface.co/andite/pastel-mix), [Gf_style2](https://huggingface.co/xiaolxl/Gf_style2), [ACertainThing](https://huggingface.co/JosephusCheung/ACertainThing), [Counterfeit-V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5), 厚涂-lastmodel, and lastmodel-msw.
----
# About Selecting a Model
- [About dreamboxmix-A](#about-dreamboxmix-a)
- [About dreamboxmix-O](#about-dreamboxmix-o)
- [About dreamboxmix-P](#about-dreamboxmix-p)
- [xy grid](#xy-grid)
- [Updates/2023.2.9](#xy-grid)
# About dreamboxmix-A
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/7%20(2).png>
The output of dreamboxmix-A leans towards AbyssOrangeMix2, and its rendering of texture is more realistic than the other two models.
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/2%20(1).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/3%20(1).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/4%20(1).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/5%20(1).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/6%20(1).png>
# About dreamboxmix-O
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/7%20(3).png>
dreamboxmix-O is (arguably) slightly better than the other two models at rendering light and shadow, but it also produces bad images more often.
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/2%20(2).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/3%20(2).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/4%20(2).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/5%20(2).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/6%20(2).png>
# About dreamboxmix-P
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/7%20(1).png>
The output of dreamboxmix-P leans towards pastel-mix, which makes it better suited for colorful illustrations.
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/2%20(3).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/3%20(3).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/4%20(3).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/5%20(3).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/6%20(3).png>
# xy grid
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/1%20(1).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/1%20(2).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/1%20(3).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/1%20(4).png>
<img src=https://huggingface.co/rim0/dreambox-mix/resolve/main/images/1%20(5).png>
# Updates
Uploaded a smaller fp16 version of the model.
|
Mayhem50/sgpt-bloom-560m-nli-v3
|
Mayhem50
|
bloom
| 12 | 25 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,749 |
# Mayhem50/sgpt-bloom-560m-nli-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Mayhem50/sgpt-bloom-560m-nli-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Mayhem50/sgpt-bloom-560m-nli-v3')
model = AutoModel.from_pretrained('Mayhem50/sgpt-bloom-560m-nli-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Mayhem50/sgpt-bloom-560m-nli-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3076 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 150, 'do_lower_case': False}) with Transformer model: BloomModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ahjim0m0/ppo-LunarLander-v2
|
ahjim0m0
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
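A hedged loading sketch for this checkpoint; the filename is an assumption (check the repo's file list), and the algorithm class follows the PPO label on this card:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust it to whatever .zip this repo actually contains
checkpoint = load_from_hub(repo_id="ahjim0m0/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
print(model.policy)
```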
|