repo_id (string) | author (string) | model_type (string) | files_per_repo (int64) | downloads_30d (int64) | library (string) | likes (int64) | pipeline (string) | pytorch (bool) | tensorflow (bool) | jax (bool) | license (string) | languages (string) | datasets (string) | co2 (string) | prs_count (int64) | prs_open (int64) | prs_merged (int64) | prs_closed (int64) | discussions_count (int64) | discussions_open (int64) | discussions_closed (int64) | tags (string) | has_model_index (bool) | has_metadata (bool) | has_text (bool) | text_length (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dfm794/poca-SoccerTwos-2_6_3-l | dfm794 | null | 51 | 234 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos'] | false | true | true | 848 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: dfm794/poca-SoccerTwos-2_6_3-l
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
hectorjelly/ppo-SnowballTarge2 | hectorjelly | null | 20 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget'] | false | true | true | 858 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: hectorjelly/ppo-SnowballTarge2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
tanluuuuuuu/xlm-roberta-base-finetuned-panx-de | tanluuuuuuu | xlm-roberta | 11 | 2 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
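The card does not include a usage snippet; the following is a minimal inference sketch (not part of the original card) using the Transformers `pipeline` API, assuming the checkpoint loads as a standard token-classification model. The German example sentence is only illustrative.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a NER pipeline; aggregation merges word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="tanluuuuuuu/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```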
|
jrnold/poca-SoccerTwos | jrnold | null | 21 | 228 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos'] | false | true | true | 840 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: jrnold/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
UBC-NLP/ArOCR-handwritting-v2 | UBC-NLP | vision-encoder-decoder | 26 | 17 | transformers | 0 | image-to-text | true | false | false | null | ['ar'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 15,265 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlineKhatt-roberta_ar_OnlineKhatt-swinv2_1024_OnlineKhatt
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4793
- Cer: 0.1093
- Wer: 0.3908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:------:|:---------------:|:------:|
| 4.3383 | 1.0 | 106 | 1.5838 | 3.8030 | 1.0127 |
| 3.7281 | 2.0 | 212 | 0.9049 | 3.6247 | 1.0 |
| 3.6305 | 3.0 | 318 | 1.0094 | 3.5995 | 1.0807 |
| 3.6081 | 4.0 | 424 | 0.9963 | 3.5752 | 1.6203 |
| 3.595 | 5.0 | 530 | 0.9365 | 3.5681 | 1.2421 |
| 3.5771 | 6.0 | 636 | 0.8668 | 3.6008 | 1.0744 |
| 3.5662 | 7.0 | 742 | 0.9667 | 3.5516 | 1.1218 |
| 3.543 | 8.0 | 848 | 1.0367 | 3.5318 | 1.4114 |
| 3.533 | 9.0 | 954 | 1.0342 | 3.5206 | 1.2310 |
| 3.5178 | 10.0 | 1060 | 1.0139 | 3.5008 | 1.3671 |
| 3.417 | 11.0 | 1166 | 0.9012 | 3.0606 | 1.1282 |
| 3.0068 | 12.0 | 1272 | 0.8443 | 2.9790 | 1.0807 |
| 2.9719 | 13.0 | 1378 | 0.8540 | 2.9684 | 1.1297 |
| 2.9495 | 14.0 | 1484 | 0.8002 | 2.9208 | 1.0585 |
| 2.9248 | 15.0 | 1590 | 0.8210 | 2.9132 | 1.0491 |
| 2.8881 | 16.0 | 1696 | 0.7894 | 2.8419 | 1.0997 |
| 2.8468 | 17.0 | 1802 | 0.7865 | 2.7966 | 1.0443 |
| 2.8046 | 18.0 | 1908 | 0.7905 | 2.7439 | 1.1899 |
| 2.7629 | 19.0 | 2014 | 0.7820 | 2.7139 | 1.1108 |
| 2.7255 | 20.0 | 2120 | 0.7543 | 2.6422 | 1.0807 |
| 2.6983 | 21.0 | 2226 | 0.7586 | 2.6228 | 1.1203 |
| 2.6686 | 22.0 | 2332 | 0.7398 | 2.5820 | 1.0886 |
| 2.6459 | 23.0 | 2438 | 0.7370 | 2.5630 | 1.1108 |
| 2.631 | 24.0 | 2544 | 0.7290 | 2.5498 | 1.1092 |
| 2.6147 | 25.0 | 2650 | 0.7350 | 2.5423 | 1.0997 |
| 2.6749 | 26.0 | 2756 | 2.5582 | 0.7481 | 1.1472 |
| 2.6129 | 27.0 | 2862 | 2.5293 | 0.7629 | 1.1044 |
| 2.5635 | 28.0 | 2968 | 2.4655 | 0.7469 | 1.1282 |
| 2.5046 | 29.0 | 3074 | 2.4866 | 0.7731 | 1.1297 |
| 2.4784 | 30.0 | 3180 | 2.3188 | 0.6769 | 1.0791 |
| 2.4141 | 31.0 | 3286 | 2.2582 | 0.6553 | 1.0285 |
| 2.3752 | 32.0 | 3392 | 2.3374 | 0.6724 | 1.0475 |
| 2.3431 | 33.0 | 3498 | 2.2132 | 0.6385 | 1.0633 |
| 2.2754 | 34.0 | 3604 | 2.1717 | 0.6596 | 1.0601 |
| 2.2351 | 35.0 | 3710 | 2.0753 | 0.6211 | 1.0712 |
| 2.1843 | 36.0 | 3816 | 2.0063 | 0.5995 | 1.0380 |
| 2.1618 | 37.0 | 3922 | 2.0081 | 0.5767 | 1.0111 |
| 2.0953 | 38.0 | 4028 | 1.9858 | 0.5582 | 0.9953 |
| 2.0262 | 39.0 | 4134 | 1.9178 | 0.5480 | 1.0396 |
| 2.0036 | 40.0 | 4240 | 1.7771 | 0.5354 | 1.0032 |
| 1.9276 | 41.0 | 4346 | 1.6884 | 0.5147 | 0.9763 |
| 1.8669 | 42.0 | 4452 | 1.6266 | 0.4822 | 0.9272 |
| 1.7455 | 43.0 | 4558 | 1.6248 | 0.4825 | 0.9367 |
| 1.7031 | 44.0 | 4664 | 1.5797 | 0.4483 | 0.9193 |
| 1.6212 | 45.0 | 4770 | 1.4812 | 0.4446 | 0.8972 |
| 1.6112 | 46.0 | 4876 | 1.5334 | 0.4626 | 0.9098 |
| 1.5717 | 47.0 | 4982 | 1.3838 | 0.4426 | 0.9066 |
| 1.5055 | 48.0 | 5088 | 1.3911 | 0.4088 | 0.8608 |
| 1.4894 | 49.0 | 5194 | 1.5356 | 0.4221 | 0.8623 |
| 1.42 | 50.0 | 5300 | 1.3702 | 0.3925 | 0.8513 |
| 1.3449 | 51.0 | 5406 | 1.3309 | 0.3701 | 0.8434 |
| 1.2991 | 52.0 | 5512 | 1.2176 | 0.3763 | 0.8544 |
| 1.293 | 53.0 | 5618 | 1.3637 | 0.3581 | 0.8228 |
| 1.2446 | 54.0 | 5724 | 1.2283 | 0.3558 | 0.8054 |
| 1.1887 | 55.0 | 5830 | 1.1690 | 0.3459 | 0.8038 |
| 1.1893 | 56.0 | 5936 | 1.2391 | 0.3328 | 0.7959 |
| 1.1188 | 57.0 | 6042 | 1.0593 | 0.3222 | 0.7880 |
| 1.0648 | 58.0 | 6148 | 1.0447 | 0.3251 | 0.7816 |
| 1.0341 | 59.0 | 6254 | 0.9521 | 0.3026 | 0.7737 |
| 0.9995 | 60.0 | 6360 | 0.9362 | 0.2787 | 0.7358 |
| 0.9522 | 61.0 | 6466 | 0.9554 | 0.2713 | 0.7184 |
| 0.9121 | 62.0 | 6572 | 0.8750 | 0.2699 | 0.7168 |
| 0.8801 | 63.0 | 6678 | 0.8787 | 0.2670 | 0.7120 |
| 0.8557 | 64.0 | 6784 | 0.8498 | 0.2440 | 0.6756 |
| 0.8252 | 65.0 | 6890 | 0.8091 | 0.2605 | 0.6930 |
| 0.7913 | 66.0 | 6996 | 0.8008 | 0.2542 | 0.6946 |
| 0.7681 | 67.0 | 7102 | 0.8333 | 0.2431 | 0.6867 |
| 0.7617 | 68.0 | 7208 | 0.7744 | 0.2465 | 0.7041 |
| 0.7121 | 69.0 | 7314 | 0.7188 | 0.2331 | 0.6566 |
| 0.7123 | 70.0 | 7420 | 0.7451 | 0.2300 | 0.6582 |
| 0.6756 | 71.0 | 7526 | 0.6943 | 0.2246 | 0.6456 |
| 0.6525 | 72.0 | 7632 | 0.8034 | 0.2155 | 0.6392 |
| 0.6475 | 73.0 | 7738 | 0.6815 | 0.2135 | 0.6060 |
| 0.6071 | 74.0 | 7844 | 0.6793 | 0.2078 | 0.6234 |
| 0.591 | 75.0 | 7950 | 0.6706 | 0.2189 | 0.6218 |
| 0.5768 | 76.0 | 8056 | 0.7773 | 0.1941 | 0.5791 |
| 0.5588 | 77.0 | 8162 | 0.6473 | 0.2092 | 0.6440 |
| 0.5513 | 78.0 | 8268 | 0.6667 | 0.1876 | 0.5886 |
| 0.5234 | 79.0 | 8374 | 0.6126 | 0.1825 | 0.5665 |
| 0.4976 | 80.0 | 8480 | 0.6168 | 0.1847 | 0.5807 |
| 0.4795 | 81.0 | 8586 | 0.5837 | 0.1816 | 0.5759 |
| 0.4722 | 82.0 | 8692 | 0.6051 | 0.1865 | 0.5696 |
| 0.4463 | 83.0 | 8798 | 0.5976 | 0.1782 | 0.5633 |
| 0.44 | 84.0 | 8904 | 0.5775 | 0.1751 | 0.5617 |
| 0.4192 | 85.0 | 9010 | 0.5902 | 0.1734 | 0.5411 |
| 0.4093 | 86.0 | 9116 | 0.5591 | 0.1705 | 0.5411 |
| 0.3961 | 87.0 | 9222 | 0.5794 | 0.1765 | 0.5538 |
| 0.3793 | 88.0 | 9328 | 0.5513 | 0.1682 | 0.5491 |
| 0.3715 | 89.0 | 9434 | 0.5567 | 0.1640 | 0.5237 |
| 0.3556 | 90.0 | 9540 | 0.5480 | 0.1549 | 0.5047 |
| 0.3454 | 91.0 | 9646 | 0.5910 | 0.1637 | 0.5332 |
| 0.3395 | 92.0 | 9752 | 0.5943 | 0.1600 | 0.5095 |
| 0.3236 | 93.0 | 9858 | 0.5951 | 0.1520 | 0.5016 |
| 0.3165 | 94.0 | 9964 | 0.5521 | 0.1549 | 0.5095 |
| 0.2995 | 95.0 | 10070 | 0.5381 | 0.1631 | 0.5222 |
| 0.2917 | 96.0 | 10176 | 0.5067 | 0.1432 | 0.4842 |
| 0.2847 | 97.0 | 10282 | 0.5459 | 0.1526 | 0.4937 |
| 0.2719 | 98.0 | 10388 | 0.5260 | 0.1452 | 0.4953 |
| 0.2648 | 99.0 | 10494 | 0.5386 | 0.1383 | 0.4684 |
| 0.2529 | 100.0 | 10600 | 0.5313 | 0.1514 | 0.5 |
| 0.2522 | 101.0 | 10706 | 0.5077 | 0.1497 | 0.4858 |
| 0.2424 | 102.0 | 10812 | 0.5622 | 0.1398 | 0.4684 |
| 0.2334 | 103.0 | 10918 | 0.5350 | 0.1429 | 0.4873 |
| 0.2266 | 104.0 | 11024 | 0.5214 | 0.1378 | 0.4810 |
| 0.2182 | 105.0 | 11130 | 0.5040 | 0.1386 | 0.4747 |
| 0.2143 | 106.0 | 11236 | 0.5644 | 0.1406 | 0.4810 |
| 0.2094 | 107.0 | 11342 | 0.5079 | 0.1466 | 0.5 |
| 0.1945 | 108.0 | 11448 | 0.5311 | 0.1358 | 0.4731 |
| 0.1989 | 109.0 | 11554 | 0.5300 | 0.1389 | 0.4905 |
| 0.1942 | 110.0 | 11660 | 0.5337 | 0.1369 | 0.4826 |
| 0.1856 | 111.0 | 11766 | 0.4905 | 0.1364 | 0.4763 |
| 0.1842 | 112.0 | 11872 | 0.5104 | 0.1381 | 0.4794 |
| 0.1789 | 113.0 | 11978 | 0.4859 | 0.1366 | 0.4652 |
| 0.1702 | 114.0 | 12084 | 0.4777 | 0.1307 | 0.4715 |
| 0.1701 | 115.0 | 12190 | 0.4896 | 0.1295 | 0.4478 |
| 0.1638 | 116.0 | 12296 | 0.5458 | 0.1403 | 0.4715 |
| 0.1595 | 117.0 | 12402 | 0.5131 | 0.1361 | 0.4747 |
| 0.1544 | 118.0 | 12508 | 0.5148 | 0.1341 | 0.4589 |
| 0.1496 | 119.0 | 12614 | 0.4995 | 0.1312 | 0.4525 |
| 0.1513 | 120.0 | 12720 | 0.5037 | 0.1403 | 0.4684 |
| 0.145 | 121.0 | 12826 | 0.4896 | 0.1301 | 0.4573 |
| 0.1386 | 122.0 | 12932 | 0.5327 | 0.1327 | 0.4636 |
| 0.1374 | 123.0 | 13038 | 0.5229 | 0.1307 | 0.4399 |
| 0.139 | 124.0 | 13144 | 0.4882 | 0.1324 | 0.4620 |
| 0.1359 | 125.0 | 13250 | 0.4887 | 0.1284 | 0.4494 |
| 0.1304 | 126.0 | 13356 | 0.4678 | 0.1261 | 0.4541 |
| 0.1244 | 127.0 | 13462 | 0.4879 | 0.1264 | 0.4351 |
| 0.1282 | 128.0 | 13568 | 0.4782 | 0.1261 | 0.4320 |
| 0.1183 | 129.0 | 13674 | 0.5093 | 0.1227 | 0.4383 |
| 0.1213 | 130.0 | 13780 | 0.4804 | 0.1258 | 0.4525 |
| 0.1159 | 131.0 | 13886 | 0.4890 | 0.1264 | 0.4462 |
| 0.1139 | 132.0 | 13992 | 0.4912 | 0.1267 | 0.4335 |
| 0.1099 | 133.0 | 14098 | 0.5153 | 0.1241 | 0.4415 |
| 0.1134 | 134.0 | 14204 | 0.5001 | 0.1233 | 0.4193 |
| 0.1074 | 135.0 | 14310 | 0.4912 | 0.1198 | 0.4225 |
| 0.1006 | 136.0 | 14416 | 0.4858 | 0.1241 | 0.4335 |
| 0.101 | 137.0 | 14522 | 0.4895 | 0.1227 | 0.4320 |
| 0.0988 | 138.0 | 14628 | 0.4855 | 0.1292 | 0.4430 |
| 0.0995 | 139.0 | 14734 | 0.4747 | 0.1233 | 0.4272 |
| 0.0963 | 140.0 | 14840 | 0.4784 | 0.1272 | 0.4446 |
| 0.0966 | 141.0 | 14946 | 0.4826 | 0.1184 | 0.4146 |
| 0.0949 | 142.0 | 15052 | 0.4969 | 0.1235 | 0.4288 |
| 0.0913 | 143.0 | 15158 | 0.4732 | 0.1233 | 0.4288 |
| 0.0883 | 144.0 | 15264 | 0.5287 | 0.1252 | 0.4383 |
| 0.0898 | 145.0 | 15370 | 0.4946 | 0.1221 | 0.4304 |
| 0.0902 | 146.0 | 15476 | 0.4894 | 0.1233 | 0.4415 |
| 0.0884 | 147.0 | 15582 | 0.4750 | 0.1221 | 0.4256 |
| 0.0861 | 148.0 | 15688 | 0.4640 | 0.1201 | 0.4098 |
| 0.0799 | 149.0 | 15794 | 0.4692 | 0.1210 | 0.4225 |
| 0.0841 | 150.0 | 15900 | 0.4575 | 0.1250 | 0.4415 |
| 0.0828 | 151.0 | 16006 | 0.5040 | 0.1196 | 0.4114 |
| 0.0827 | 152.0 | 16112 | 0.4703 | 0.1235 | 0.4241 |
| 0.0785 | 153.0 | 16218 | 0.4681 | 0.1201 | 0.4225 |
| 0.078 | 154.0 | 16324 | 0.4794 | 0.1224 | 0.4241 |
| 0.0745 | 155.0 | 16430 | 0.4646 | 0.1207 | 0.4193 |
| 0.0759 | 156.0 | 16536 | 0.4819 | 0.1176 | 0.4082 |
| 0.076 | 157.0 | 16642 | 0.5017 | 0.1161 | 0.4035 |
| 0.0731 | 158.0 | 16748 | 0.4776 | 0.1170 | 0.4082 |
| 0.0726 | 159.0 | 16854 | 0.4798 | 0.1207 | 0.4288 |
| 0.0721 | 160.0 | 16960 | 0.5159 | 0.1178 | 0.4098 |
| 0.0694 | 161.0 | 17066 | 0.4686 | 0.1215 | 0.4177 |
| 0.0668 | 162.0 | 17172 | 0.4924 | 0.1196 | 0.4035 |
| 0.0677 | 163.0 | 17278 | 0.4899 | 0.1198 | 0.4114 |
| 0.0658 | 164.0 | 17384 | 0.4691 | 0.1215 | 0.4193 |
| 0.0629 | 165.0 | 17490 | 0.4956 | 0.1159 | 0.4003 |
| 0.0641 | 166.0 | 17596 | 0.4686 | 0.1119 | 0.4035 |
| 0.063 | 167.0 | 17702 | 0.4918 | 0.1150 | 0.3940 |
| 0.0622 | 168.0 | 17808 | 0.4633 | 0.1187 | 0.4035 |
| 0.0616 | 169.0 | 17914 | 0.4855 | 0.1198 | 0.4177 |
| 0.0644 | 170.0 | 18020 | 0.4763 | 0.1153 | 0.4035 |
| 0.0626 | 171.0 | 18126 | 0.4721 | 0.1187 | 0.4177 |
| 0.0598 | 172.0 | 18232 | 0.4763 | 0.1196 | 0.4130 |
| 0.0556 | 173.0 | 18338 | 0.4834 | 0.1204 | 0.4225 |
| 0.0589 | 174.0 | 18444 | 0.4789 | 0.1173 | 0.4130 |
| 0.058 | 175.0 | 18550 | 0.4874 | 0.1176 | 0.4066 |
| 0.057 | 176.0 | 18656 | 0.4682 | 0.1119 | 0.4003 |
| 0.0532 | 177.0 | 18762 | 0.4779 | 0.1136 | 0.4003 |
| 0.0554 | 178.0 | 18868 | 0.4796 | 0.1119 | 0.3940 |
| 0.0555 | 179.0 | 18974 | 0.4640 | 0.1187 | 0.4130 |
| 0.0558 | 180.0 | 19080 | 0.4756 | 0.1107 | 0.3924 |
| 0.0544 | 181.0 | 19186 | 0.4768 | 0.1113 | 0.3972 |
| 0.0563 | 182.0 | 19292 | 0.4632 | 0.1110 | 0.4019 |
| 0.0524 | 183.0 | 19398 | 0.4744 | 0.1130 | 0.4066 |
| 0.0509 | 184.0 | 19504 | 0.4670 | 0.1139 | 0.4035 |
| 0.0513 | 185.0 | 19610 | 0.4775 | 0.1124 | 0.3908 |
| 0.0512 | 186.0 | 19716 | 0.4669 | 0.1133 | 0.4019 |
| 0.05 | 187.0 | 19822 | 0.4625 | 0.1150 | 0.4003 |
| 0.0475 | 188.0 | 19928 | 0.4843 | 0.1139 | 0.3908 |
| 0.0505 | 189.0 | 20034 | 0.4674 | 0.1144 | 0.3972 |
| 0.0483 | 190.0 | 20140 | 0.4793 | 0.1093 | 0.3908 |
| 0.0497 | 191.0 | 20246 | 0.4608 | 0.1110 | 0.3956 |
| 0.0519 | 192.0 | 20352 | 0.4755 | 0.1107 | 0.3908 |
| 0.0476 | 193.0 | 20458 | 0.4721 | 0.1104 | 0.3987 |
| 0.0484 | 194.0 | 20564 | 0.4666 | 0.1116 | 0.3972 |
| 0.0476 | 195.0 | 20670 | 0.4717 | 0.1144 | 0.4035 |
| 0.0485 | 196.0 | 20776 | 0.4663 | 0.1161 | 0.4051 |
| 0.0444 | 197.0 | 20882 | 0.4660 | 0.1156 | 0.4035 |
| 0.0474 | 198.0 | 20988 | 0.4745 | 0.1107 | 0.3940 |
| 0.046 | 199.0 | 21094 | 0.4690 | 0.1113 | 0.4003 |
| 0.0473 | 200.0 | 21200 | 0.4693 | 0.1124 | 0.3987 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.7.1
- Tokenizers 0.11.6
|
pozman/distilbert-base-uncased-finetuned-squad | pozman | distilbert | 10 | 2 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,284 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2224 | 1.0 | 5533 | 1.1604 |
| 0.9577 | 2.0 | 11066 | 1.1244 |
| 0.7436 | 3.0 | 16599 | 1.1519 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
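The card omits a usage example; here is a minimal extractive-QA sketch with the Transformers `pipeline` API (not part of the original card; question and context are only illustrative).
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint
qa = pipeline("question-answering", model="pozman/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What does SQuAD stand for?",
    context="The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset.",
)
print(result["answer"], result["score"])
```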
|
jayeshvpatil/ppo-LunarLander-v2 | jayeshvpatil | null | 12 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
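One possible way to fill in the TODO above, shown only as a sketch: it assumes the checkpoint is stored under the usual `huggingface_sb3` naming convention (`ppo-LunarLander-v2.zip`), which the card does not confirm, and that `gymnasium` with Box2D is installed.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is an assumption) and load it
checkpoint = load_from_hub(repo_id="jayeshvpatil/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out one greedy episode
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```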
|
camenduru/Wav2Lip | camenduru | null | 50 | 0 | null | 1 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | false | true | 11,770 |
# **Wav2Lip**: *Accurately Lip-syncing Videos In The Wild*
For commercial requests, please contact us at [email protected] or [email protected]. We have an HD model ready that can be used commercially.
This code is part of the paper: _A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild_ published at ACM Multimedia 2020.
[](https://paperswithcode.com/sota/lip-sync-on-lrs2?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrs3?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrw?p=a-lip-sync-expert-is-all-you-need-for-speech)
|📑 Original Paper|📰 Project Page|🌀 Demo|⚡ Live Testing|📔 Colab Notebook
|:-:|:-:|:-:|:-:|:-:|
[Paper](http://arxiv.org/abs/2008.10010) | [Project Page](http://cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild/) | [Demo Video](https://youtu.be/0fXaDCZNOJc) | [Interactive Demo](https://bhaasha.iiit.ac.in/lipsync) | [Colab Notebook](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing) /[Updated Collab Notebook](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH)
<img src="https://drive.google.com/uc?export=view&id=1Wn0hPmpo4GRbCIJR8Tf20Akzdi1qjjG9"/>
----------
**Highlights**
----------
- Weights of the visual quality disc have been updated in the README!
- Lip-sync videos to any target speech with high accuracy :100:. Try our [interactive demo](https://bhaasha.iiit.ac.in/lipsync).
- :sparkles: Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available :boom:
- Or, quick-start with the Google Colab Notebook: [Link](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing). Checkpoints and samples are available in a Google Drive [folder](https://drive.google.com/drive/folders/1I-0dNLfFOSFwrfqjNa-SXuwaURHE5K4k?usp=sharing) as well. There is also a [tutorial video](https://www.youtube.com/watch?v=Ic0TBhfuOrA) on this, courtesy of [What Make Art](https://www.youtube.com/channel/UCmGXH-jy0o2CuhqtpxbaQgA). Also, thanks to [Eyal Gruss](https://eyalgruss.com), there is a more accessible [Google Colab notebook](https://j.mp/wav2lip) with more useful features. A tutorial Colab notebook is available at this [link](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH).
- :fire: :fire: Several new, reliable evaluation benchmarks and metrics [[`evaluation/` folder of this repo]](https://github.com/Rudrabha/Wav2Lip/tree/master/evaluation) released. Instructions to calculate the metrics reported in the paper are also present.
--------
**Disclaimer**
--------
All results from this open-source code or our [demo website](https://bhaasha.iiit.ac.in/lipsync) should be used for research/academic/personal purposes only. As the models are trained on the <a href="http://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html">LRS2 dataset</a>, any form of commercial use is strictly prohibited. For commercial requests, please contact us directly!
Prerequisites
-------------
- `Python 3.6`
- ffmpeg: `sudo apt-get install ffmpeg`
- Install necessary packages using `pip install -r requirements.txt`. Alternatively, instructions for using a docker image are provided [here](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668). Have a look at [this comment](https://github.com/Rudrabha/Wav2Lip/issues/131#issuecomment-725478562) and comment on [the gist](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668) if you encounter any issues.
- Face detection [pre-trained model](https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth) should be downloaded to `face_detection/detection/sfd/s3fd.pth`. Alternative [link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/prajwal_k_research_iiit_ac_in/EZsy6qWuivtDnANIG73iHjIBjMSoojcIV0NULXV-yiuiIg?e=qTasa8) if the above does not work.
Getting the weights
----------
| Model | Description | Link to the model |
| :-------------: | :---------------: | :---------------: |
| Wav2Lip | Highly accurate lip-sync | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/Eb3LEzbfuKlJiR600lQWRxgBIY27JZg80f7V9jtMfbNDaQ?e=TBFBVW) |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EdjI7bZlgApMqsVoEUUXpLsBxqXbn5z8VTmoxp55YNDcIA?e=n9ljGW) |
| Expert Discriminator | Weights of the expert discriminator | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQRvmiZg-HRAjvI6zqN9eTEBP74KefynCwPWVmF57l-AYA?e=ZRPHKP) |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQVqH88dTm1HjlK11eNba5gBbn15WMS0B0EZbDBttqrqkg?e=ic0ljo) |
Lip-syncing videos using the pre-trained models (Inference)
-------
You can lip-sync any video to any audio:
```bash
python inference.py --checkpoint_path <ckpt> --face <video.mp4> --audio <an-audio-source>
```
The result is saved (by default) in `results/result_voice.mp4`. You can specify it as an argument, similar to several other available options. The audio source can be any file supported by `FFMPEG` containing audio data: `*.wav`, `*.mp3` or even a video file, from which the code will automatically extract the audio.
##### Tips for better results:
- Experiment with the `--pads` argument to adjust the detected face bounding box. Often leads to improved results. You might need to increase the bottom padding to include the chin region. E.g. `--pads 0 20 0 0`.
- If you see the mouth position dislocated or some weird artifacts such as two mouths, then it can be because of over-smoothing the face detections. Use the `--nosmooth` argument and give another try.
- Experiment with the `--resize_factor` argument, to get a lower resolution video. Why? The models are trained on faces which were at a lower resolution. You might get better, visually pleasing results for 720p videos than for 1080p videos (in many cases, the latter works well too).
- The Wav2Lip model without GAN usually needs more experimenting with the above two to get the most ideal results, and sometimes, can give you a better result as well.
Preparing LRS2 for training
----------
Our models are trained on LRS2. See [here](#training-on-datasets-other-than-lrs2) for a few suggestions regarding training on other datasets.
##### LRS2 dataset folder structure
```
data_root (mvlrs_v1)
├── main, pretrain (we use only main folder in this work)
| ├── list of folders
| │ ├── five-digit numbered video IDs ending with (.mp4)
```
Place the LRS2 filelists (train, val, test) `.txt` files in the `filelists/` folder.
##### Preprocess the dataset for fast training
```bash
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/
```
Additional options such as `batch_size` and the number of GPUs to use in parallel can also be set.
##### Preprocessed LRS2 folder structure
```
preprocessed_root (lrs2_preprocessed)
├── list of folders
| ├── Folders with five-digit numbered video IDs
| │ ├── *.jpg
| │ ├── audio.wav
```
Train!
----------
There are two major steps: (i) Train the expert lip-sync discriminator, (ii) Train the Wav2Lip model(s).
##### Training the expert discriminator
You can download [the pre-trained weights](#getting-the-weights) if you want to skip this step. To train it:
```bash
python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
```
##### Training the Wav2Lip models
You can either train the model without the additional visual quality discriminator (< 1 day of training) or use the discriminator (~2 days). For the former, run:
```bash
python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
```
To train with the visual quality discriminator, you should run `hq_wav2lip_train.py` instead. The arguments for both files are similar. In both cases, you can resume training as well. Look at `python wav2lip_train.py --help` for more details. You can also set additional, less commonly used hyper-parameters at the bottom of the `hparams.py` file.
Training on datasets other than LRS2
------------------------------------
Training on other datasets might require modifications to the code. Please read the following before you raise an issue:
- You might not get good results by training/fine-tuning on a few minutes of a single speaker. This is a separate research problem, to which we do not have a solution yet. Thus, we would most likely not be able to resolve your issue.
- You must train the expert discriminator for your own dataset before training Wav2Lip.
- If it is your own dataset downloaded from the web, in most cases it needs to be sync-corrected.
- Be mindful of the FPS of the videos of your dataset. Changes to FPS would need significant code changes.
- The expert discriminator's eval loss should go down to ~0.25 and the Wav2Lip eval sync loss should go down to ~0.2 to get good results.
When raising an issue on this topic, please let us know that you are aware of all these points.
We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model.
Evaluation
----------
Please check the `evaluation/` folder for the instructions.
License and Citation
----------
This repository can only be used for personal/research/non-commercial purposes. However, for commercial requests, please contact us directly at [email protected] or [email protected]. We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model. Please cite the following paper if you use this repository:
```
@inproceedings{10.1145/3394171.3413532,
author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
title = {A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild},
year = {2020},
isbn = {9781450379885},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3394171.3413532},
doi = {10.1145/3394171.3413532},
booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
pages = {484–492},
numpages = {9},
keywords = {lip sync, talking face generation, video generation},
location = {Seattle, WA, USA},
series = {MM '20}
}
```
Acknowledgements
----------
Parts of the code structure are inspired by this [TTS repository](https://github.com/r9y9/deepvoice3_pytorch). We thank the author for this wonderful code. The code for Face Detection has been taken from the [face_alignment](https://github.com/1adrianb/face-alignment) repository. We thank the authors for releasing their code and models. We thank [zabique](https://github.com/zabique) for the tutorial Colab notebook.
|
jaesun/a2c-AntBulletEnv-v0 | jaesun | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
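As with the previous card, a loading-only sketch for the TODO above; the `a2c-AntBulletEnv-v0.zip` filename is an assumption, and `pybullet_envs` would be needed to actually create the environment.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download and load the trained A2C policy (filename assumed)
checkpoint = load_from_hub(repo_id="jaesun/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```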
|
Jyotiyadav/Bol-1.0 | Jyotiyadav | layoutlmv3 | 12 | 112 | transformers | 0 | token-classification | true | false | false | cc-by-nc-sa-4.0 | null | ['sroie'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,559 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bol-1.0
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0859
- Precision: 0.4109
- Recall: 0.6021
- F1: 0.4885
- Accuracy: 0.7992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.08 | 100 | 0.8091 | 0.235 | 0.2968 | 0.2623 | 0.7453 |
| No log | 4.17 | 200 | 0.6677 | 0.4073 | 0.6147 | 0.4899 | 0.7949 |
| No log | 6.25 | 300 | 0.6157 | 0.4632 | 0.6758 | 0.5497 | 0.8356 |
| No log | 8.33 | 400 | 0.7282 | 0.4379 | 0.6526 | 0.5241 | 0.8100 |
| 0.3872 | 10.42 | 500 | 0.8256 | 0.4089 | 0.6611 | 0.5052 | 0.7927 |
| 0.3872 | 12.5 | 600 | 0.7363 | 0.4711 | 0.6863 | 0.5587 | 0.8358 |
| 0.3872 | 14.58 | 700 | 0.7931 | 0.4579 | 0.6863 | 0.5493 | 0.8283 |
| 0.3872 | 16.67 | 800 | 0.8513 | 0.4553 | 0.6863 | 0.5474 | 0.8197 |
| 0.3872 | 18.75 | 900 | 0.8703 | 0.4553 | 0.6863 | 0.5474 | 0.8197 |
| 0.0068 | 20.83 | 1000 | 0.8905 | 0.4472 | 0.6779 | 0.5389 | 0.8186 |
| 0.0068 | 22.92 | 1100 | 0.8955 | 0.4665 | 0.7032 | 0.5609 | 0.8261 |
| 0.0068 | 25.0 | 1200 | 0.9589 | 0.4392 | 0.6695 | 0.5304 | 0.8089 |
| 0.0068 | 27.08 | 1300 | 0.8998 | 0.4711 | 0.6863 | 0.5587 | 0.8305 |
| 0.0068 | 29.17 | 1400 | 1.0008 | 0.4313 | 0.6611 | 0.5220 | 0.8035 |
| 0.0032 | 31.25 | 1500 | 0.9506 | 0.4448 | 0.6779 | 0.5371 | 0.8175 |
| 0.0032 | 33.33 | 1600 | 0.9497 | 0.4266 | 0.6611 | 0.5186 | 0.8240 |
| 0.0032 | 35.42 | 1700 | 0.9868 | 0.4158 | 0.6442 | 0.5054 | 0.8111 |
| 0.0032 | 37.5 | 1800 | 0.9631 | 0.4358 | 0.6863 | 0.5331 | 0.8240 |
| 0.0032 | 39.58 | 1900 | 1.0170 | 0.4251 | 0.6695 | 0.5200 | 0.8013 |
| 0.0022 | 41.67 | 2000 | 0.7666 | 0.5387 | 0.7032 | 0.6100 | 0.8757 |
| 0.0022 | 43.75 | 2100 | 1.1500 | 0.3907 | 0.6021 | 0.4739 | 0.7852 |
| 0.0022 | 45.83 | 2200 | 1.1211 | 0.3929 | 0.6021 | 0.4755 | 0.7873 |
| 0.0022 | 47.92 | 2300 | 1.1108 | 0.3972 | 0.6021 | 0.4787 | 0.7927 |
| 0.0022 | 50.0 | 2400 | 1.0858 | 0.4062 | 0.6021 | 0.4852 | 0.8013 |
| 0.0018 | 52.08 | 2500 | 1.0859 | 0.4109 | 0.6021 | 0.4885 | 0.7992 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.2.2
- Tokenizers 0.13.2
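The card gives no inference example; the following rough sketch is not part of the original card. It assumes the base `microsoft/layoutlmv3-base` processor (with `pytesseract` installed for built-in OCR) and a local `receipt.png`, neither of which is specified by the card.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Processor from the base model (an assumption); fine-tuned weights from this repo
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("Jyotiyadav/Bol-1.0")

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits

predicted_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```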
|
Isaacp/Reinforce-pixelcopter | Isaacp | null | 6 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class'] | true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jaesun/a2c-PandaReachDense-v2 | jaesun | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
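Again, only a loading sketch under the same filename assumption; `panda-gym` would be needed to create the `PandaReachDense-v2` environment.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download and load the trained A2C policy (filename assumed)
checkpoint = load_from_hub(repo_id="jaesun/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```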
|
espnet/pengcheng_librimix_asr_train_sot_asr_conformer_wavlm_raw_en_char_sp | espnet | null | 18 | 0 | espnet | 0 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['en'] | ['librimix'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 6,935 |
## ESPnet2 ASR model
### `espnet/pengcheng_librimix_asr_train_sot_asr_conformer_wavlm_raw_en_char_sp`
This model was trained by Pengcheng Guo using the librimix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout fe824770250485b77c68e8ca041922b8779b5c94
pip install -e .
cd egs2/librimix/sot_asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_librimix_asr_train_sot_asr_conformer_wavlm_raw_en_char_sp
```
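Not part of the original card: a Python-level inference sketch, assuming the standard ESPnet2 `Speech2Text` interface (with `espnet_model_zoo` installed) and a local 16 kHz `speech.wav`.
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Pull the model from the Hub and build the inference wrapper
speech2text = Speech2Text.from_pretrained(
    "espnet/pengcheng_librimix_asr_train_sot_asr_conformer_wavlm_raw_en_char_sp"
)

speech, rate = soundfile.read("speech.wav")  # expected to match the 16 kHz training frontend
text, *_ = speech2text(speech)[0]            # best hypothesis of the n-best list
print(text)
```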
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Dec 29 13:36:46 CST 2022`
- python version: `3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: ``
- Commit date: ``
## asr_train_sot_asr_conformer_wavlm_raw_en_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|123853|82.9|15.1|2.0|2.4|19.4|97.1|
|decode_sot_asr_model_valid.acc.ave/test|3000|111243|85.1|13.0|1.9|2.1|17.1|96.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|670222|92.2|4.9|2.9|2.7|10.6|97.1|
|decode_sot_asr_model_valid.acc.ave/test|3000|605408|93.2|4.1|2.6|2.3|9.1|96.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tunining/train_sot_asr_conformer_wavlm.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_sot_asr_conformer_wavlm_raw_en_char_sp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 38431
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 6000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- <sc>
- <space>
- E
- T
- A
- O
- N
- I
- H
- S
- R
- D
- L
- U
- M
- C
- W
- F
- G
- Y
- P
- B
- V
- K
- ''''
- X
- J
- Q
- Z
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_local
path_or_url: /home/work_nfs6/pcguo/asr/librimix/hub/wavlm_large.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 128
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d2
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: multi
preprocessor_conf:
speaker_change_symbol:
- <sc>
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
sweaterr/pegasus-samsum | sweaterr | pegasus | 13 | 0 | transformers | 0 | text2text-generation | true | false | false | null | null | ['samsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,257 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6928 | 0.54 | 500 | 1.4812 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
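The card has no usage snippet; a minimal summarization sketch with the Transformers `pipeline` API (not part of the original card; the dialogue is only illustrative):
```python
from transformers import pipeline

# Dialogue summarization with the fine-tuned checkpoint
summarizer = pipeline("summarization", model="sweaterr/pegasus-samsum")

dialogue = (
    "Hannah: Hey, do you have Betty's number?\n"
    "Amanda: Let me check... Sorry, I can't find it.\n"
    "Hannah: Ok, I'll ask Larry then."
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```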
|
YifanPan/bert-finetuned-squad | YifanPan | bert | 12 | 9 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Duskfallcrew/finalfantasiespt1 | Duskfallcrew | null | 22 | 9 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 929 |
### Final Fantasy XIV Part One Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training), using the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
fntsy1 (use this token in your prompt)
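Not part of the original card: a minimal local-inference sketch with `diffusers`, assuming the repo contains standard Stable Diffusion v1-5-style weights and a CUDA device is available.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/finalfantasiespt1", torch_dtype=torch.float16
).to("cuda")

# Remember to include the concept token in the prompt
image = pipe("fntsy1, a heroic warrior in ornate armor, highly detailed").images[0]
image.save("fntsy1_sample.png")
```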
|
yaozeguo/bert-finetuned-squad | yaozeguo | bert | 12 | 11 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jojoUla/bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR100-labelled-30 | jojoUla | bert | 16 | 0 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,788 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR100-labelled-30
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.8321 | 1.0 | 2 | 4.3250 |
| 3.383 | 2.0 | 4 | 2.4023 |
| 1.9548 | 3.0 | 6 | 1.2925 |
| 1.4856 | 4.0 | 8 | 1.5152 |
| 0.9588 | 5.0 | 10 | 1.7731 |
| 1.2668 | 6.0 | 12 | 1.3830 |
| 0.8441 | 7.0 | 14 | 1.9760 |
| 1.0173 | 8.0 | 16 | 1.2364 |
| 0.6814 | 9.0 | 18 | 1.1771 |
| 0.9044 | 10.0 | 20 | 1.4721 |
| 0.6889 | 11.0 | 22 | 0.8518 |
| 0.5845 | 12.0 | 24 | 0.6993 |
| 0.4068 | 13.0 | 26 | 1.1771 |
| 0.5957 | 14.0 | 28 | 0.5895 |
| 0.4277 | 15.0 | 30 | 0.5326 |
| 0.3736 | 16.0 | 32 | 1.0893 |
| 0.413 | 17.0 | 34 | 1.3267 |
| 0.5718 | 18.0 | 36 | 1.0331 |
| 0.3892 | 19.0 | 38 | 1.0793 |
| 0.3913 | 20.0 | 40 | 0.8742 |
| 0.4794 | 21.0 | 42 | 1.1264 |
| 0.4626 | 22.0 | 44 | 1.1857 |
| 0.2683 | 23.0 | 46 | 1.5181 |
| 0.3436 | 24.0 | 48 | 1.4419 |
| 0.3793 | 25.0 | 50 | 1.4198 |
| 0.356 | 26.0 | 52 | 1.1776 |
| 0.2189 | 27.0 | 54 | 0.7166 |
| 0.286 | 28.0 | 56 | 0.7601 |
| 0.3681 | 29.0 | 58 | 1.2592 |
| 0.5858 | 30.0 | 60 | 0.6520 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
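The card lacks a usage example; a minimal fill-mask sketch (not part of the original card; the example sentence is only illustrative):
```python
from transformers import pipeline

# Masked-token prediction with the fine-tuned checkpoint
fill = pipeline(
    "fill-mask",
    model="jojoUla/bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR100-labelled-30",
)

for candidate in fill("The reviewer asked for an additional [MASK] study."):
    print(candidate["token_str"], candidate["score"])
```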
|
Dainong2/bert-finetuned-squad | Dainong2 | bert | 12 | 10 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | null | 10 | 0 | cleanrl | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['UpNDown-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 2,279 |
# (CleanRL) **PPO** Agent Playing **UpNDown-v5**
This is a trained model of a PPO agent playing UpNDown-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id UpNDown-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/UpNDown-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'UpNDown-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Brain22/dqn-SpaceInvadersNoFrameskip-v4 | Brain22 | null | 15 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 2,212 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Brain22 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Brain22 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Brain22
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
juanmi1234/Reinforce-CartPole
|
juanmi1234
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
PecanPi/q-FrozenLake-v1-4x4-noSlippery
|
PecanPi
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 396 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="PecanPi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
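Note that `load_from_hub` here is a helper defined in the course notebooks rather than a function shipped with a library; a minimal sketch of such a helper (assuming the repo stores the model as a pickled dict, as the `q-learning.pkl` filename suggests, and that `gym` is imported separately) could look like this:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table, env_id, etc.) from the Hub and load it.
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```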
|
PecanPi/q-taxi-v3
|
PecanPi
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 363 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="PecanPi/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gyeoldere/DeBERTa-finetuned-SNLI2
|
gyeoldere
|
deberta
| 11 | 2 |
transformers
| 0 | null | true | false | false |
mit
| null |
['snli']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,651 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-finetuned-SNLI2
This model is a fine-tuned version of [gyeoldere/test_trainer](https://huggingface.co/gyeoldere/test_trainer) on the snli dataset.
Test_trainer model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the snli dataset.
This model achieves the following results on the evaluation set:
- NLI accuracy: 0.86
- MLM accuracy: 0.68
## Model description
This model is fine-tuned to perform two tasks simultaneously: an NLI task and an MLM task.
The output vector of DeBERTa is processed through two different fully connected layers to produce the predictions for each task.
I used the head structure introduced in the BERT paper, as implemented in Hugging Face Transformers: DebertaForTokenClassification and DebertaForMaskedLM.
[https://huggingface.co/docs/transformers/index]
Binary cross-entropy loss is used for each class, and the two task losses are added to obtain the final loss:
final_loss = MLM_loss + NLI_loss
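The exact training code is not included here. As a rough sketch of the idea (a shared DeBERTa encoder feeding two task heads whose losses are summed), something like the following could work; the class name and head wiring are illustrative assumptions rather than the card's actual implementation, and plain cross-entropy is used for both heads for brevity:
```python
import torch.nn as nn
from transformers import AutoModel


class DebertaNliMlm(nn.Module):
    """Illustrative multi-task head; not the card's exact implementation."""

    def __init__(self, model_name="microsoft/deberta-base", num_nli_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        self.nli_head = nn.Linear(hidden, num_nli_labels)  # sentence-pair classification
        self.mlm_head = nn.Linear(hidden, vocab)            # per-token vocabulary logits

    def forward(self, input_ids, attention_mask, nli_labels=None, mlm_labels=None):
        states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        nli_logits = self.nli_head(states[:, 0])  # first-token representation
        mlm_logits = self.mlm_head(states)
        loss = None
        if nli_labels is not None and mlm_labels is not None:
            ce = nn.CrossEntropyLoss(ignore_index=-100)
            nli_loss = ce(nli_logits, nli_labels)
            mlm_loss = ce(mlm_logits.view(-1, mlm_logits.size(-1)), mlm_labels.view(-1))
            loss = mlm_loss + nli_loss  # final_loss = MLM_loss + NLI_loss
        return loss, nli_logits, mlm_logits
```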
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQPN_p50_e0.50-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,989 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50_e0.50.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p50_e0.50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p50_e0.50 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.50-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.50-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_e0.50-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p50_e0.50 --start-policy-f 50000 --end-policy-f 1000 --evaluation-fraction 0.50 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.5,
'exp_name': 'DQPN_p50_e0.50',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 50000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
PecanPi/q-taxi-v3-v2
|
PecanPi
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 366 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="PecanPi/q-taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Duskfallcrew/duskfall-s-final-fantasy-pt2
|
Duskfallcrew
| null | 22 | 13 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,081 |
### Duskfall's Final Fantasy Pt2 Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
fantadsk2 (use that on your prompt)
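For a quick test outside the Colab notebook, a minimal `diffusers` sketch along these lines should work, assuming the repo is in the standard diffusers format produced by the Dreambooth training space; the prompt below is just an example, but keep the concept token in it:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/duskfall-s-final-fantasy-pt2", torch_dtype=torch.float16
).to("cuda")

# Include the concept token "fantadsk2" in the prompt, as noted above.
prompt = "fantadsk2, portrait of a crystal mage, detailed fantasy illustration"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("fantadsk2_sample.png")
```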
|
ksing193/t5-small-finetuned-wikisql
|
ksing193
|
t5
| 12 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['wikisql']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,795 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1245
- Rouge2 Precision: 0.8183
- Rouge2 Recall: 0.7262
- Rouge2 Fmeasure: 0.7624
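The card does not include an inference snippet; a minimal sketch with `transformers` is shown below. The `"translate English to SQL: "` prefix is an assumption about how inputs were formatted during fine-tuning, so adjust it if the outputs look off:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ksing193/t5-small-finetuned-wikisql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical task prefix and question; adapt to the actual fine-tuning format.
text = "translate English to SQL: How many heads of the departments are older than 56?"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```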
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1954 | 1.0 | 4049 | 0.1575 | 0.7934 | 0.7033 | 0.7386 |
| 0.1643 | 2.0 | 8098 | 0.1374 | 0.8083 | 0.7169 | 0.7529 |
| 0.1517 | 3.0 | 12147 | 0.1296 | 0.8135 | 0.7221 | 0.7581 |
| 0.1459 | 4.0 | 16196 | 0.1256 | 0.817 | 0.7254 | 0.7614 |
| 0.1414 | 5.0 | 20245 | 0.1245 | 0.8183 | 0.7262 | 0.7624 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
juanmi1234/Reinforce-Pixelcopter-PLE-v0
|
juanmi1234
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
7eu7d7/ML-Danbooru
|
7eu7d7
| null | 6 | 0 | null | 1 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 210 |
| model |
|--------------------------------|
| TResnet-D-FLq_ema_2-40000.ckpt |
| TResnet-D-FLq_ema_4-10000.ckpt |
| TResnet-D-FLq_ema_6-10000.ckpt |
| TResnet-D-FLq_ema_6-30000.ckpt |
|
thanat/mt5-small-finetuned-amazon-en-es
|
thanat
|
mt5
| 9 | 10 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,717 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# thanat/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0061
- Validation Loss: 3.3257
- Epoch: 7
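No usage snippet is given above; a minimal sketch with the `summarization` pipeline follows. Since the checkpoint was trained with Keras, the TensorFlow weights are requested explicitly, and the review text is a made-up example:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="thanat/mt5-small-finetuned-amazon-en-es",
    framework="tf",  # the card reports TensorFlow/Keras training
)

review = (
    "I bought this coffee grinder a month ago. It is quiet, easy to clean, "
    "and the grind size is very consistent. Great value for the price."
)
print(summarizer(review, max_length=30)[0]["summary_text"])
```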
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.6013 | 4.2024 | 0 |
| 5.8556 | 3.7335 | 1 |
| 5.0930 | 3.5494 | 2 |
| 4.6610 | 3.4502 | 3 |
| 4.3874 | 3.4030 | 4 |
| 4.2103 | 3.3568 | 5 |
| 4.0930 | 3.3311 | 6 |
| 4.0061 | 3.3257 | 7 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
csebuetnlp/banglat5_small
|
csebuetnlp
|
t5
| 8 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false | null |
['bn']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 6,147 |
# BanglaT5
This repository contains the pretrained checkpoint of the model **BanglaT5 (small)**. This is a sequence-to-sequence transformer model pretrained with the ["Span Corruption"]() objective. Fine-tuned models using this checkpoint achieve state-of-the-art results on many of the NLG tasks in Bengali.
For finetuning on different downstream tasks such as `Machine Translation`, `Abstractive Text Summarization`, `Question Answering` etc., refer to the scripts in the official GitHub [repository](https://github.com/csebuetnlp/BanglaNLG).
**Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task make sure the text units are normalized using this pipeline before tokenizing to get best results. A basic example is given below:
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer
model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5_small")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5_small", use_fast=False)
input_sentence = ""
input_ids = tokenizer(normalize(input_sentence), return_tensors="pt").input_ids
generated_tokens = model.generate(input_ids)
decoded_tokens = tokenizer.batch_decode(generated_tokens)[0]
print(decoded_tokens)
```
## Benchmarks
* Supervised fine-tuning
| Model | Params | MT (SacreBLEU) | TS (ROUGE-2) | QA (EM/F1) | MD (SacreBLEU-1) | NHG (ROUGE-2) | XLS (ROUGE-2) | BNLG score |
|--------------------|------------|-----------------------|------------------------|-------------------|--------------------|----------------|----------------|---------------|
|[mT5 (base)](https://huggingface.co/google/mt5-base) | 582M | 36.6/22.5 | 10.3 | 59.0/65.3 | 17.5 | 9.6 | 2.7/0.7 | 24.9 |
|[XLM-ProphetNet](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased) | 616M | 23.3/16.4 | 7.8 | 53.0/57.3 | 20.0 | 9.5 | 6.2/2.7 | 21.8 |
|[mBART-50](https://huggingface.co/facebook/mbart-large-50) | 611M | 23.6/16.7 | 10.4 | 53.4/58.9 | 18.5 | 11.2 | 5.4/3.7 | 22.4 |
|[IndicBART](https://huggingface.co/ai4bharat/IndicBART) | 244M | 22.7/13.1 | 8.1 | 53.3/58.8 | 14.8 | 7.9 | 6.3/2.5 | 20.8 |
|[BanglaT5](https://huggingface.co/csebuetnlp/banglat5) | 247M | 38.8/25.2 | 13.7 | 68.5/74.8 | 19.0 | 13.8 | 6.4/4.0 | 29.4 |
The benchmarking datasets are as follows:
* **MT:** **[Machine Translation](https://github.com/csebuetnlp/banglanmt#datasets)**
* **TS:** **[Abstractive Text Summarization](https://huggingface.co/datasets/csebuetnlp/xlsum)**
* **QA:** **[Question Answering](https://huggingface.co/datasets/csebuetnlp/squad_bn)**
* **MD:** **[Multi Turn Dialogue Generation](https://drive.google.com/file/d/1qPmNN6qA4evbh4cD_BDDTCFOwMu4H2JS/view?usp=sharing)**
* **NHG:** **[News Headline Generation](https://huggingface.co/datasets/csebuetnlp/xlsum)**
* **XLS:** **[Cross-lingual Summarization](https://huggingface.co/datasets/csebuetnlp/CrossSum)**
## Citation
If you use this model, please cite the following paper:
```
@article{bhattacharjee2022banglanlg,
author = {Abhik Bhattacharjee and Tahmid Hasan and Wasi Uddin Ahmad and Rifat Shahriyar},
title = {BanglaNLG: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla},
journal = {CoRR},
volume = {abs/2205.11081},
year = {2022},
url = {https://arxiv.org/abs/2205.11081},
eprinttype = {arXiv},
eprint = {2205.11081}
}
```
If you use the normalization module, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
|
cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['DoubleDunk-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,303 |
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5**
This is a trained model of a PPO agent playing DoubleDunk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id DoubleDunk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'DoubleDunk-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
pfunk/Pong-v4-DQPN_p10-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,943 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p10 --start-policy-f 10000 --end-policy-f 10000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 10000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 10000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
SandyML/ddpm-celebahq-finetuned-butterflies-2epochs
|
SandyML
| null | 6 | 0 |
diffusers
| 0 |
unconditional-image-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
| false | true | true | 345 |
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('SandyML/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
Toying/distilbert-base-uncased-finetuned-emotion
|
Toying
|
distilbert
| 12 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2107
- Accuracy: 0.9265
- F1: 0.9265
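For reference, a minimal usage sketch with the `text-classification` pipeline (the input sentence is a made-up example):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Toying/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this worked, I'm thrilled!"))
```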
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.811 | 1.0 | 250 | 0.3073 | 0.905 | 0.9023 |
| 0.2402 | 2.0 | 500 | 0.2107 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
dfm794/poca-SoccerTwos-2-l
|
dfm794
| null | 35 | 215 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 844 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: dfm794/poca-SoccerTwos-2-l
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ransaka/dqn-SpaceInvadersNoFrameskip-v4
|
Ransaka
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,214 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ransaka -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ransaka -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ransaka
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Duskfallcrew/finalfantasypt3
|
Duskfallcrew
| null | 22 | 4 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,311 |
### Duskfall's Final of Fantasea Pt 3 Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
ftnadusk3 (use that on your prompt)
|
iamannika/bert-finetuned-squad
|
iamannika
|
bert
| 12 | 11 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
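A minimal usage sketch with the `question-answering` pipeline (the question and context are made-up examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="iamannika/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```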
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Jackmin108/ppo-SnowballTarget
|
Jackmin108
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 857 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Jackmin108/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rim0/dreamboxmix-M
|
rim0
| null | 13 | 0 | null | 8 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en', 'ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Stable Diffusion', 'text-to-image']
| false | true | true | 1,595 |
# dreamboxmix-M
dreamboxmix-M is a model that merges [机甲v3.0AY](https://huggingface.co/zuzhe/Mecha-model), [DreamShaper](https://civitai.com/models/4384/dreamshaper) and [Fantasy Background](https://civitai.com/models/5536/fantasy-background) on top of dreamboxmix-P, and it is well suited to drawing mecha.
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(1).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(2).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(3).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(4).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(5).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(6).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(7).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(8).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(9).png>
<img src=https://huggingface.co/rim0/dreamboxmix-M/resolve/main/images/1%20(10).png>
|
jannikskytt/poca-SoccerTwos
|
jannikskytt
| null | 20 | 206 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 845 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: jannikskytt/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kkh4162/xlm-roberta-base-finetuned-panx-de
|
kkh4162
|
xlm-roberta
| 15 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
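For reference, a minimal inference sketch for this German NER model (the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kkh4162/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Angela Merkel besuchte im Sommer das Volkswagen-Werk in Wolfsburg."))
```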
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LHTAVI/wpapstyle2023
|
LHTAVI
| null | 28 | 30 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 1,365 |
### wpapstyle2023 Dreambooth model trained by LHTAVI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
|
cleanrl
| null | 10 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Enduro-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,271 |
# (CleanRL) **PPO** Agent Playing **Enduro-v5**
This is a trained model of a PPO agent playing Enduro-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Enduro-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Enduro-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
nickwong64/bert-base-uncased-finance-sentiment
|
nickwong64
|
bert
| 8 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['cyrilzhang/financial_phrasebank_split']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification', 'sentiment-analysis', 'finance-sentiment-detection', 'finance-sentiment']
| false | true | true | 1,624 |
## nickwong64/bert-base-uncased-finance-sentiment
BERT is a bidirectional Transformer encoder architecture trained with the MLM (masked language modeling) objective.
This model is [bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned on the [cyrilzhang/financial_phrasebank_split](https://huggingface.co/datasets/cyrilzhang/financial_phrasebank_split) dataset using the Hugging Face Trainer with the training parameters below.
```
learning rate 2e-5,
batch size 8,
num_train_epochs=6,
```
## Model Performance
| Epoch | Training Loss | Validation Loss | Accuracy | F1 |
| --- | --- | --- | --- | --- |
| 6 | 0.034100 | 0.954745 | 0.853608 | 0.854358 |
## How to Use the Model
```python
from transformers import pipeline
nlp = pipeline(task='text-classification',
model='nickwong64/bert-base-uncased-finance-sentiment')
p1 = "HK stocks open lower after Fed rate comments"
p2 = "US stocks end lower on earnings worries"
p3 = "Muted Fed, AI hopes send Wall Street higher"
print(nlp(p1))
print(nlp(p2))
print(nlp(p3))
"""
output:
[{'label': 'negative', 'score': 0.9991507530212402}]
[{'label': 'negative', 'score': 0.9997240900993347}]
[{'label': 'neutral', 'score': 0.9834381937980652}]
"""
```
## Dataset
[cyrilzhang/financial_phrasebank_split](https://huggingface.co/datasets/cyrilzhang/financial_phrasebank_split)
## Labels
```
{0: 'negative', 1: 'neutral', 2: 'positive'}
```
## Evaluation
```
{'test_loss': 0.9547446370124817,
'test_accuracy': 0.8536082474226804,
'test_f1': 0.8543579048224414,
'test_runtime': 4.9865,
'test_samples_per_second': 97.263,
'test_steps_per_second': 12.233}
```
|
threite/poca-SoccerTwos
|
threite
| null | 20 | 211 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: threite/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Niraya666/ppo-SnowballTarget
|
Niraya666
| null | 20 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 856 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Niraya666/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sunwooooong/klue-bert-finetuned-klue-ner
|
sunwooooong
|
bert
| 12 | 10 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-sa-4.0
| null |
['klue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,307 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-bert-finetuned-klue-ner
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- F1: 0.3930
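For reference, a minimal inference sketch with the `token-classification` pipeline (the Korean example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sunwooooong/klue-bert-finetuned-klue-ner",
    aggregation_strategy="simple",
)
print(ner("이순신은 조선 중기의 무신이다."))
```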
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5313 | 1.0 | 876 | 0.5225 | 0.2331 |
| 0.3884 | 2.0 | 1752 | 0.4197 | 0.3350 |
| 0.3136 | 3.0 | 2628 | 0.3741 | 0.3930 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Manseo/Colorful-v4.5
|
Manseo
| null | 24 | 25 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable diffusion', 'text-to-image', 'diffusers']
| false | true | true | 3,648 |
# **Colorful-v4.5-Plus**
**Colorful-v4.5-Plus** is a model merge between [Anything-v4.5](https://huggingface.co/andite/anything-v4.0), [AbyssOrangeMix3](https://huggingface.co/WarriorMama777/OrangeMixs) and [ProtogenInfinity](https://huggingface.co/darkstorm2150/Protogen_Infinity_Official_Release)
Colorful-v4.5 is named the way it is because it is similar to Anything-v4.5 while improving on the bland color palette it comes with (at least for me), producing much livelier images. It also improves some other things like environments, fingers, facial expressions and, to some extent, clothing (it also fixes the purple spots 🤫).
The "Plus" in "Colorful-v4.5-Plus" has been added because the model merge has been updated to AbyssOrangeMix3. As the name suggests, this version is better than the last one.
*Technically I could name it Anything-v5.0, but that would be rather cheesy.*
*The older version of the model is still in the repo if you're interested.*
*It is highly recommended to run this model locally on your computer, because running it from the web-UI API will produce lower-quality images than intended.*
# Examples:
# Colorful-v4.5-Plus:

# Colorful-v4.5:

# Anything-v4.5:

```
Prompt: masterpiece, best quality, girl, black hair, blue eyes, black t-shirt, black pants, smiling, standing up, solo, facing viewer, near blossomed tree
Other Details: Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 8, Seed: 774768794, Size: 512x512, Model hash: b5de490700, Model: Colorful-v4.5, Denoising strength: 0.6, Hires upscale: 2, Hires steps: 30, Hires upscaler: SwinIR_4x
Negative Prompt: The negative prompt is very long and specific, so it is listed in the model's repo. (The negative prompt comes from another model called Hentai Diffusion, so it will contain NSFW terms. A curated version of the negative prompt will also be in the repo for those who want SFW.)
```
*Note: I didn't use any VAE for the examples, but I did try the anything-v4.0 VAE and it barely made a difference.*
|
FredZhang7/anime-anything-promptgen-v2
|
FredZhang7
|
gpt2
| 12 | 70 |
transformers
| 3 |
text-generation
| true | false | false |
creativeml-openrail-m
|
['en']
|
['FredZhang7/anime-prompts-180K']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'anime', 'anything-v4', 'art', 'arxiv:2210.14140']
| false | true | true | 2,682 |
## Fast Anime PromptGen
This model was trained on a dataset of **80,000** safe anime prompts for 3 epochs. I fetched the prompts from the [Safebooru API endpoint](https://safebooru.donmai.us/posts/random.json), but only accepted unique prompts with **up_score ≥ 8** and without any [blacklisted tags](./blacklist.txt).
I didn't release the V1 model because it only generated gibberish prompts. After trying everything to correct that behavior, I eventually figured out that the gibberish prompts were caused not by the pipeline parameters, model structure, or training duration, but by the random usernames in the training data.
Here's the complete [prompt preprocessing algorithm](./preprocess.py).
## Text-to-image Examples
Prefix *1girl* | [Generated *1girl* prompts](./anime_girl_settings.txt) | Model *Anything V4*

Prefix *1boy* | [Generated *1boy* prompts](./anime_boy_settings.txt) | Model *Anything V4*

## Contrastive Search
```
pip install --upgrade transformers
```
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel, pipeline
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = GPT2LMHeadModel.from_pretrained('FredZhang7/anime-anything-promptgen-v2')
prompt = r'1girl, genshin'
# generate text using fine-tuned model
nlp = pipeline('text-generation', model=model, tokenizer=tokenizer)
# generate 10 samples using contrastive search
outs = nlp(prompt, max_length=76, num_return_sequences=10, do_sample=True, repetition_penalty=1.2, temperature=0.7, top_k=4, early_stopping=True)
print('\nInput:\n' + 100 * '-')
print('\033[96m' + prompt + '\033[0m')
print('\nOutput:\n' + 100 * '-')
for i in range(len(outs)):
# remove trailing commas and double spaces
    outs[i] = str(outs[i]['generated_text']).replace('  ', ' ').rstrip(',')
print('\033[92m' + '\n\n'.join(outs) + '\033[0m\n')
```
Output Example:

Please see [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2) for more info on the pipeline parameters.
## Awesome Tips
- If you feel like a generated anime character doesn't show emotions, try emoticons like `;o`, `:o`, `;p`, `:d`, `:p`, and `;d` in the prompt.
I also use `happy smirk`, `happy smile`, `laughing closed eyes`, etc. to make the characters more lively and expressive.
- Adding `absurdres`, instead of `highres` and `masterpiece`, to a prompt can drastically increase the sharpness and resolution of a generated image.
## Danbooru
[Link to the Danbooru version](https://huggingface.co/FredZhang7/danbooru-tag-generator)
|
jancijen/PPO-LunarLander-v2
|
jancijen
| null | 12 | 1 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
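Since the snippet above is left as a TODO, here is a hedged sketch of how such a checkpoint is usually loaded and evaluated; the `filename` is an assumption, so check the repo's file list for the actual zip name:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; replace with the actual .zip stored in the repo.
checkpoint = load_from_hub(repo_id="jancijen/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```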
|
atorre/poca-SoccerTwos-10M
|
atorre
| null | 21 | 207 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 844 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: atorre/poca-SoccerTwos-10M
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
espnet/pengcheng_librimix_asr_train_sot_asr_conformer_raw_en_char_sp
|
espnet
| null | 19 | 0 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['librimix']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 6,804 |
## ESPnet2 ASR model
### `espnet/pengcheng_librimix_asr_train_sot_asr_conformer_raw_en_char_sp`
This model was trained by Pengcheng Guo using librimix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout fe824770250485b77c68e8ca041922b8779b5c94
pip install -e .
cd egs2/librimix/sot_asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_librimix_asr_train_sot_asr_conformer_raw_en_char_sp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 6 12:15:26 CST 2023`
- python version: `3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: ``
- Commit date: ``
## asr_train_sot_conformer_raw_en_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|123853|78.3|19.1|2.6|3.0|24.7|99.3|
|decode_sot_asr_model_valid.acc.ave/test|3000|111243|79.6|17.7|2.6|3.0|23.3|98.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_sot_asr_model_valid.acc.ave/dev|3000|670222|90.1|6.3|3.6|3.5|13.4|99.3|
|decode_sot_asr_model_valid.acc.ave/test|3000|605408|90.7|5.7|3.6|3.3|12.6|98.7|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_sot_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_sot_asr_conformer_raw_en_char_sp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 38867
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 8000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0005
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- <sc>
- <space>
- E
- T
- A
- O
- N
- I
- H
- S
- R
- D
- L
- U
- M
- C
- W
- F
- G
- Y
- P
- B
- V
- K
- ''''
- X
- J
- Q
- Z
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_char_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.0
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: multi
preprocessor_conf:
speaker_change_symbol:
- <sc>
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ChrisPreston/diff-svc_minato_aqua_user_ver
|
ChrisPreston
| null | 4 | 0 | null | 2 | null | false | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,606 |
diff-svc一键包
原项目地址:https://github.com/openvpi/diff-svc
vst插件:https://github.com/zhaohui8969/VST_NetProcess-/tree/master
代码修改:@ChrisPreston
模型训练:@ChrisPreston
音源:Aqua Ch. 湊あくあ https://www.youtube.com/@MinatoAqua カバー株式会社
模型使用协议(重要):
1. 请勿用于商业目的
2. 请勿用于会影响主播本人的行为(比如冒充本人发表争议言论)
3. 请勿用于血腥、暴力、性相关、政治相关内容
4. 不允许二次分发模型
5. 非个人使用场合请注明模型作者@ChrisPreston以及diff-svc原项目
6. 允许用于个人娱乐场景下的游戏语音、直播活动,不得用于低创内容,用于直播前请与本人联系
联系方式:电邮:[email protected], b站:https://space.bilibili.com/18801308
免责声明:由于使用本模型造成的法律纠纷本人概不负责
diff-svc easy package
Original repository: https://github.com/openvpi/diff-svc
vst plugin: https://github.com/zhaohui8969/VST_NetProcess-/tree/master
Code modification: @ChrisPreston
Model Training: @ChrisPreston
Sound source: Aqua Ch. 湊あくあ https://www.youtube.com/@MinatoAqua Cover Corp.
Model usage agreement (important):
1. Do not use for commercial purposes
2. Do not use it for actions that could affect Minato Aqua herself (such as impersonating her to make controversial remarks)
3. Please do not use it for bloody, violent, sexual or political content
4. Redistribution of the model is not allowed
5. For non-personal use, please credit the model author @ChrisPreston and the original diff-svc project
6. Use for game voice and live-streaming in personal entertainment scenarios is allowed, but not for low-effort content; please contact me before using it in a live stream
Contact information: Email: [email protected], Bilibili: https://space.bilibili.com/18801308
Disclaimer: I am not responsible for any legal disputes caused by the use of this model
|
amrisaurus/pretrained-bert-uncased-90
|
amrisaurus
|
bert
| 8 | 18 |
transformers
| 0 | null | false | true | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 4,762 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-bert-uncased-90
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.5801
- Validation Loss: 13.6573
- Epoch: 89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.8978 | 9.5686 | 0 |
| 7.0524 | 9.6480 | 1 |
| 6.8578 | 10.5054 | 2 |
| 6.1054 | 10.4137 | 3 |
| 6.1268 | 10.4515 | 4 |
| 5.8614 | 10.4313 | 5 |
| 5.9680 | 10.7224 | 6 |
| 5.7868 | 11.2948 | 7 |
| 5.5465 | 10.7112 | 8 |
| 5.7115 | 10.8543 | 9 |
| 5.7908 | 11.6466 | 10 |
| 5.5664 | 11.5085 | 11 |
| 5.5865 | 11.4894 | 12 |
| 5.6421 | 11.2182 | 13 |
| 5.6626 | 11.4446 | 14 |
| 5.4587 | 11.2814 | 15 |
| 5.5299 | 11.6601 | 16 |
| 5.5408 | 12.0485 | 17 |
| 5.5092 | 11.9469 | 18 |
| 5.6606 | 12.4353 | 19 |
| 5.7420 | 12.7461 | 20 |
| 5.6078 | 12.1650 | 21 |
| 5.6612 | 12.2811 | 22 |
| 5.7503 | 12.4086 | 23 |
| 5.5609 | 12.6149 | 24 |
| 5.4806 | 12.4447 | 25 |
| 5.6898 | 12.8078 | 26 |
| 5.6168 | 12.4649 | 27 |
| 5.6292 | 12.5851 | 28 |
| 5.8481 | 12.5146 | 29 |
| 5.6491 | 12.6358 | 30 |
| 5.5755 | 12.6996 | 31 |
| 5.8218 | 12.7957 | 32 |
| 5.5641 | 13.1650 | 33 |
| 5.6044 | 12.5065 | 34 |
| 5.6762 | 12.3722 | 35 |
| 5.5931 | 12.7162 | 36 |
| 5.5727 | 12.6179 | 37 |
| 5.5761 | 12.9479 | 38 |
| 5.6360 | 13.0610 | 39 |
| 5.4503 | 13.0441 | 40 |
| 5.5689 | 13.1673 | 41 |
| 5.6327 | 13.2184 | 42 |
| 5.5567 | 12.8114 | 43 |
| 5.6322 | 13.1793 | 44 |
| 5.4677 | 13.1324 | 45 |
| 5.5865 | 13.2891 | 46 |
| 5.5352 | 13.5036 | 47 |
| 5.4867 | 13.5010 | 48 |
| 5.6926 | 13.1743 | 49 |
| 5.7545 | 13.1689 | 50 |
| 5.5422 | 13.3362 | 51 |
| 5.6094 | 13.3983 | 52 |
| 5.5993 | 13.3638 | 53 |
| 5.6803 | 13.3884 | 54 |
| 5.6102 | 12.7277 | 55 |
| 5.7204 | 13.1669 | 56 |
| 5.5271 | 13.5684 | 57 |
| 5.5265 | 13.5086 | 58 |
| 5.5679 | 13.8641 | 59 |
| 5.6738 | 13.1735 | 60 |
| 5.5423 | 13.3285 | 61 |
| 5.5020 | 13.6262 | 62 |
| 5.5065 | 13.4765 | 63 |
| 5.5919 | 13.5598 | 64 |
| 5.5684 | 13.1651 | 65 |
| 5.6378 | 13.4781 | 66 |
| 5.6661 | 13.0726 | 67 |
| 5.7996 | 13.6267 | 68 |
| 5.7453 | 13.4608 | 69 |
| 5.5720 | 13.3663 | 70 |
| 5.4926 | 13.6905 | 71 |
| 5.7386 | 13.5941 | 72 |
| 5.6016 | 13.3110 | 73 |
| 5.5905 | 14.0529 | 74 |
| 5.7030 | 13.7322 | 75 |
| 5.6801 | 13.4712 | 76 |
| 5.6202 | 13.7954 | 77 |
| 5.6230 | 13.8177 | 78 |
| 5.6288 | 13.4887 | 79 |
| 5.6207 | 13.5817 | 80 |
| 5.5904 | 13.7643 | 81 |
| 5.6685 | 14.1648 | 82 |
| 5.5031 | 14.1816 | 83 |
| 5.6752 | 13.9170 | 84 |
| 5.6140 | 13.6953 | 85 |
| 5.6929 | 13.4916 | 86 |
| 5.4762 | 13.8740 | 87 |
| 5.6537 | 13.9725 | 88 |
| 5.5801 | 13.6573 | 89 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
HealthTeam/mt5-small-finetuned-MultiHead-230209-test3
|
HealthTeam
|
mt5
| 13 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,328 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-MultiHead-230209-test3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 13.9701
- Bleu: 0.0131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 17.2117 | 0.0072 |
| No log | 2.0 | 126 | 14.5737 | 0.0130 |
| No log | 3.0 | 189 | 13.9701 | 0.0131 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ThatGuyVanquish/mt5-small-finetuned-rabbi-kook-nave
|
ThatGuyVanquish
|
mt5
| 11 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,296 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-rabbi-kook-nave
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 892 | nan |
| 0.0 | 2.0 | 1784 | nan |
| 0.0 | 3.0 | 2676 | nan |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.11.0
|
DeppOnHuggingFace/sd-arsstickers-128
|
DeppOnHuggingFace
| null | 6 | 2 |
diffusers
| 0 |
unconditional-image-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
| false | true | true | 427 |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋, no, I mean horrible stickers, because I changed the dataset.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('DeppOnHuggingFace/sd-arsstickers-128')
image = pipeline().images[0]
image
```
|
sryu1/poca-SoccerTwos
|
sryu1
| null | 23 | 201 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 839 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: sryu1/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Mustafa21/my_awesome_food_model
|
Mustafa21
|
vit
| 7 | 0 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['food101']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,449 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2335
- Accuracy: 0.985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0523 | 1.0 | 50 | 1.9226 | 0.935 |
| 1.3718 | 2.0 | 100 | 1.3422 | 0.995 |
| 1.2298 | 3.0 | 150 | 1.2335 | 0.985 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Salesforce/blip2-flan-t5-xxl
|
Salesforce
|
blip-2
| 15 | 64 |
transformers
| 3 |
image-to-text
| true | false | false |
mit
|
['en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-to-text', 'image-captioning', 'visual-question-answering']
| false | true | true | 2,030 |
# BLIP-2, Flan T5-xxl, pre-trained only
BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Intended uses & limitations
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your usecase:
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
|
XPeng2022/fotorx
|
XPeng2022
| null | 19 | 21 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 265 |
### fotorx Dreambooth model trained by XPeng2022
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
wangguan/ppo-LunarLander-v2
|
wangguan
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
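A minimal loading sketch follows; the checkpoint filename is an assumption (check the repo's file list), and it assumes a recent Stable-Baselines3 release that uses `gymnasium`.
```python
# Hedged sketch: load the checkpoint from the Hub and roll out a few steps.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="wangguan/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```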
|
jason1i/poca-SoccerTwos-towards-AGI
|
jason1i
| null | 51 | 196 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 853 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: jason1i/poca-SoccerTwos-towards-AGI
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tudnlp23g69/hw6
|
tudnlp23g69
|
bert
| 15 | 39 |
transformers
| 0 |
question-answering
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 952 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_6L_768D](https://huggingface.co/huawei-noah/TinyBERT_General_6L_768D) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Iggg0r/ppo-LunarLander-v2
|
Iggg0r
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
yizhangliu/poca-SoccerTwos-v5
|
yizhangliu
| null | 22 | 199 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 847 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: yizhangliu/poca-SoccerTwos-v5
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sftvrt/wav2vec2-large-xls-r-300m-swedisch-colab
|
sftvrt
|
wav2vec2
| 13 | 1 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,370 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-swedisch-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6439
- Wer: 0.9678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8953 | 1.83 | 400 | 1.6439 | 0.9678 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Shushant/my_awesome_qa_model
|
Shushant
|
bert
| 14 | 1 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,259 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.8866 |
| No log | 2.0 | 6 | 5.8367 |
| No log | 3.0 | 9 | 5.8153 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cfalholt/A2C-AntBulletEnv-v0
|
cfalholt
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
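A minimal loading-and-evaluation sketch follows; the checkpoint filename is an assumption, `AntBulletEnv-v0` needs `pybullet_envs` to be importable, and a gym-based Stable-Baselines3 version (<2.0) is assumed.
```python
# Hedged sketch: fetch the checkpoint and report its mean episodic return.
import gym
import pybullet_envs  # noqa: F401 - registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="cfalholt/A2C-AntBulletEnv-v0",
    filename="A2C-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)

eval_env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```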
|
sinjy1203/ko-sbert-navernews
|
sinjy1203
|
bert
| 13 | 5 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,649 |
# sinjy1203/ko-sbert-navernews
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sinjy1203/ko-sbert-navernews')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sinjy1203/ko-sbert-navernews')
model = AutoModel.from_pretrained('sinjy1203/ko-sbert-navernews')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sinjy1203/ko-sbert-navernews)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 593 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
nikogarro/PPO-LunarLander-v2
|
nikogarro
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
Akanksha27/distilbert-base-uncased-finetuned-cola
|
Akanksha27
|
distilbert
| 18 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,275 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4969
- Matthews Correlation: 0.4354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5287 | 1.0 | 535 | 0.4969 | 0.4354 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Rubywong123/PPO-LunarLander-v2
|
Rubywong123
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
threite/xlm-roberta-base-finetuned-partypredictor
|
threite
|
xlm-roberta
| 9 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,715 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-partypredictor
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6783
- Accuracy: 0.2495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 1.7766 | 0.76 | 5000 | 0.1331 | 1.8909 |
| 1.7572 | 1.52 | 10000 | 0.1331 | 1.7809 |
| 1.7543 | 2.28 | 15000 | 0.1031 | 1.8126 |
| 1.7273 | 3.05 | 20000 | 0.1331 | 1.8048 |
| 1.7435 | 3.81 | 25000 | 0.2675 | 1.7892 |
| 1.7606 | 4.99 | 30000 | 0.3121 | 1.7848 |
| 1.7546 | 5.82 | 35000 | 0.3121 | 1.7737 |
| 1.7417 | 6.65 | 40000 | 0.3121 | 1.7699 |
| 1.7007 | 7.48 | 45000 | 0.1529 | 1.7088 |
| 1.7542 | 7.87 | 50000 | 0.1331 | 1.8058 |
| 1.75 | 8.66 | 55000 | 0.1331 | 1.8347 |
| 1.7505 | 10.05 | 60000 | 0.1231 | 1.8079 |
| 1.7545 | 10.88 | 65000 | 0.3121 | 1.7756 |
| 1.7322 | 11.72 | 70000 | 0.2707 | 1.7371 |
| 1.7082 | 12.56 | 75000 | 0.2419 | 1.6886 |
| 1.7035 | 13.4 | 80000 | 0.2638 | 1.6844 |
| 1.6889 | 14.23 | 85000 | 0.2525 | 1.6728 |
| 1.6779 | 15.07 | 90000 | 0.2490 | 1.6737 |
| 1.6821 | 15.91 | 95000 | 0.2495 | 1.6783 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ireneisdoomed/clinical_trial_stop_reasons_custom
|
ireneisdoomed
|
bert
| 13 | 8 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,199 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical_trial_stop_reasons_custom
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1448
- Accuracy Thresh: 0.9570
- F1 Micro: 0.5300
- F1 Macro: 0.1254
- Confusion Matrix: [[5940 15] [ 270 150]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Thresh | F1 Micro | F1 Macro | Confusion Matrix |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:--------:|:--------:|:--------------------------:|
| No log | 1.0 | 106 | 0.2812 | 0.8328 | 0.0 | 0.0 | [[5955 0] [ 420 0]] |
| No log | 2.0 | 212 | 0.2189 | 0.9382 | 0.0 | 0.0 | [[5955 0] [ 420 0]] |
| No log | 3.0 | 318 | 0.1840 | 0.9489 | 0.0 | 0.0 | [[5955 0] [ 420 0]] |
| No log | 4.0 | 424 | 0.1638 | 0.9485 | 0.4940 | 0.0989 | [[5943 12] [ 288 132]] |
| 0.239 | 5.0 | 530 | 0.1526 | 0.9533 | 0.5060 | 0.1018 | [[5943 12] [ 277 143]] |
| 0.239 | 6.0 | 636 | 0.1467 | 0.9564 | 0.5077 | 0.1020 | [[5938 17] [ 275 145]] |
| 0.239 | 7.0 | 742 | 0.1448 | 0.9570 | 0.5300 | 0.1254 | [[5940 15] [ 270 150]] |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
|
DL82/denlip82
|
DL82
| null | 47 | 1 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 3,178 |
### denlip82 Dreambooth model trained by DL82 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
denlip82 (use that on your prompt)

|
concedo/OPT-2.7B-Nerybus-Mix
|
concedo
|
opt
| 11 | 16 |
transformers
| 1 |
text-generation
| true | false | false |
other
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,144 |
# OPT-2.7B-Nerybus-Mix
This is an experimental model containing a ***parameter-wise 50/50 blend (weighted average)*** of the weights of *NerysV2-2.7B* and *ErebusV1-2.7B*
Preliminary testing produces pretty coherent outputs, it appears to retain the NSFWness of Erebus but with a Nerys-esque twist in terms of prose.
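For illustration, a parameter-wise 50/50 blend of two checkpoints of the same architecture can be sketched as below; this is a hypothetical reconstruction, not the author's actual merging script.
```python
# Hedged sketch of a parameter-wise 50/50 weighted average of two OPT-2.7B checkpoints.
import torch
from transformers import AutoModelForCausalLM

nerys = AutoModelForCausalLM.from_pretrained("KoboldAI/OPT-2.7B-Nerys-v2", torch_dtype=torch.float32)
erebus = AutoModelForCausalLM.from_pretrained("KoboldAI/OPT-2.7B-Erebus", torch_dtype=torch.float32)

merged_state = nerys.state_dict()
for name, tensor in erebus.state_dict().items():
    merged_state[name] = 0.5 * merged_state[name] + 0.5 * tensor  # blend each parameter tensor

nerys.load_state_dict(merged_state)
nerys.save_pretrained("OPT-2.7B-Nerybus-Mix")
```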
# License
The two models used for this blend, *NerysV2-2.7B* and *ErebusV1-2.7B* are made by **Mr. Seeker**.
- https://huggingface.co/KoboldAI/OPT-2.7B-Erebus
- https://huggingface.co/KoboldAI/OPT-2.7B-Nerys-v2
The base OPT-2.7B model is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
# Evaluation Results
As the original datasets used for the source models are not publicly available, I use my own datasets for this evaluation, which may not provide an accurate comparison.
Eval parameters: 32000 characters extracted from the middle of the corpus, tested in blocks of 1024 tokens each, same dataset used for each test batch.
```
Literotica Dataset Eval (Randomly selected stories)
{'eval_loss': 2.571258306503296, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.5491442680358887, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.6158597469329834, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.614469051361084, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.4960227012634277, 'name': '(Unreleased 2.7B ModronAI Model)'}
ASSTR Dataset Eval (Randomly selected stories)
{'eval_loss': 2.664412498474121, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.6451029777526855, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.7259647846221924, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.6675195693969727, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.962111473083496, 'name': '(Unreleased 2.7B ModronAI Model)'}
Sexstories Dataset Eval (Random highly rated stories)
{'eval_loss': 2.2352423667907715, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.194378137588501, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.307469129562378, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.293961763381958, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.0103421211242676, 'name': '(Unreleased 2.7B ModronAI Model)'}
Harry Potter Dataset Eval (Canon books)
{'eval_loss': 2.473742961883545, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.480600357055664, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.506237506866455, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.5074169635772705, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.273703098297119, 'name': '(Unreleased 2.7B ModronAI Model)'}
Star Wars Dataset Eval (Rogue One Novel)
{'eval_loss': 2.5031676292419434, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.5239150524139404, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.526801586151123, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.473283529281616, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.955465793609619, 'name': '(Unreleased 2.7B ModronAI Model)'}
```
It is recommended to use this model with the KoboldAI software. All feedback and comments can be directed to Concedo on the KoboldAI Discord.
|
plpkpjph/bert_german_test_2-finetuned-ner
|
plpkpjph
|
bert
| 10 | 10 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_german_test_2-finetuned-ner
This model is a fine-tuned version of [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ZTamas/hubert-qa-milqa-impossible
|
ZTamas
|
bert
| 13 | 9 |
transformers
| 0 |
question-answering
| true | false | false | null |
['hu']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering', 'bert']
| false | true | true | 637 |
This model is a fine-tuned version of [mcsabai/huBert-fine-tuned-hungarian-squadv2](https://huggingface.co/mcsabai/huBert-fine-tuned-hungarian-squadv2) on the milqa dataset.
How to use:
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model = "ZTamas/hubert-qa-milqa-impossible",
tokenizer = "ZTamas/hubert-qa-milqa-impossible",
device = 0, #GPU selection, -1 on CPU
handle_impossible_answer = True,
max_answer_len = 50
)
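# context and question below are user-supplied Hungarian strings:
# the passage to search and the question to answer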
predictions = qa_pipeline({
'context': context,
'question': question
})
print(predictions)
```
|
asuzuki/Reinforce-CartPole-v1
|
asuzuki
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Duskfallcrew/duskfall-s-digital-fantasy
|
Duskfallcrew
| null | 21 | 19 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,011 |
### Duskfall's Digital Fantasy Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
All samples and info are here:
https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
digidsk1 (use that on your prompt)
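A minimal `diffusers` inference sketch (the prompt wording is illustrative; the repo id comes from this model page):
```python
# Hedged sketch: run the Dreambooth concept with a standard Stable Diffusion pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/duskfall-s-digital-fantasy", torch_dtype=torch.float16
).to("cuda")

image = pipe("digidsk1, a glowing digital fantasy landscape").images[0]  # prompt is illustrative
image.save("digidsk1_sample.png")
```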
|
Galiess/a2c-AntBulletEnv-v0
|
Galiess
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible
|
ZTamas
|
xlm-roberta
| 9 | 2 |
transformers
| 0 |
question-answering
| true | false | false | null |
['hu']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 708 |
This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.
Packages to install for large roberta model:
```py
sentencepiece==0.1.97
protobuf==3.20.0
```
How to use:
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model = "ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
tokenizer = "ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
device = 0, #GPU selection, -1 on CPU
handle_impossible_answer = True,
max_answer_len = 50 #This can be modified
)
predictions = qa_pipeline({
'context': context,
'question': question
})
print(predictions)
```
|
DaniilSirota/ppo-Pyramids
|
DaniilSirota
| null | 16 | 0 |
ml-agents
| 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 835 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: DaniilSirota/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RocioUrquijo/clasificador-languagedetection
|
RocioUrquijo
|
xlm-roberta
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 958 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-languagedetection
This model is a fine-tuned version of [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ZTamas/hubert-qa-milqa-impossible-long-answer
|
ZTamas
|
bert
| 9 | 7 |
transformers
| 0 |
question-answering
| true | false | false | null |
['hu']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 771 |
This model is a fine-tuned version of mcsabai/huBert-fine-tuned-hungarian-squadv2 on the milqa dataset.
How to use:
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model = "ZTamas/hubert-qa-milqa-impossible-long-answer",
tokenizer = "ZTamas/hubert-qa-milqa-impossible-long-answer",
device = 0, #GPU selection, -1 on CPU
handle_impossible_answer = True,
max_answer_len = 1000 #This can be modified; I set a large value so the model's answer can be as long as it needs
)
predictions = qa_pipeline({
'context': context,
'question': question
})
print(predictions)
```
|
NTCAL/SavedAfterTrainingTest39
|
NTCAL
|
bert
| 10 | 5 |
transformers
| 1 |
text-classification
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,051 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SavedAfterTrainingTest39
This model is a fine-tuned version of [ltgoslo/norbert2](https://huggingface.co/ltgoslo/norbert2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nlpso/m1_ind_layers_ref_cmbert_io_level_1
|
nlpso
|
camembert
| 13 | 4 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,385 |
# m1_ind_layers_ref_cmbert_io_level_1
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professionnal reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : level 1
## Load model from the Hugging Face
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used with [m1_ind_layers_ref_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_io_level_2) to recognise level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_io_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_io_level_1")
```
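A minimal inference sketch with the `token-classification` pipeline (the directory entry below is a made-up example, and the aggregation strategy is only a suggestion):
```python
# Hedged sketch: tag level-1 entities of a Paris trade-directory entry.
from transformers import pipeline

ner_level_1 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ref_cmbert_io_level_1",
    aggregation_strategy="simple",  # illustrative choice
)

entry = "Dupont, tailleur, rue de la Paix, 12."  # hypothetical directory entry
print(ner_level_1(entry))
# Run the level-2 companion model on the same text to obtain the nested layer.
```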
|
nlpso/m1_ind_layers_ref_cmbert_io_level_2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,385 |
# m1_ind_layers_ref_cmbert_io_level_2
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professionnal reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameter
* Pretrained-model : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset : ground-truth
* Tagging format : IO
* Recognised entities : level 2
## Load model from the Hugging Face
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used with [m1_ind_layers_ref_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_io_level_1) to recognise level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_io_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_io_level_2")
```
|
nlpso/m1_ind_layers_ref_cmbert_iob2_level_1
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,397 |
# m1_ind_layers_ref_cmbert_iob2_level_1
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset: ground-truth
* Tagging format: IOB2
* Recognised entities: level 1
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used together with [m1_ind_layers_ref_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_iob2_level_2) to recognise level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_iob2_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_iob2_level_1")
```
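Nested predictions come from pairing this level-1 model with the level-2 model linked in the warning above. The sketch below shows one possible way to do that with the `pipeline` API; it is an illustrative addition to the card, the example entry is invented, and the aggregation setting is an assumption.
```python
# Minimal sketch (not from the original card): run the IOB2 level-1 and
# level-2 models on the same entry and keep both sets of predictions.
from transformers import pipeline

ner_level_1 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ref_cmbert_iob2_level_1",
    aggregation_strategy="simple",  # assumption: not specified in the card
)
ner_level_2 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ref_cmbert_iob2_level_2",
    aggregation_strategy="simple",
)

entry = "Martin, horloger, boulevard Saint-Denis, 8."  # hypothetical example entry
print({"level_1": ner_level_1(entry), "level_2": ner_level_2(entry)})
```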
|
pfunk/Pong-v4-DQPN_p100_e0.10-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,999 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p100_e0.10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p100_e0.10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p100_e0.10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_e0.10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p100_e0.10 --start-policy-f 100000 --end-policy-f 1000 --evaluation-fraction 0.10 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.1,
'exp_name': 'DQPN_p100_e0.10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 100000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
nlpso/m1_ind_layers_ref_cmbert_iob2_level_2
|
nlpso
|
camembert
| 13 | 3 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,397 |
# m1_ind_layers_ref_cmbert_iob2_level_2
## Introduction
This model was fine-tuned from [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Dataset: ground-truth
* Tagging format: IOB2
* Recognised entities: level 2
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used together with [m1_ind_layers_ref_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_cmbert_iob2_level_1) to recognise level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_iob2_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_cmbert_iob2_level_2")
```
|
nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1
|
nlpso
|
camembert
| 13 | 1 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,410 |
# m1_ind_layers_ref_ptrn_cmbert_io_level_1
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: ground-truth
* Tagging format: IO
* Recognised entities: level 1
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used together with [m1_ind_layers_ref_ptrn_cmbert_io_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2) to recognise level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1")
```
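To obtain both nesting levels, this level-1 model is meant to be run alongside the level-2 model mentioned in the warning. A minimal sketch of that joint use is given below; it is not part of the original card, the directory entry is made up, and the aggregation strategy is assumed.
```python
# Minimal sketch (assumptions noted above): apply both nesting levels
# of the pretrained-CamemBERT variant to the same entry.
from transformers import pipeline

ner_level_1 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1",
    aggregation_strategy="simple",  # assumption: not specified in the card
)
ner_level_2 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2",
    aggregation_strategy="simple",
)

entry = "Bernard, libraire, rue de Rivoli, 24."  # hypothetical example entry
print({"level_1": ner_level_1(entry), "level_2": ner_level_2(entry)})
```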
|
nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,410 |
# m1_ind_layers_ref_ptrn_cmbert_io_level_2
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: ground-truth
* Tagging format: IO
* Recognised entities: level 2
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used together with [m1_ind_layers_ref_ptrn_cmbert_io_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_1) to recognise level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_io_level_2")
```
|
nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1
|
nlpso
|
camembert
| 13 | 0 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,422 |
# m1_ind_layers_ref_ptrn_cmbert_iob2_level_1
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: ground-truth
* Tagging format: IOB2
* Recognised entities: level 1
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-1 entities of the dataset. It has to be used together with [m1_ind_layers_ref_ptrn_cmbert_iob2_level_2](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2) to recognise level-2 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1")
```
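As with the other model pairs, nested entities are recovered by combining this level-1 model with its level-2 counterpart linked above. The snippet below is an illustrative sketch only: the example entry is invented and the aggregation strategy is an assumption, not a documented choice.
```python
# Minimal sketch (not from the original card): joint level-1 / level-2
# inference with the pretrained-CamemBERT IOB2 models.
from transformers import pipeline

ner_level_1 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1",
    aggregation_strategy="simple",  # assumption: not specified in the card
)
ner_level_2 = pipeline(
    "token-classification",
    model="nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2",
    aggregation_strategy="simple",
)

entry = "Lefèvre, tailleur, rue du Temple, 31."  # hypothetical example entry
print({"level_1": ner_level_1(entry), "level_2": ner_level_2(entry)})
```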
|
nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2
|
nlpso
|
camembert
| 13 | 2 |
transformers
| 0 |
token-classification
| true | false | false | null |
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,422 |
# m1_ind_layers_ref_ptrn_cmbert_iob2_level_2
## Introduction
This model was fine-tuned from [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** on a nested NER dataset of Paris trade directories.
## Dataset
Abbreviation|Entity group (level)|Description
-|-|-
O |1 & 2|Outside of a named entity
PER |1|Person or company name
ACT |1 & 2|Person or company professional activity
TITREH |2|Military or civil distinction
DESC |1|Entry full description
TITREP |2|Professional reward
SPAT |1|Address
LOC |2|Street name
CARDINAL |2|Street number
FT |2|Geographical feature
## Experiment parameters
* Pretrained model: [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained)
* Dataset: ground-truth
* Tagging format: IOB2
* Recognised entities: level 2
## Load model from the Hugging Face Hub
**Warning 1**: this model only recognises level-2 entities of the dataset. It has to be used together with [m1_ind_layers_ref_ptrn_cmbert_iob2_level_1](https://huggingface.co/nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_1) to recognise level-1 nested entities.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m1_ind_layers_ref_ptrn_cmbert_iob2_level_2")
```
|