Dataset columns (ranges and cardinalities as reported by the dataset viewer):

| column | type | range / cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-27 18:27:08 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-27 18:22:57 |
| card | string | length 11 to 1.01M |
modelId: jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7 | author: jordyvl | last_modified: 2023-07-11T01:00:28Z | downloads: 163 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tensorboard, vit, image-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: image-classification | createdAt: 2023-07-10T23:46:12Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2544
- Accuracy: 0.6375
- Brier Loss: 0.4805
- Nll: 3.0517
- F1 Micro: 0.6375
- F1 Macro: 0.6394
- Ece: 0.1654
- Aurc: 0.1376
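Among the metrics above, ECE (expected calibration error) is the least standard; a minimal sketch of the usual equal-width-binning computation (the card does not state the binning actually used, so `n_bins=10` is an assumption):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece
```

A lower ECE means the model's confidence tracks its actual accuracy more closely.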
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
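The linear schedule with 10% warmup above can be sketched as a plain function. The total step count (2500) is read off the results table below; treat this as an illustration of the schedule shape, not the trainer's exact implementation:

```python
def linear_lr_with_warmup(step, total_steps=2500, warmup_ratio=0.1, base_lr=1e-4):
    """Linear warmup to base_lr over the first warmup_ratio of training,
    then linear decay back to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)
```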
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 3.2176 | 0.1275 | 0.9297 | 15.5568 | 0.1275 | 0.1255 | 0.1544 | 0.8595 |
| No log | 2.0 | 50 | 2.4392 | 0.405 | 0.7503 | 9.6083 | 0.405 | 0.3723 | 0.1816 | 0.3640 |
| No log | 3.0 | 75 | 1.9211 | 0.5025 | 0.6287 | 5.6023 | 0.5025 | 0.4930 | 0.1991 | 0.2451 |
| No log | 4.0 | 100 | 1.7474 | 0.5375 | 0.5956 | 4.5712 | 0.5375 | 0.5387 | 0.1677 | 0.2244 |
| No log | 5.0 | 125 | 1.7107 | 0.535 | 0.6051 | 4.3431 | 0.535 | 0.5180 | 0.1796 | 0.2269 |
| No log | 6.0 | 150 | 1.7144 | 0.545 | 0.5988 | 3.6699 | 0.545 | 0.5455 | 0.1918 | 0.2253 |
| No log | 7.0 | 175 | 1.9096 | 0.5625 | 0.6262 | 4.6856 | 0.5625 | 0.5459 | 0.1966 | 0.2362 |
| No log | 8.0 | 200 | 1.6325 | 0.575 | 0.5815 | 3.9279 | 0.575 | 0.5705 | 0.1893 | 0.2026 |
| No log | 9.0 | 225 | 1.8268 | 0.56 | 0.6088 | 4.5140 | 0.56 | 0.5482 | 0.1976 | 0.2213 |
| No log | 10.0 | 250 | 1.9253 | 0.5575 | 0.6493 | 4.2860 | 0.5575 | 0.5427 | 0.2286 | 0.2445 |
| No log | 11.0 | 275 | 1.6941 | 0.5725 | 0.5940 | 3.9317 | 0.5725 | 0.5827 | 0.2019 | 0.2232 |
| No log | 12.0 | 300 | 1.8197 | 0.5575 | 0.6138 | 4.7928 | 0.5575 | 0.5476 | 0.2079 | 0.2240 |
| No log | 13.0 | 325 | 1.8958 | 0.54 | 0.6508 | 4.2978 | 0.54 | 0.5338 | 0.2379 | 0.2357 |
| No log | 14.0 | 350 | 1.8939 | 0.535 | 0.6522 | 4.5557 | 0.535 | 0.5143 | 0.2324 | 0.2350 |
| No log | 15.0 | 375 | 1.8018 | 0.585 | 0.6042 | 4.4728 | 0.585 | 0.5829 | 0.2205 | 0.2182 |
| No log | 16.0 | 400 | 1.7645 | 0.5975 | 0.5978 | 3.9939 | 0.5975 | 0.5992 | 0.2130 | 0.1927 |
| No log | 17.0 | 425 | 1.6392 | 0.5925 | 0.5842 | 3.6783 | 0.5925 | 0.6039 | 0.1986 | 0.2017 |
| No log | 18.0 | 450 | 1.6124 | 0.5875 | 0.5761 | 4.0535 | 0.5875 | 0.5721 | 0.2060 | 0.1792 |
| No log | 19.0 | 475 | 1.7517 | 0.585 | 0.6102 | 3.9076 | 0.585 | 0.5786 | 0.2082 | 0.2071 |
| 0.6436 | 20.0 | 500 | 1.7467 | 0.5575 | 0.6166 | 3.5052 | 0.5575 | 0.5476 | 0.2252 | 0.2247 |
| 0.6436 | 21.0 | 525 | 1.6719 | 0.5825 | 0.5745 | 4.1235 | 0.5825 | 0.5877 | 0.1831 | 0.1723 |
| 0.6436 | 22.0 | 550 | 1.4222 | 0.605 | 0.5237 | 3.2051 | 0.605 | 0.6083 | 0.1813 | 0.1559 |
| 0.6436 | 23.0 | 575 | 1.6436 | 0.595 | 0.5701 | 4.3949 | 0.595 | 0.5834 | 0.1921 | 0.1901 |
| 0.6436 | 24.0 | 600 | 1.4244 | 0.6075 | 0.5197 | 3.3207 | 0.6075 | 0.6100 | 0.1548 | 0.1616 |
| 0.6436 | 25.0 | 625 | 1.4567 | 0.6075 | 0.5356 | 3.5288 | 0.6075 | 0.6107 | 0.1768 | 0.1652 |
| 0.6436 | 26.0 | 650 | 1.5889 | 0.595 | 0.5587 | 4.1521 | 0.595 | 0.5907 | 0.1943 | 0.1768 |
| 0.6436 | 27.0 | 675 | 1.4828 | 0.5725 | 0.5532 | 3.4259 | 0.5725 | 0.5720 | 0.2125 | 0.1803 |
| 0.6436 | 28.0 | 700 | 1.4671 | 0.5975 | 0.5509 | 3.2612 | 0.5975 | 0.6006 | 0.1983 | 0.1797 |
| 0.6436 | 29.0 | 725 | 1.4049 | 0.6225 | 0.5273 | 3.3136 | 0.6225 | 0.6237 | 0.1995 | 0.1600 |
| 0.6436 | 30.0 | 750 | 1.4039 | 0.6175 | 0.5208 | 3.2588 | 0.6175 | 0.6063 | 0.1770 | 0.1534 |
| 0.6436 | 31.0 | 775 | 1.4333 | 0.6 | 0.5378 | 3.6417 | 0.6 | 0.5995 | 0.1899 | 0.1632 |
| 0.6436 | 32.0 | 800 | 1.3311 | 0.64 | 0.5032 | 3.0056 | 0.64 | 0.6394 | 0.1699 | 0.1476 |
| 0.6436 | 33.0 | 825 | 1.3361 | 0.61 | 0.5079 | 3.2304 | 0.61 | 0.6123 | 0.1536 | 0.1517 |
| 0.6436 | 34.0 | 850 | 1.2984 | 0.64 | 0.4982 | 3.1446 | 0.64 | 0.6444 | 0.1636 | 0.1424 |
| 0.6436 | 35.0 | 875 | 1.3153 | 0.6275 | 0.4995 | 3.0722 | 0.6275 | 0.6288 | 0.1634 | 0.1486 |
| 0.6436 | 36.0 | 900 | 1.2773 | 0.6375 | 0.4880 | 2.7136 | 0.6375 | 0.6422 | 0.1606 | 0.1411 |
| 0.6436 | 37.0 | 925 | 1.2881 | 0.64 | 0.4946 | 3.0452 | 0.64 | 0.6437 | 0.1732 | 0.1440 |
| 0.6436 | 38.0 | 950 | 1.2609 | 0.64 | 0.4824 | 2.7407 | 0.64 | 0.6430 | 0.1485 | 0.1424 |
| 0.6436 | 39.0 | 975 | 1.2685 | 0.645 | 0.4869 | 2.7203 | 0.645 | 0.6484 | 0.1680 | 0.1398 |
| 0.0861 | 40.0 | 1000 | 1.2546 | 0.635 | 0.4808 | 2.7042 | 0.635 | 0.6356 | 0.1669 | 0.1416 |
| 0.0861 | 41.0 | 1025 | 1.2599 | 0.6425 | 0.4858 | 2.6880 | 0.6425 | 0.6457 | 0.1539 | 0.1387 |
| 0.0861 | 42.0 | 1050 | 1.2413 | 0.635 | 0.4783 | 2.8343 | 0.635 | 0.6361 | 0.1679 | 0.1369 |
| 0.0861 | 43.0 | 1075 | 1.2670 | 0.6325 | 0.4901 | 2.8366 | 0.6325 | 0.6337 | 0.1501 | 0.1399 |
| 0.0861 | 44.0 | 1100 | 1.2793 | 0.63 | 0.4919 | 3.1711 | 0.63 | 0.6309 | 0.1672 | 0.1465 |
| 0.0861 | 45.0 | 1125 | 1.2555 | 0.635 | 0.4844 | 2.9284 | 0.635 | 0.6379 | 0.1791 | 0.1401 |
| 0.0861 | 46.0 | 1150 | 1.2491 | 0.635 | 0.4806 | 2.8475 | 0.635 | 0.6358 | 0.1611 | 0.1392 |
| 0.0861 | 47.0 | 1175 | 1.2533 | 0.6325 | 0.4837 | 2.8229 | 0.6325 | 0.6352 | 0.1623 | 0.1378 |
| 0.0861 | 48.0 | 1200 | 1.2602 | 0.635 | 0.4857 | 2.9963 | 0.635 | 0.6368 | 0.1535 | 0.1426 |
| 0.0861 | 49.0 | 1225 | 1.2598 | 0.635 | 0.4848 | 2.8569 | 0.635 | 0.6370 | 0.1718 | 0.1389 |
| 0.0861 | 50.0 | 1250 | 1.2577 | 0.6225 | 0.4839 | 2.8645 | 0.6225 | 0.6237 | 0.1678 | 0.1420 |
| 0.0861 | 51.0 | 1275 | 1.2547 | 0.63 | 0.4817 | 2.8344 | 0.63 | 0.6314 | 0.1721 | 0.1399 |
| 0.0861 | 52.0 | 1300 | 1.2525 | 0.64 | 0.4819 | 2.7720 | 0.64 | 0.6411 | 0.1567 | 0.1378 |
| 0.0861 | 53.0 | 1325 | 1.2627 | 0.6325 | 0.4854 | 2.9202 | 0.6325 | 0.6337 | 0.1688 | 0.1406 |
| 0.0861 | 54.0 | 1350 | 1.2565 | 0.63 | 0.4836 | 2.8392 | 0.63 | 0.6320 | 0.1612 | 0.1404 |
| 0.0861 | 55.0 | 1375 | 1.2514 | 0.6325 | 0.4813 | 2.9887 | 0.6325 | 0.6343 | 0.1652 | 0.1386 |
| 0.0861 | 56.0 | 1400 | 1.2541 | 0.6275 | 0.4822 | 2.9067 | 0.6275 | 0.6296 | 0.1649 | 0.1401 |
| 0.0861 | 57.0 | 1425 | 1.2529 | 0.64 | 0.4810 | 2.9166 | 0.64 | 0.6432 | 0.1765 | 0.1372 |
| 0.0861 | 58.0 | 1450 | 1.2464 | 0.6275 | 0.4799 | 2.9713 | 0.6275 | 0.6291 | 0.1653 | 0.1401 |
| 0.0861 | 59.0 | 1475 | 1.2576 | 0.63 | 0.4826 | 2.9124 | 0.63 | 0.6323 | 0.1557 | 0.1397 |
| 0.0496 | 60.0 | 1500 | 1.2494 | 0.63 | 0.4804 | 2.8355 | 0.63 | 0.6317 | 0.1672 | 0.1390 |
| 0.0496 | 61.0 | 1525 | 1.2496 | 0.6325 | 0.4803 | 2.9091 | 0.6325 | 0.6352 | 0.1510 | 0.1383 |
| 0.0496 | 62.0 | 1550 | 1.2592 | 0.6375 | 0.4838 | 2.8980 | 0.6375 | 0.6384 | 0.1758 | 0.1398 |
| 0.0496 | 63.0 | 1575 | 1.2504 | 0.63 | 0.4806 | 2.9843 | 0.63 | 0.6316 | 0.1691 | 0.1391 |
| 0.0496 | 64.0 | 1600 | 1.2528 | 0.6325 | 0.4810 | 2.9045 | 0.6325 | 0.6349 | 0.1737 | 0.1388 |
| 0.0496 | 65.0 | 1625 | 1.2589 | 0.6425 | 0.4833 | 2.9817 | 0.6425 | 0.6447 | 0.1719 | 0.1380 |
| 0.0496 | 66.0 | 1650 | 1.2531 | 0.63 | 0.4811 | 2.9027 | 0.63 | 0.6321 | 0.1751 | 0.1391 |
| 0.0496 | 67.0 | 1675 | 1.2520 | 0.635 | 0.4808 | 2.9794 | 0.635 | 0.6379 | 0.1715 | 0.1378 |
| 0.0496 | 68.0 | 1700 | 1.2543 | 0.64 | 0.4815 | 2.9771 | 0.64 | 0.6420 | 0.1562 | 0.1380 |
| 0.0496 | 69.0 | 1725 | 1.2538 | 0.6325 | 0.4808 | 2.9080 | 0.6325 | 0.6345 | 0.1681 | 0.1385 |
| 0.0496 | 70.0 | 1750 | 1.2543 | 0.6325 | 0.4813 | 2.9102 | 0.6325 | 0.6347 | 0.1725 | 0.1390 |
| 0.0496 | 71.0 | 1775 | 1.2534 | 0.6325 | 0.4809 | 2.9778 | 0.6325 | 0.6353 | 0.1495 | 0.1385 |
| 0.0496 | 72.0 | 1800 | 1.2539 | 0.6375 | 0.4809 | 2.9024 | 0.6375 | 0.6394 | 0.1588 | 0.1381 |
| 0.0496 | 73.0 | 1825 | 1.2531 | 0.635 | 0.4806 | 2.9812 | 0.635 | 0.6378 | 0.1552 | 0.1380 |
| 0.0496 | 74.0 | 1850 | 1.2531 | 0.635 | 0.4805 | 2.9783 | 0.635 | 0.6377 | 0.1700 | 0.1380 |
| 0.0496 | 75.0 | 1875 | 1.2533 | 0.6375 | 0.4809 | 2.9772 | 0.6375 | 0.6400 | 0.1645 | 0.1372 |
| 0.0496 | 76.0 | 1900 | 1.2539 | 0.6375 | 0.4808 | 2.9777 | 0.6375 | 0.6393 | 0.1675 | 0.1376 |
| 0.0496 | 77.0 | 1925 | 1.2537 | 0.635 | 0.4808 | 2.9832 | 0.635 | 0.6375 | 0.1648 | 0.1381 |
| 0.0496 | 78.0 | 1950 | 1.2539 | 0.6375 | 0.4807 | 2.9769 | 0.6375 | 0.6394 | 0.1636 | 0.1374 |
| 0.0496 | 79.0 | 1975 | 1.2534 | 0.6375 | 0.4805 | 2.9796 | 0.6375 | 0.6399 | 0.1599 | 0.1375 |
| 0.048 | 80.0 | 2000 | 1.2537 | 0.6375 | 0.4806 | 3.0539 | 0.6375 | 0.6399 | 0.1657 | 0.1375 |
| 0.048 | 81.0 | 2025 | 1.2535 | 0.6375 | 0.4805 | 3.0534 | 0.6375 | 0.6399 | 0.1728 | 0.1375 |
| 0.048 | 82.0 | 2050 | 1.2539 | 0.6375 | 0.4806 | 2.9831 | 0.6375 | 0.6393 | 0.1674 | 0.1375 |
| 0.048 | 83.0 | 2075 | 1.2542 | 0.6375 | 0.4807 | 3.0538 | 0.6375 | 0.6399 | 0.1674 | 0.1375 |
| 0.048 | 84.0 | 2100 | 1.2539 | 0.6375 | 0.4805 | 3.0531 | 0.6375 | 0.6394 | 0.1564 | 0.1375 |
| 0.048 | 85.0 | 2125 | 1.2542 | 0.6375 | 0.4806 | 3.0531 | 0.6375 | 0.6393 | 0.1676 | 0.1376 |
| 0.048 | 86.0 | 2150 | 1.2541 | 0.6375 | 0.4806 | 3.0527 | 0.6375 | 0.6399 | 0.1691 | 0.1375 |
| 0.048 | 87.0 | 2175 | 1.2542 | 0.6375 | 0.4805 | 3.0525 | 0.6375 | 0.6394 | 0.1677 | 0.1376 |
| 0.048 | 88.0 | 2200 | 1.2542 | 0.6375 | 0.4806 | 3.0525 | 0.6375 | 0.6393 | 0.1651 | 0.1375 |
| 0.048 | 89.0 | 2225 | 1.2543 | 0.6375 | 0.4805 | 3.0525 | 0.6375 | 0.6394 | 0.1601 | 0.1375 |
| 0.048 | 90.0 | 2250 | 1.2543 | 0.6375 | 0.4805 | 3.0521 | 0.6375 | 0.6394 | 0.1661 | 0.1375 |
| 0.048 | 91.0 | 2275 | 1.2541 | 0.6375 | 0.4805 | 3.0521 | 0.6375 | 0.6394 | 0.1665 | 0.1376 |
| 0.048 | 92.0 | 2300 | 1.2542 | 0.6375 | 0.4805 | 3.0521 | 0.6375 | 0.6394 | 0.1638 | 0.1375 |
| 0.048 | 93.0 | 2325 | 1.2544 | 0.6375 | 0.4805 | 3.0518 | 0.6375 | 0.6394 | 0.1671 | 0.1376 |
| 0.048 | 94.0 | 2350 | 1.2543 | 0.6375 | 0.4805 | 3.0519 | 0.6375 | 0.6394 | 0.1601 | 0.1376 |
| 0.048 | 95.0 | 2375 | 1.2544 | 0.6375 | 0.4805 | 3.0518 | 0.6375 | 0.6394 | 0.1638 | 0.1376 |
| 0.048 | 96.0 | 2400 | 1.2544 | 0.6375 | 0.4805 | 3.0518 | 0.6375 | 0.6394 | 0.1638 | 0.1376 |
| 0.048 | 97.0 | 2425 | 1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1655 | 0.1376 |
| 0.048 | 98.0 | 2450 | 1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1638 | 0.1376 |
| 0.048 | 99.0 | 2475 | 1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1654 | 0.1376 |
| 0.0478 | 100.0 | 2500 | 1.2544 | 0.6375 | 0.4805 | 3.0517 | 0.6375 | 0.6394 | 0.1654 | 0.1376 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
modelId: MaitreHibou/ppo-SnowballTarget | author: MaitreHibou | last_modified: 2023-07-11T01:00:11Z | downloads: 16 | likes: 0 | library_name: ml-agents | tags: [ml-agents, tensorboard, onnx, SnowballTarget, deep-reinforcement-learning, reinforcement-learning, ML-Agents-SnowballTarget, region:us] | pipeline_tag: reinforcement-learning | createdAt: 2023-07-11T01:00:06Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MaitreHibou/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
modelId: hopkins/strict-small-4 | author: hopkins | last_modified: 2023-07-11T00:43:51Z | downloads: 5 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tensorboard, gpt2, text-generation, generated_from_trainer, dataset:generator, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: text-generation | createdAt: 2023-06-13T21:25:31Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: strict-small-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strict-small-4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 9
- mixed_precision_training: Native AMP
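Two of the settings above combine: the effective batch size is the per-device batch times the accumulation steps, and the cosine schedule warms up for 1000 steps before decaying. A sketch of both follows; the total optimizer step count (about 4900) is inferred from the results table below, where step 1000 lands at epoch 1.83, and is an assumption:

```python
import math

train_batch_size = 64
gradient_accumulation_steps = 8
# Effective batch per optimizer step (single device assumed)
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 512

def cosine_lr_with_warmup(step, total_steps=4900, warmup_steps=1000, base_lr=5e-4):
    """Linear warmup, then cosine decay from base_lr toward zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```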
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9925 | 1.83 | 1000 | 4.2033 |
| 3.7647 | 3.67 | 2000 | 3.9152 |
| 3.3569 | 5.5 | 3000 | 3.8495 |
| 3.0079 | 7.34 | 4000 | 3.8588 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
modelId: casque/CrystalMaidenv0.2 | author: casque | last_modified: 2023-07-11T00:42:48Z | downloads: 0 | likes: 0 | library_name: null | tags: [license:creativeml-openrail-m, region:us] | pipeline_tag: null | createdAt: 2023-07-11T00:39:34Z
---
license: creativeml-openrail-m
---
modelId: ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH | author: ALM-AHME | last_modified: 2023-07-11T00:40:15Z | downloads: 5 | likes: 1 | library_name: transformers | tags: [transformers, pytorch, tensorboard, swinv2, image-classification, generated_from_trainer, dataset:imagefolder, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: image-classification | createdAt: 2023-07-10T02:43:30Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-LungCancer-LC25000-AH
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0929 | 1.0 | 281 | 0.0919 | 0.9657 |
| 0.0908 | 2.0 | 562 | 0.0127 | 0.9967 |
| 0.0525 | 3.0 | 843 | 0.0133 | 0.9947 |
| 0.1301 | 4.0 | 1125 | 0.0270 | 0.9927 |
| 0.0624 | 5.0 | 1406 | 0.0064 | 0.9973 |
| 0.0506 | 6.0 | 1687 | 0.0025 | 0.999 |
| 0.0001 | 6.99 | 1967 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
modelId: foster123/test | author: foster123 | last_modified: 2023-07-11T00:39:29Z | downloads: 2 | likes: 0 | library_name: peft | tags: [peft, region:us] | pipeline_tag: null | createdAt: 2023-07-10T06:23:46Z
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
modelId: layoric/openllama-7b-qlora-orca | author: layoric | last_modified: 2023-07-11T00:31:19Z | downloads: 4 | likes: 0 | library_name: peft | tags: [peft, region:us] | pipeline_tag: null | createdAt: 2023-07-09T23:58:03Z
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
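This flattened list corresponds to a `transformers` `BitsAndBytesConfig`; a hedged reconstruction (field names follow the list above; the import path assumes transformers >= 4.30 with bitsandbytes installed):

```python
import torch
from transformers import BitsAndBytesConfig

# QLoRA-style 4-bit NF4 quantization config matching the fields listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

Passing this config to `from_pretrained(..., quantization_config=bnb_config)` would reproduce the quantized base model used for training.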
### Framework versions
- PEFT 0.4.0.dev0
modelId: bobobert4/poca-SoccerTwos | author: bobobert4 | last_modified: 2023-07-11T00:18:04Z | downloads: 5 | likes: 0 | library_name: ml-agents | tags: [ml-agents, tensorboard, onnx, SoccerTwos, deep-reinforcement-learning, reinforcement-learning, ML-Agents-SoccerTwos, region:us] | pipeline_tag: reinforcement-learning | createdAt: 2023-07-11T00:16:06Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: bobobert4/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
modelId: jz0214/sd-class-butterflies-64 | author: jz0214 | last_modified: 2023-07-10T23:52:24Z | downloads: 30 | likes: 0 | library_name: diffusers | tags: [diffusers, pytorch, unconditional-image-generation, diffusion-models-class, license:mit, diffusers:DDPMPipeline, region:us] | pipeline_tag: unconditional-image-generation | createdAt: 2023-07-10T23:50:42Z
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jz0214/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
modelId: shenyichong/ppo-LunarLander-v2 | author: shenyichong | last_modified: 2023-07-10T23:37:07Z | downloads: 0 | likes: 0 | library_name: stable-baselines3 | tags: [stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | pipeline_tag: reinforcement-learning | createdAt: 2023-07-10T23:36:50Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.84 +/- 7.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="shenyichong/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
modelId: JBJoyce/whisper-large-v2-finetuned-gtzan | author: JBJoyce | last_modified: 2023-07-10T23:32:02Z | downloads: 3 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, tensorboard, whisper, audio-classification, generated_from_trainer, dataset:marsyas/gtzan, license:apache-2.0, endpoints_compatible, region:us] | pipeline_tag: audio-classification | createdAt: 2023-07-10T19:35:24Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-large-v2-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7142
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0464 | 1.0 | 449 | 1.6761 | 0.42 |
| 0.9369 | 2.0 | 899 | 1.0398 | 0.74 |
| 1.0591 | 3.0 | 1348 | 1.0710 | 0.78 |
| 0.0632 | 4.0 | 1798 | 0.6605 | 0.86 |
| 0.0022 | 5.0 | 2247 | 1.0940 | 0.82 |
| 0.0004 | 6.0 | 2697 | 0.7089 | 0.92 |
| 0.0004 | 7.0 | 3146 | 0.6176 | 0.92 |
| 0.0005 | 8.0 | 3596 | 0.6688 | 0.9 |
| 0.0002 | 9.0 | 4045 | 0.7052 | 0.9 |
| 0.0002 | 9.99 | 4490 | 0.7142 | 0.9 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
modelId: jz0214/sd-class-butterflies-32 | author: jz0214 | last_modified: 2023-07-10T23:09:47Z | downloads: 30 | likes: 0 | library_name: diffusers | tags: [diffusers, pytorch, unconditional-image-generation, diffusion-models-class, license:mit, diffusers:DDPMPipeline, region:us] | pipeline_tag: unconditional-image-generation | createdAt: 2023-07-10T23:08:46Z
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jz0214/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
modelId: wesley7137/fal-7B-shard-quantum | author: wesley7137 | last_modified: 2023-07-10T22:53:05Z | downloads: 0 | likes: 0 | library_name: peft | tags: [peft, pytorch, RefinedWebModel, custom_code, region:us] | pipeline_tag: null | createdAt: 2023-07-10T22:04:14Z
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
modelId: jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.9 | author: jordyvl | last_modified: 2023-07-10T22:40:13Z | downloads: 161 | likes: 0 | library_name: transformers | tags: [transformers, pytorch, vit, image-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | pipeline_tag: image-classification | createdAt: 2023-07-10T22:00:19Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t5.0_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5373
- Accuracy: 0.85
- Brier Loss: 0.2432
- Nll: 1.1157
- F1 Micro: 0.85
- F1 Macro: 0.8450
- Ece: 0.1621
- Aurc: 0.0427
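The Brier loss reported here is the multiclass Brier score; under one common convention (sum of squared differences between the predicted probability vector and the one-hot target, averaged over examples), a per-example sketch looks like:

```python
def brier_score(probs, label):
    """Squared error between a predicted probability vector and the one-hot target."""
    return sum((p - (1.0 if i == label else 0.0)) ** 2 for i, p in enumerate(probs))
```

A confident correct prediction scores near 0; a uniform prediction over two classes scores 0.5.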
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 2.1036 | 0.215 | 0.8753 | 5.3195 | 0.2150 | 0.1264 | 0.2571 | 0.6923 |
| No log | 2.0 | 14 | 1.6952 | 0.405 | 0.7407 | 3.4929 | 0.405 | 0.2416 | 0.2907 | 0.4040 |
| No log | 3.0 | 21 | 1.1843 | 0.62 | 0.5633 | 2.0113 | 0.62 | 0.5725 | 0.2740 | 0.2014 |
| No log | 4.0 | 28 | 0.8797 | 0.71 | 0.4080 | 1.7043 | 0.7100 | 0.6683 | 0.2024 | 0.1125 |
| No log | 5.0 | 35 | 0.8570 | 0.715 | 0.3837 | 1.6476 | 0.715 | 0.7280 | 0.2189 | 0.1079 |
| No log | 6.0 | 42 | 0.7484 | 0.775 | 0.3285 | 1.5962 | 0.775 | 0.7668 | 0.1873 | 0.0816 |
| No log | 7.0 | 49 | 0.7337 | 0.79 | 0.3131 | 1.5377 | 0.79 | 0.7779 | 0.1904 | 0.0771 |
| No log | 8.0 | 56 | 0.6709 | 0.795 | 0.3012 | 1.2156 | 0.795 | 0.7776 | 0.1939 | 0.0761 |
| No log | 9.0 | 63 | 0.6901 | 0.795 | 0.3069 | 1.4725 | 0.795 | 0.7916 | 0.1882 | 0.0769 |
| No log | 10.0 | 70 | 0.7960 | 0.75 | 0.3586 | 1.4426 | 0.75 | 0.7406 | 0.1868 | 0.0976 |
| No log | 11.0 | 77 | 0.7489 | 0.77 | 0.3296 | 1.6202 | 0.7700 | 0.7794 | 0.2020 | 0.0878 |
| No log | 12.0 | 84 | 0.7068 | 0.785 | 0.3270 | 1.4127 | 0.785 | 0.7812 | 0.1922 | 0.0759 |
| No log | 13.0 | 91 | 0.6687 | 0.79 | 0.3050 | 1.3820 | 0.79 | 0.7945 | 0.1818 | 0.0625 |
| No log | 14.0 | 98 | 0.6052 | 0.79 | 0.2854 | 1.0602 | 0.79 | 0.7716 | 0.1702 | 0.0590 |
| No log | 15.0 | 105 | 0.6369 | 0.795 | 0.2959 | 1.0580 | 0.795 | 0.7953 | 0.1709 | 0.0603 |
| No log | 16.0 | 112 | 0.6204 | 0.81 | 0.2816 | 1.1886 | 0.81 | 0.8050 | 0.1657 | 0.0702 |
| No log | 17.0 | 119 | 0.5648 | 0.83 | 0.2475 | 1.2506 | 0.83 | 0.8241 | 0.1347 | 0.0612 |
| No log | 18.0 | 126 | 0.5849 | 0.83 | 0.2672 | 1.2245 | 0.83 | 0.8155 | 0.1646 | 0.0601 |
| No log | 19.0 | 133 | 0.5536 | 0.835 | 0.2475 | 1.0514 | 0.835 | 0.8254 | 0.1683 | 0.0531 |
| No log | 20.0 | 140 | 0.5689 | 0.835 | 0.2513 | 1.2369 | 0.835 | 0.8437 | 0.1722 | 0.0489 |
| No log | 21.0 | 147 | 0.5540 | 0.83 | 0.2485 | 1.2139 | 0.83 | 0.8165 | 0.1641 | 0.0608 |
| No log | 22.0 | 154 | 0.5352 | 0.835 | 0.2402 | 1.0108 | 0.835 | 0.8295 | 0.1408 | 0.0430 |
| No log | 23.0 | 161 | 0.5380 | 0.84 | 0.2403 | 1.2280 | 0.8400 | 0.8347 | 0.1405 | 0.0436 |
| No log | 24.0 | 168 | 0.5422 | 0.835 | 0.2471 | 1.0204 | 0.835 | 0.8324 | 0.1606 | 0.0445 |
| No log | 25.0 | 175 | 0.5342 | 0.85 | 0.2404 | 1.0767 | 0.85 | 0.8487 | 0.1469 | 0.0432 |
| No log | 26.0 | 182 | 0.5374 | 0.84 | 0.2429 | 1.0774 | 0.8400 | 0.8334 | 0.1420 | 0.0462 |
| No log | 27.0 | 189 | 0.5311 | 0.85 | 0.2395 | 1.0748 | 0.85 | 0.8487 | 0.1439 | 0.0446 |
| No log | 28.0 | 196 | 0.5298 | 0.85 | 0.2384 | 1.1337 | 0.85 | 0.8487 | 0.1570 | 0.0437 |
| No log | 29.0 | 203 | 0.5387 | 0.845 | 0.2435 | 1.1319 | 0.845 | 0.8424 | 0.1539 | 0.0458 |
| No log | 30.0 | 210 | 0.5361 | 0.85 | 0.2430 | 1.0648 | 0.85 | 0.8450 | 0.1679 | 0.0431 |
| No log | 31.0 | 217 | 0.5339 | 0.85 | 0.2413 | 1.0676 | 0.85 | 0.8487 | 0.1646 | 0.0428 |
| No log | 32.0 | 224 | 0.5345 | 0.85 | 0.2421 | 1.0709 | 0.85 | 0.8487 | 0.1476 | 0.0440 |
| No log | 33.0 | 231 | 0.5343 | 0.85 | 0.2421 | 1.1236 | 0.85 | 0.8450 | 0.1621 | 0.0431 |
| No log | 34.0 | 238 | 0.5353 | 0.845 | 0.2426 | 1.1244 | 0.845 | 0.8424 | 0.1710 | 0.0428 |
| No log | 35.0 | 245 | 0.5346 | 0.85 | 0.2423 | 1.0649 | 0.85 | 0.8487 | 0.1520 | 0.0440 |
| No log | 36.0 | 252 | 0.5356 | 0.855 | 0.2422 | 1.1241 | 0.855 | 0.8517 | 0.1814 | 0.0429 |
| No log | 37.0 | 259 | 0.5357 | 0.85 | 0.2426 | 1.1237 | 0.85 | 0.8450 | 0.1670 | 0.0425 |
| No log | 38.0 | 266 | 0.5356 | 0.845 | 0.2426 | 1.1226 | 0.845 | 0.8419 | 0.1607 | 0.0435 |
| No log | 39.0 | 273 | 0.5347 | 0.855 | 0.2420 | 1.0739 | 0.855 | 0.8517 | 0.1597 | 0.0427 |
| No log | 40.0 | 280 | 0.5356 | 0.855 | 0.2423 | 1.1203 | 0.855 | 0.8517 | 0.1676 | 0.0435 |
| No log | 41.0 | 287 | 0.5365 | 0.85 | 0.2431 | 1.1199 | 0.85 | 0.8450 | 0.1780 | 0.0429 |
| No log | 42.0 | 294 | 0.5356 | 0.85 | 0.2426 | 1.1173 | 0.85 | 0.8450 | 0.1653 | 0.0430 |
| No log | 43.0 | 301 | 0.5363 | 0.85 | 0.2428 | 1.1189 | 0.85 | 0.8450 | 0.1550 | 0.0435 |
| No log | 44.0 | 308 | 0.5345 | 0.85 | 0.2418 | 1.1193 | 0.85 | 0.8450 | 0.1590 | 0.0428 |
| No log | 45.0 | 315 | 0.5374 | 0.85 | 0.2435 | 1.1202 | 0.85 | 0.8450 | 0.1633 | 0.0435 |
| No log | 46.0 | 322 | 0.5355 | 0.85 | 0.2423 | 1.1183 | 0.85 | 0.8450 | 0.1564 | 0.0428 |
| No log | 47.0 | 329 | 0.5354 | 0.85 | 0.2425 | 1.1176 | 0.85 | 0.8450 | 0.1509 | 0.0429 |
| No log | 48.0 | 336 | 0.5369 | 0.85 | 0.2433 | 1.1177 | 0.85 | 0.8450 | 0.1517 | 0.0432 |
| No log | 49.0 | 343 | 0.5361 | 0.85 | 0.2428 | 1.1182 | 0.85 | 0.8450 | 0.1490 | 0.0428 |
| No log | 50.0 | 350 | 0.5364 | 0.85 | 0.2431 | 1.1179 | 0.85 | 0.8450 | 0.1654 | 0.0430 |
| No log | 51.0 | 357 | 0.5365 | 0.85 | 0.2428 | 1.1185 | 0.85 | 0.8450 | 0.1729 | 0.0432 |
| No log | 52.0 | 364 | 0.5364 | 0.85 | 0.2430 | 1.1165 | 0.85 | 0.8450 | 0.1614 | 0.0429 |
| No log | 53.0 | 371 | 0.5362 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1694 | 0.0430 |
| No log | 54.0 | 378 | 0.5369 | 0.85 | 0.2432 | 1.1170 | 0.85 | 0.8450 | 0.1597 | 0.0432 |
| No log | 55.0 | 385 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1670 | 0.0429 |
| No log | 56.0 | 392 | 0.5367 | 0.85 | 0.2430 | 1.1180 | 0.85 | 0.8450 | 0.1619 | 0.0430 |
| No log | 57.0 | 399 | 0.5364 | 0.85 | 0.2429 | 1.1163 | 0.85 | 0.8450 | 0.1649 | 0.0429 |
| No log | 58.0 | 406 | 0.5364 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1611 | 0.0429 |
| No log | 59.0 | 413 | 0.5365 | 0.85 | 0.2428 | 1.1163 | 0.85 | 0.8450 | 0.1591 | 0.0429 |
| No log | 60.0 | 420 | 0.5364 | 0.85 | 0.2429 | 1.1155 | 0.85 | 0.8450 | 0.1588 | 0.0429 |
| No log | 61.0 | 427 | 0.5370 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1772 | 0.0432 |
| No log | 62.0 | 434 | 0.5367 | 0.85 | 0.2429 | 1.1167 | 0.85 | 0.8450 | 0.1622 | 0.0429 |
| No log | 63.0 | 441 | 0.5362 | 0.85 | 0.2428 | 1.1162 | 0.85 | 0.8450 | 0.1503 | 0.0428 |
| No log | 64.0 | 448 | 0.5372 | 0.85 | 0.2433 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0432 |
| No log | 65.0 | 455 | 0.5371 | 0.85 | 0.2431 | 1.1162 | 0.85 | 0.8450 | 0.1499 | 0.0429 |
| No log | 66.0 | 462 | 0.5367 | 0.85 | 0.2430 | 1.1160 | 0.85 | 0.8450 | 0.1591 | 0.0427 |
| No log | 67.0 | 469 | 0.5367 | 0.85 | 0.2430 | 1.1164 | 0.85 | 0.8450 | 0.1562 | 0.0428 |
| No log | 68.0 | 476 | 0.5368 | 0.85 | 0.2430 | 1.1168 | 0.85 | 0.8450 | 0.1556 | 0.0427 |
| No log | 69.0 | 483 | 0.5368 | 0.85 | 0.2431 | 1.1158 | 0.85 | 0.8450 | 0.1593 | 0.0428 |
| No log | 70.0 | 490 | 0.5372 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1628 | 0.0428 |
| No log | 71.0 | 497 | 0.5371 | 0.85 | 0.2432 | 1.1163 | 0.85 | 0.8450 | 0.1599 | 0.0429 |
| 0.1708 | 72.0 | 504 | 0.5370 | 0.85 | 0.2430 | 1.1161 | 0.85 | 0.8450 | 0.1559 | 0.0430 |
| 0.1708 | 73.0 | 511 | 0.5372 | 0.85 | 0.2433 | 1.1154 | 0.85 | 0.8450 | 0.1556 | 0.0428 |
| 0.1708 | 74.0 | 518 | 0.5370 | 0.85 | 0.2429 | 1.1165 | 0.85 | 0.8450 | 0.1540 | 0.0428 |
| 0.1708 | 75.0 | 525 | 0.5371 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1616 | 0.0427 |
| 0.1708 | 76.0 | 532 | 0.5369 | 0.85 | 0.2431 | 1.1161 | 0.85 | 0.8450 | 0.1619 | 0.0427 |
| 0.1708 | 77.0 | 539 | 0.5369 | 0.85 | 0.2430 | 1.1156 | 0.85 | 0.8450 | 0.1623 | 0.0429 |
| 0.1708 | 78.0 | 546 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1619 | 0.0427 |
| 0.1708 | 79.0 | 553 | 0.5375 | 0.85 | 0.2433 | 1.1162 | 0.85 | 0.8450 | 0.1688 | 0.0429 |
| 0.1708 | 80.0 | 560 | 0.5372 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1623 | 0.0429 |
| 0.1708 | 81.0 | 567 | 0.5373 | 0.85 | 0.2432 | 1.1162 | 0.85 | 0.8450 | 0.1620 | 0.0428 |
| 0.1708 | 82.0 | 574 | 0.5374 | 0.85 | 0.2433 | 1.1160 | 0.85 | 0.8450 | 0.1622 | 0.0428 |
| 0.1708 | 83.0 | 581 | 0.5372 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1622 | 0.0428 |
| 0.1708 | 84.0 | 588 | 0.5371 | 0.85 | 0.2431 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 85.0 | 595 | 0.5372 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1687 | 0.0426 |
| 0.1708 | 86.0 | 602 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1619 | 0.0426 |
| 0.1708 | 87.0 | 609 | 0.5374 | 0.85 | 0.2432 | 1.1159 | 0.85 | 0.8450 | 0.1687 | 0.0428 |
| 0.1708 | 88.0 | 616 | 0.5373 | 0.85 | 0.2432 | 1.1160 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 89.0 | 623 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 90.0 | 630 | 0.5373 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 91.0 | 637 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 92.0 | 644 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 93.0 | 651 | 0.5372 | 0.85 | 0.2432 | 1.1156 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 94.0 | 658 | 0.5373 | 0.85 | 0.2432 | 1.1158 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 95.0 | 665 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 96.0 | 672 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 97.0 | 679 | 0.5372 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1620 | 0.0427 |
| 0.1708 | 98.0 | 686 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 99.0 | 693 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
| 0.1708 | 100.0 | 700 | 0.5373 | 0.85 | 0.2432 | 1.1157 | 0.85 | 0.8450 | 0.1621 | 0.0427 |
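The Brier loss reported in the tables above is the mean squared distance between the predicted class-probability vector and the one-hot true label. A minimal stdlib sketch on made-up toy data (the probabilities below are illustrative, not this model's outputs):

```python
# Toy illustration of the multi-class Brier loss reported above.
# The probabilities and labels here are made up, not model outputs.

def brier_loss(probs, labels, num_classes):
    """Mean over samples of the squared distance between the
    predicted distribution and the one-hot true label."""
    total = 0.0
    for p, y in zip(probs, labels):
        one_hot = [1.0 if c == y else 0.0 for c in range(num_classes)]
        total += sum((pc - oc) ** 2 for pc, oc in zip(p, one_hot))
    return total / len(probs)

# Two samples over three classes: one confident-correct, one wrong.
probs = [[0.9, 0.05, 0.05], [0.2, 0.7, 0.1]]
labels = [0, 2]
print(round(brier_loss(probs, labels, 3), 4))  # → 0.6775
```

A perfectly confident correct prediction scores 0; a confident wrong one approaches 2, which is why the validation Brier loss above shrinks as accuracy rises.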
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
maidacundo/falcon_qlora_sql_r2
|
maidacundo
| 2023-07-10T22:30:14Z | 0 | 0 | null |
[
"generated_from_trainer",
"dataset:spider",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-07-10T09:40:03Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
datasets:
- spider
model-index:
- name: falcon_qlora_sql_r2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_qlora_sql_r2
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the spider dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 43.7
- num_epochs: 3
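The linear schedule with warmup above can be sketched in plain Python. The total step count of 1314 is an assumption (the results table below runs to step 1300), and the logged 43.7 warmup steps are rounded to 44 for illustration:

```python
# Sketch of a linear LR schedule with warmup, as configured above.
# total_steps=1314 is an assumption; warmup of 43.7 steps is rounded to 44.

def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=44, total_steps=1314):
    if step < warmup_steps:
        # Ramp linearly from 0 up to base_lr during warmup.
        return base_lr * step / warmup_steps
    # Then decay linearly back to 0 by the end of training.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(22))    # halfway through warmup
print(linear_warmup_lr(44))    # peak learning rate
print(linear_warmup_lr(1314))  # end of training
```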
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2993 | 0.23 | 100 | 0.2863 |
| 0.8003 | 0.46 | 200 | 0.3358 |
| 0.1872 | 0.68 | 300 | 0.2424 |
| 0.1267 | 0.91 | 400 | 0.2362 |
| 0.2214 | 1.14 | 500 | 0.2564 |
| 0.2885 | 1.37 | 600 | 0.2187 |
| 0.1654 | 1.6 | 700 | 0.1988 |
| 0.1633 | 1.83 | 800 | 0.2062 |
| 0.0381 | 2.05 | 900 | 0.1868 |
| 0.0633 | 2.28 | 1000 | 0.1767 |
| 0.163 | 2.51 | 1100 | 0.1861 |
| 0.1718 | 2.74 | 1200 | 0.1875 |
| 0.1743 | 2.97 | 1300 | 0.1854 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_MSE
|
jordyvl
| 2023-07-10T22:30:03Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T21:13:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4673
- Accuracy: 0.6425
- Brier Loss: 0.4763
- Nll: 3.0680
- F1 Micro: 0.6425
- F1 Macro: 0.6485
- Ece: 0.1946
- Aurc: 0.1381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 4.4851 | 0.06 | 0.9565 | 13.8276 | 0.06 | 0.0556 | 0.1688 | 0.9385 |
| No log | 2.0 | 50 | 3.5619 | 0.3775 | 0.7827 | 6.2649 | 0.3775 | 0.3611 | 0.2331 | 0.3882 |
| No log | 3.0 | 75 | 2.8990 | 0.5025 | 0.6453 | 4.7376 | 0.5025 | 0.4858 | 0.1689 | 0.2658 |
| No log | 4.0 | 100 | 2.5972 | 0.515 | 0.5980 | 4.4210 | 0.515 | 0.4895 | 0.1605 | 0.2249 |
| No log | 5.0 | 125 | 2.4353 | 0.56 | 0.5762 | 3.4885 | 0.56 | 0.5566 | 0.1548 | 0.2100 |
| No log | 6.0 | 150 | 2.4157 | 0.5475 | 0.5864 | 3.8261 | 0.5475 | 0.5323 | 0.1837 | 0.2167 |
| No log | 7.0 | 175 | 2.1786 | 0.6075 | 0.5203 | 3.4565 | 0.6075 | 0.6103 | 0.1403 | 0.1670 |
| No log | 8.0 | 200 | 2.1082 | 0.63 | 0.5040 | 3.3570 | 0.63 | 0.6246 | 0.1580 | 0.1530 |
| No log | 9.0 | 225 | 2.0472 | 0.625 | 0.5042 | 3.8572 | 0.625 | 0.6184 | 0.1552 | 0.1530 |
| No log | 10.0 | 250 | 2.0589 | 0.6025 | 0.5468 | 3.5723 | 0.6025 | 0.5982 | 0.1781 | 0.1785 |
| No log | 11.0 | 275 | 1.8965 | 0.65 | 0.4755 | 3.4466 | 0.65 | 0.6497 | 0.1605 | 0.1475 |
| No log | 12.0 | 300 | 1.9014 | 0.6325 | 0.5066 | 3.0881 | 0.6325 | 0.6359 | 0.1658 | 0.1591 |
| No log | 13.0 | 325 | 1.7904 | 0.6175 | 0.5162 | 3.4673 | 0.6175 | 0.6141 | 0.1525 | 0.1598 |
| No log | 14.0 | 350 | 1.8624 | 0.625 | 0.5173 | 3.6824 | 0.625 | 0.6179 | 0.1567 | 0.1624 |
| No log | 15.0 | 375 | 1.7083 | 0.6625 | 0.4817 | 3.1296 | 0.6625 | 0.6686 | 0.1651 | 0.1405 |
| No log | 16.0 | 400 | 1.8848 | 0.59 | 0.5478 | 4.3761 | 0.59 | 0.5913 | 0.2083 | 0.1696 |
| No log | 17.0 | 425 | 1.7238 | 0.6125 | 0.5229 | 3.1232 | 0.6125 | 0.6052 | 0.1833 | 0.1553 |
| No log | 18.0 | 450 | 1.7126 | 0.625 | 0.5152 | 2.9267 | 0.625 | 0.6284 | 0.1747 | 0.1565 |
| No log | 19.0 | 475 | 1.6459 | 0.6275 | 0.5024 | 2.9078 | 0.6275 | 0.6219 | 0.1766 | 0.1527 |
| 1.0542 | 20.0 | 500 | 1.6029 | 0.6275 | 0.4855 | 3.0931 | 0.6275 | 0.6316 | 0.1720 | 0.1414 |
| 1.0542 | 21.0 | 525 | 1.6566 | 0.6525 | 0.4847 | 3.0998 | 0.6525 | 0.6479 | 0.1558 | 0.1438 |
| 1.0542 | 22.0 | 550 | 1.6169 | 0.645 | 0.4894 | 3.0081 | 0.645 | 0.6471 | 0.1687 | 0.1400 |
| 1.0542 | 23.0 | 575 | 1.5322 | 0.6525 | 0.4557 | 3.3587 | 0.6525 | 0.6520 | 0.1428 | 0.1247 |
| 1.0542 | 24.0 | 600 | 1.5991 | 0.6475 | 0.4787 | 2.9349 | 0.6475 | 0.6444 | 0.1580 | 0.1450 |
| 1.0542 | 25.0 | 625 | 1.5625 | 0.6375 | 0.4926 | 3.0245 | 0.6375 | 0.6378 | 0.1641 | 0.1433 |
| 1.0542 | 26.0 | 650 | 1.5366 | 0.64 | 0.4884 | 3.3388 | 0.64 | 0.6461 | 0.1595 | 0.1453 |
| 1.0542 | 27.0 | 675 | 1.5686 | 0.65 | 0.4765 | 3.5120 | 0.65 | 0.6504 | 0.1625 | 0.1359 |
| 1.0542 | 28.0 | 700 | 1.5562 | 0.6475 | 0.4817 | 3.0348 | 0.6475 | 0.6488 | 0.1459 | 0.1388 |
| 1.0542 | 29.0 | 725 | 1.5213 | 0.6475 | 0.4719 | 3.2628 | 0.6475 | 0.6475 | 0.1634 | 0.1326 |
| 1.0542 | 30.0 | 750 | 1.5492 | 0.6675 | 0.4730 | 3.1693 | 0.6675 | 0.6679 | 0.1469 | 0.1415 |
| 1.0542 | 31.0 | 775 | 1.5311 | 0.65 | 0.4896 | 3.0881 | 0.65 | 0.6504 | 0.1815 | 0.1380 |
| 1.0542 | 32.0 | 800 | 1.5556 | 0.6475 | 0.4821 | 3.1829 | 0.6475 | 0.6491 | 0.1640 | 0.1405 |
| 1.0542 | 33.0 | 825 | 1.5471 | 0.6375 | 0.4846 | 3.4190 | 0.6375 | 0.6407 | 0.1628 | 0.1415 |
| 1.0542 | 34.0 | 850 | 1.4809 | 0.6575 | 0.4714 | 2.9136 | 0.6575 | 0.6612 | 0.1729 | 0.1338 |
| 1.0542 | 35.0 | 875 | 1.5256 | 0.66 | 0.4773 | 3.2303 | 0.66 | 0.6650 | 0.1746 | 0.1368 |
| 1.0542 | 36.0 | 900 | 1.4929 | 0.6675 | 0.4671 | 3.2360 | 0.6675 | 0.6698 | 0.1698 | 0.1309 |
| 1.0542 | 37.0 | 925 | 1.4923 | 0.645 | 0.4880 | 3.0567 | 0.645 | 0.6564 | 0.1764 | 0.1395 |
| 1.0542 | 38.0 | 950 | 1.5038 | 0.665 | 0.4672 | 3.2116 | 0.665 | 0.6661 | 0.1588 | 0.1343 |
| 1.0542 | 39.0 | 975 | 1.4708 | 0.6625 | 0.4669 | 3.1420 | 0.6625 | 0.6675 | 0.1683 | 0.1301 |
| 0.0522 | 40.0 | 1000 | 1.5153 | 0.6475 | 0.4865 | 3.1796 | 0.6475 | 0.6447 | 0.1639 | 0.1400 |
| 0.0522 | 41.0 | 1025 | 1.4705 | 0.6575 | 0.4642 | 3.2196 | 0.6575 | 0.6626 | 0.1440 | 0.1308 |
| 0.0522 | 42.0 | 1050 | 1.4844 | 0.6575 | 0.4722 | 3.2445 | 0.6575 | 0.6595 | 0.1746 | 0.1328 |
| 0.0522 | 43.0 | 1075 | 1.4957 | 0.6425 | 0.4828 | 3.1456 | 0.6425 | 0.6468 | 0.1499 | 0.1417 |
| 0.0522 | 44.0 | 1100 | 1.5179 | 0.645 | 0.4910 | 3.3921 | 0.645 | 0.6470 | 0.1861 | 0.1433 |
| 0.0522 | 45.0 | 1125 | 1.4878 | 0.6425 | 0.4839 | 3.2139 | 0.6425 | 0.6478 | 0.1720 | 0.1403 |
| 0.0522 | 46.0 | 1150 | 1.4666 | 0.655 | 0.4741 | 2.9333 | 0.655 | 0.6601 | 0.1813 | 0.1347 |
| 0.0522 | 47.0 | 1175 | 1.4954 | 0.6575 | 0.4776 | 3.2102 | 0.6575 | 0.6604 | 0.1842 | 0.1390 |
| 0.0522 | 48.0 | 1200 | 1.4976 | 0.645 | 0.4856 | 3.1539 | 0.645 | 0.6493 | 0.1549 | 0.1407 |
| 0.0522 | 49.0 | 1225 | 1.4772 | 0.64 | 0.4780 | 2.9845 | 0.64 | 0.6445 | 0.1826 | 0.1388 |
| 0.0522 | 50.0 | 1250 | 1.4584 | 0.65 | 0.4703 | 3.0776 | 0.65 | 0.6533 | 0.1685 | 0.1352 |
| 0.0522 | 51.0 | 1275 | 1.4828 | 0.6325 | 0.4844 | 3.1425 | 0.6325 | 0.6377 | 0.1641 | 0.1409 |
| 0.0522 | 52.0 | 1300 | 1.4676 | 0.6525 | 0.4737 | 3.1483 | 0.6525 | 0.6565 | 0.1773 | 0.1358 |
| 0.0522 | 53.0 | 1325 | 1.4675 | 0.6475 | 0.4791 | 3.1411 | 0.6475 | 0.6515 | 0.1820 | 0.1388 |
| 0.0522 | 54.0 | 1350 | 1.4724 | 0.645 | 0.4764 | 3.0744 | 0.645 | 0.6499 | 0.1847 | 0.1382 |
| 0.0522 | 55.0 | 1375 | 1.4689 | 0.6425 | 0.4769 | 3.2256 | 0.6425 | 0.6476 | 0.1839 | 0.1376 |
| 0.0522 | 56.0 | 1400 | 1.4660 | 0.6425 | 0.4760 | 2.9907 | 0.6425 | 0.6479 | 0.1906 | 0.1378 |
| 0.0522 | 57.0 | 1425 | 1.4663 | 0.645 | 0.4757 | 3.0722 | 0.645 | 0.6514 | 0.1705 | 0.1367 |
| 0.0522 | 58.0 | 1450 | 1.4678 | 0.65 | 0.4770 | 3.0710 | 0.65 | 0.6546 | 0.1794 | 0.1371 |
| 0.0522 | 59.0 | 1475 | 1.4717 | 0.64 | 0.4786 | 3.0737 | 0.64 | 0.6455 | 0.1889 | 0.1392 |
| 0.0064 | 60.0 | 1500 | 1.4691 | 0.645 | 0.4768 | 3.0688 | 0.645 | 0.6499 | 0.1815 | 0.1378 |
| 0.0064 | 61.0 | 1525 | 1.4689 | 0.64 | 0.4767 | 3.0688 | 0.64 | 0.6452 | 0.1846 | 0.1382 |
| 0.0064 | 62.0 | 1550 | 1.4689 | 0.64 | 0.4770 | 3.0674 | 0.64 | 0.6455 | 0.1937 | 0.1383 |
| 0.0064 | 63.0 | 1575 | 1.4687 | 0.6425 | 0.4767 | 3.0700 | 0.6425 | 0.6485 | 0.1897 | 0.1381 |
| 0.0064 | 64.0 | 1600 | 1.4674 | 0.6425 | 0.4764 | 3.0675 | 0.6425 | 0.6472 | 0.1855 | 0.1375 |
| 0.0064 | 65.0 | 1625 | 1.4681 | 0.6425 | 0.4766 | 3.0694 | 0.6425 | 0.6485 | 0.1917 | 0.1381 |
| 0.0064 | 66.0 | 1650 | 1.4681 | 0.6425 | 0.4766 | 3.0687 | 0.6425 | 0.6472 | 0.1905 | 0.1378 |
| 0.0064 | 67.0 | 1675 | 1.4667 | 0.645 | 0.4757 | 3.0681 | 0.645 | 0.6505 | 0.1899 | 0.1375 |
| 0.0064 | 68.0 | 1700 | 1.4683 | 0.6425 | 0.4771 | 3.0686 | 0.6425 | 0.6474 | 0.1871 | 0.1379 |
| 0.0064 | 69.0 | 1725 | 1.4672 | 0.64 | 0.4760 | 3.0679 | 0.64 | 0.6455 | 0.1932 | 0.1380 |
| 0.0064 | 70.0 | 1750 | 1.4673 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1955 | 0.1376 |
| 0.0064 | 71.0 | 1775 | 1.4676 | 0.645 | 0.4763 | 3.0680 | 0.645 | 0.6505 | 0.1921 | 0.1376 |
| 0.0064 | 72.0 | 1800 | 1.4674 | 0.6425 | 0.4763 | 3.0683 | 0.6425 | 0.6474 | 0.1946 | 0.1376 |
| 0.0064 | 73.0 | 1825 | 1.4675 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6474 | 0.1946 | 0.1377 |
| 0.0064 | 74.0 | 1850 | 1.4674 | 0.6425 | 0.4763 | 3.0682 | 0.6425 | 0.6485 | 0.1945 | 0.1380 |
| 0.0064 | 75.0 | 1875 | 1.4674 | 0.64 | 0.4763 | 3.0680 | 0.64 | 0.6455 | 0.1960 | 0.1380 |
| 0.0064 | 76.0 | 1900 | 1.4675 | 0.64 | 0.4764 | 3.0682 | 0.64 | 0.6455 | 0.1972 | 0.1381 |
| 0.0064 | 77.0 | 1925 | 1.4675 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1947 | 0.1380 |
| 0.0064 | 78.0 | 1950 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0064 | 79.0 | 1975 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6474 | 0.1935 | 0.1376 |
| 0.0 | 80.0 | 2000 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1380 |
| 0.0 | 81.0 | 2025 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1946 | 0.1380 |
| 0.0 | 82.0 | 2050 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1935 | 0.1380 |
| 0.0 | 83.0 | 2075 | 1.4674 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 84.0 | 2100 | 1.4674 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0 | 85.0 | 2125 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 86.0 | 2150 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 87.0 | 2175 | 1.4673 | 0.6425 | 0.4763 | 3.0681 | 0.6425 | 0.6485 | 0.1958 | 0.1381 |
| 0.0 | 88.0 | 2200 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 89.0 | 2225 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 90.0 | 2250 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 91.0 | 2275 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 92.0 | 2300 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 93.0 | 2325 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 94.0 | 2350 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1909 | 0.1381 |
| 0.0 | 95.0 | 2375 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 96.0 | 2400 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 97.0 | 2425 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 98.0 | 2450 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 99.0 | 2475 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
| 0.0 | 100.0 | 2500 | 1.4673 | 0.6425 | 0.4763 | 3.0680 | 0.6425 | 0.6485 | 0.1946 | 0.1381 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
MnLgt/swivel_inversion
|
MnLgt
| 2023-07-10T22:11:42Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-10T22:11:41Z |
---
license: mit
---
### swivel_inversion on Stable Diffusion
This is the `<swivel-chair>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:

























|
carova/ppo-LunarLander-v2
|
carova
| 2023-07-10T22:11:10Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T17:56:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 233.36 +/- 68.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The zip filename below is an assumption, not confirmed by this card.
checkpoint = load_from_hub("carova/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
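The mean_reward figure above (233.36 +/- 68.33) is conventionally the mean and standard deviation of per-episode returns. A stdlib sketch on made-up returns (the population standard deviation here is an assumption about how the figure was computed):

```python
import statistics

# Made-up per-episode returns; the real figure above came from
# evaluating the trained agent, not from these numbers.
episode_rewards = [150.0, 210.0, 260.0, 300.0, 180.0]

mean_reward = statistics.mean(episode_rewards)
std_reward = statistics.pstdev(episode_rewards)  # population std (ddof=0)
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")  # → 220.00 +/- 54.04
```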
|
TheBloke/airochronos-33B-GGML
|
TheBloke
| 2023-07-10T22:07:18Z | 0 | 18 | null |
[
"license:other",
"region:us"
] | null | 2023-07-10T21:14:18Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Henk717's Airochronos 33B GGML
These files are GGML format model files for [Henk717's Airochronos 33B](https://huggingface.co/Henk717/airochronos-33B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airochronos-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airochronos-33B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/airochronos-33B)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airochronos-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| airochronos-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| airochronos-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airochronos-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airochronos-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| airochronos-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| airochronos-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| airochronos-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| airochronos-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airochronos-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airochronos-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airochronos-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airochronos-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| airochronos-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
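The "Max RAM required" column above appears to be the file size plus a fixed ~2.5 GB of runtime overhead. A quick check in Python (the 2.5 GB constant is inferred from the table rows, not documented anywhere):

```python
# The table's "Max RAM required" looks like file size + ~2.5 GB of
# overhead; this constant is inferred from the rows, not documented.
OVERHEAD_GB = 2.50

def max_ram_gb(file_size_gb):
    return round(file_size_gb + OVERHEAD_GB, 2)

# Spot-check a few rows from the table above.
for size, expected in [(13.71, 16.21), (18.30, 20.80), (34.56, 37.06)]:
    assert max_ram_gb(size) == expected
print("all rows match")  # → all rows match
```

Offloading layers to the GPU with `-ngl` shifts part of this footprint from RAM to VRAM, as the note above says.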
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m airochronos-33b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Henk717's Airochronos 33B
After the initial experiment with chronoboros-33B, it was evident that the merge was too unpredictable to be useful. Testing the individual models made it clear that the bias should be weighted towards Chronos.
This is the new release of the merge with 75% chronos 33B, and 25% airoboros-1.4 33B.
Model has been tested with the Alpaca prompting format combined with KoboldAI Lite's instruct and chat modes, as well as regular story writing.
It has also been tested on basic reasoning tasks, but has not seen much testing for factual information.
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
|
jordyvl
| 2023-07-10T21:59:33Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T21:19:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t5.0_a0.7
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4918
- Accuracy: 0.85
- Brier Loss: 0.2583
- Nll: 1.0894
- F1 Micro: 0.85
- F1 Macro: 0.8374
- Ece: 0.1917
- Aurc: 0.0470
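The multi-class Brier loss reported above is the mean squared distance between the predicted probability vector and the one-hot label. A minimal sketch of the metric (illustrative only, not the evaluation code used for this card):

```python
import numpy as np

def brier_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean squared distance between predicted probabilities and one-hot labels."""
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

# A perfectly confident correct prediction scores 0.0;
# a uniform prediction over 10 classes scores 0.9.
print(round(brier_loss(np.full((1, 10), 0.1), np.array([0])), 6))  # 0.9
```

Lower is better; the 0.2583 reported above sits between these two extremes.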
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
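The list above maps onto a standard `transformers` `TrainingArguments` setup; a hedged sketch (the `output_dir` is an assumption, and this is not the original training script):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; Adam's betas/epsilon are the defaults.
args = TrainingArguments(
    output_dir="vit-small_tobacco3482_kd_CEKD",  # illustrative name
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```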
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.8329 | 0.225 | 0.8761 | 5.2731 | 0.225 | 0.1384 | 0.2607 | 0.6977 |
| No log | 2.0 | 14 | 1.4785 | 0.405 | 0.7460 | 3.4067 | 0.405 | 0.2289 | 0.3097 | 0.4085 |
| No log | 3.0 | 21 | 1.0406 | 0.6 | 0.5725 | 1.8722 | 0.6 | 0.5345 | 0.3050 | 0.2010 |
| No log | 4.0 | 28 | 0.8087 | 0.725 | 0.4192 | 1.6096 | 0.7250 | 0.6767 | 0.2345 | 0.1149 |
| No log | 5.0 | 35 | 0.7666 | 0.735 | 0.3731 | 1.6189 | 0.735 | 0.7350 | 0.2377 | 0.1011 |
| No log | 6.0 | 42 | 0.6960 | 0.78 | 0.3413 | 1.5230 | 0.78 | 0.7592 | 0.2295 | 0.0868 |
| No log | 7.0 | 49 | 0.6490 | 0.805 | 0.3110 | 1.4861 | 0.805 | 0.7864 | 0.2138 | 0.0785 |
| No log | 8.0 | 56 | 0.6238 | 0.795 | 0.3069 | 1.2098 | 0.795 | 0.7816 | 0.2065 | 0.0698 |
| No log | 9.0 | 63 | 0.5755 | 0.83 | 0.2866 | 1.1943 | 0.83 | 0.8117 | 0.1937 | 0.0694 |
| No log | 10.0 | 70 | 0.6360 | 0.77 | 0.3164 | 1.2608 | 0.7700 | 0.7550 | 0.1785 | 0.0677 |
| No log | 11.0 | 77 | 0.6548 | 0.785 | 0.3335 | 1.4895 | 0.785 | 0.7707 | 0.2281 | 0.0885 |
| No log | 12.0 | 84 | 0.5847 | 0.805 | 0.3002 | 1.4317 | 0.805 | 0.7807 | 0.2264 | 0.0756 |
| No log | 13.0 | 91 | 0.5956 | 0.81 | 0.3040 | 1.2590 | 0.81 | 0.7928 | 0.2241 | 0.0556 |
| No log | 14.0 | 98 | 0.5692 | 0.81 | 0.3025 | 1.2119 | 0.81 | 0.8043 | 0.2235 | 0.0665 |
| No log | 15.0 | 105 | 0.5223 | 0.83 | 0.2762 | 1.1162 | 0.83 | 0.8221 | 0.1798 | 0.0552 |
| No log | 16.0 | 112 | 0.4981 | 0.84 | 0.2523 | 1.0864 | 0.8400 | 0.8372 | 0.1868 | 0.0396 |
| No log | 17.0 | 119 | 0.5207 | 0.805 | 0.2741 | 1.0416 | 0.805 | 0.7897 | 0.1960 | 0.0551 |
| No log | 18.0 | 126 | 0.5165 | 0.84 | 0.2723 | 1.1596 | 0.8400 | 0.8325 | 0.1942 | 0.0506 |
| No log | 19.0 | 133 | 0.4979 | 0.845 | 0.2573 | 1.2329 | 0.845 | 0.8297 | 0.1825 | 0.0444 |
| No log | 20.0 | 140 | 0.4953 | 0.855 | 0.2565 | 1.1213 | 0.855 | 0.8442 | 0.1844 | 0.0474 |
| No log | 21.0 | 147 | 0.5296 | 0.82 | 0.2792 | 1.0000 | 0.82 | 0.8218 | 0.1768 | 0.0523 |
| No log | 22.0 | 154 | 0.5027 | 0.835 | 0.2625 | 0.9926 | 0.835 | 0.8238 | 0.2035 | 0.0481 |
| No log | 23.0 | 161 | 0.5027 | 0.84 | 0.2642 | 1.0500 | 0.8400 | 0.8299 | 0.1616 | 0.0482 |
| No log | 24.0 | 168 | 0.5017 | 0.84 | 0.2616 | 1.0560 | 0.8400 | 0.8314 | 0.1819 | 0.0497 |
| No log | 25.0 | 175 | 0.4942 | 0.85 | 0.2594 | 1.1003 | 0.85 | 0.8407 | 0.1793 | 0.0483 |
| No log | 26.0 | 182 | 0.4943 | 0.83 | 0.2586 | 1.0436 | 0.83 | 0.8140 | 0.1869 | 0.0518 |
| No log | 27.0 | 189 | 0.4950 | 0.835 | 0.2613 | 1.0817 | 0.835 | 0.8224 | 0.2039 | 0.0504 |
| No log | 28.0 | 196 | 0.4957 | 0.85 | 0.2599 | 1.1109 | 0.85 | 0.8309 | 0.2058 | 0.0485 |
| No log | 29.0 | 203 | 0.4956 | 0.845 | 0.2599 | 1.0914 | 0.845 | 0.8304 | 0.1916 | 0.0492 |
| No log | 30.0 | 210 | 0.4893 | 0.84 | 0.2561 | 1.0890 | 0.8400 | 0.8214 | 0.2071 | 0.0482 |
| No log | 31.0 | 217 | 0.4920 | 0.835 | 0.2587 | 1.0907 | 0.835 | 0.8270 | 0.2031 | 0.0482 |
| No log | 32.0 | 224 | 0.4927 | 0.83 | 0.2601 | 1.0879 | 0.83 | 0.8157 | 0.2093 | 0.0500 |
| No log | 33.0 | 231 | 0.4925 | 0.835 | 0.2593 | 1.0886 | 0.835 | 0.8270 | 0.1810 | 0.0484 |
| No log | 34.0 | 238 | 0.4909 | 0.845 | 0.2578 | 1.0871 | 0.845 | 0.8304 | 0.1916 | 0.0478 |
| No log | 35.0 | 245 | 0.4927 | 0.845 | 0.2591 | 1.0866 | 0.845 | 0.8378 | 0.1943 | 0.0473 |
| No log | 36.0 | 252 | 0.4919 | 0.85 | 0.2581 | 1.0891 | 0.85 | 0.8342 | 0.2193 | 0.0475 |
| No log | 37.0 | 259 | 0.4908 | 0.845 | 0.2579 | 1.0867 | 0.845 | 0.8346 | 0.2215 | 0.0474 |
| No log | 38.0 | 266 | 0.4929 | 0.85 | 0.2590 | 1.0873 | 0.85 | 0.8407 | 0.1884 | 0.0471 |
| No log | 39.0 | 273 | 0.4913 | 0.85 | 0.2584 | 1.0861 | 0.85 | 0.8374 | 0.1944 | 0.0474 |
| No log | 40.0 | 280 | 0.4933 | 0.835 | 0.2595 | 1.0871 | 0.835 | 0.8248 | 0.1893 | 0.0491 |
| No log | 41.0 | 287 | 0.4936 | 0.84 | 0.2599 | 1.0863 | 0.8400 | 0.8276 | 0.1860 | 0.0486 |
| No log | 42.0 | 294 | 0.4911 | 0.85 | 0.2580 | 1.0861 | 0.85 | 0.8374 | 0.2186 | 0.0474 |
| No log | 43.0 | 301 | 0.4915 | 0.85 | 0.2581 | 1.0860 | 0.85 | 0.8374 | 0.2023 | 0.0475 |
| No log | 44.0 | 308 | 0.4921 | 0.85 | 0.2586 | 1.0874 | 0.85 | 0.8374 | 0.2013 | 0.0477 |
| No log | 45.0 | 315 | 0.4915 | 0.85 | 0.2583 | 1.0862 | 0.85 | 0.8374 | 0.1941 | 0.0475 |
| No log | 46.0 | 322 | 0.4918 | 0.85 | 0.2584 | 1.0878 | 0.85 | 0.8374 | 0.1852 | 0.0473 |
| No log | 47.0 | 329 | 0.4916 | 0.85 | 0.2583 | 1.0873 | 0.85 | 0.8374 | 0.2089 | 0.0473 |
| No log | 48.0 | 336 | 0.4921 | 0.85 | 0.2586 | 1.0879 | 0.85 | 0.8374 | 0.2026 | 0.0477 |
| No log | 49.0 | 343 | 0.4918 | 0.845 | 0.2584 | 1.0884 | 0.845 | 0.8282 | 0.1963 | 0.0478 |
| No log | 50.0 | 350 | 0.4922 | 0.85 | 0.2587 | 1.0871 | 0.85 | 0.8374 | 0.2102 | 0.0474 |
| No log | 51.0 | 357 | 0.4920 | 0.85 | 0.2585 | 1.0879 | 0.85 | 0.8374 | 0.2095 | 0.0474 |
| No log | 52.0 | 364 | 0.4926 | 0.85 | 0.2589 | 1.0878 | 0.85 | 0.8374 | 0.2022 | 0.0477 |
| No log | 53.0 | 371 | 0.4920 | 0.85 | 0.2586 | 1.0888 | 0.85 | 0.8374 | 0.2027 | 0.0475 |
| No log | 54.0 | 378 | 0.4921 | 0.85 | 0.2586 | 1.0886 | 0.85 | 0.8374 | 0.2020 | 0.0474 |
| No log | 55.0 | 385 | 0.4921 | 0.85 | 0.2587 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0471 |
| No log | 56.0 | 392 | 0.4925 | 0.85 | 0.2589 | 1.0881 | 0.85 | 0.8374 | 0.1946 | 0.0473 |
| No log | 57.0 | 399 | 0.4917 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1932 | 0.0472 |
| No log | 58.0 | 406 | 0.4921 | 0.85 | 0.2586 | 1.0877 | 0.85 | 0.8374 | 0.1948 | 0.0476 |
| No log | 59.0 | 413 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1931 | 0.0472 |
| No log | 60.0 | 420 | 0.4918 | 0.85 | 0.2583 | 1.0882 | 0.85 | 0.8374 | 0.1945 | 0.0475 |
| No log | 61.0 | 427 | 0.4916 | 0.85 | 0.2582 | 1.0883 | 0.85 | 0.8374 | 0.1936 | 0.0472 |
| No log | 62.0 | 434 | 0.4920 | 0.85 | 0.2586 | 1.0882 | 0.85 | 0.8374 | 0.1942 | 0.0473 |
| No log | 63.0 | 441 | 0.4922 | 0.85 | 0.2587 | 1.0889 | 0.85 | 0.8374 | 0.1935 | 0.0473 |
| No log | 64.0 | 448 | 0.4921 | 0.85 | 0.2586 | 1.0885 | 0.85 | 0.8374 | 0.1848 | 0.0473 |
| No log | 65.0 | 455 | 0.4916 | 0.85 | 0.2582 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0474 |
| No log | 66.0 | 462 | 0.4917 | 0.85 | 0.2583 | 1.0883 | 0.85 | 0.8374 | 0.1849 | 0.0472 |
| No log | 67.0 | 469 | 0.4917 | 0.85 | 0.2584 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| No log | 68.0 | 476 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.2011 | 0.0471 |
| No log | 69.0 | 483 | 0.4918 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2007 | 0.0471 |
| No log | 70.0 | 490 | 0.4919 | 0.85 | 0.2584 | 1.0886 | 0.85 | 0.8374 | 0.1848 | 0.0474 |
| No log | 71.0 | 497 | 0.4920 | 0.85 | 0.2585 | 1.0888 | 0.85 | 0.8374 | 0.1940 | 0.0474 |
| 0.1824 | 72.0 | 504 | 0.4919 | 0.85 | 0.2584 | 1.0889 | 0.85 | 0.8374 | 0.2011 | 0.0471 |
| 0.1824 | 73.0 | 511 | 0.4917 | 0.85 | 0.2583 | 1.0887 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| 0.1824 | 74.0 | 518 | 0.4920 | 0.85 | 0.2585 | 1.0890 | 0.85 | 0.8374 | 0.1848 | 0.0472 |
| 0.1824 | 75.0 | 525 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1846 | 0.0472 |
| 0.1824 | 76.0 | 532 | 0.4918 | 0.85 | 0.2583 | 1.0889 | 0.85 | 0.8374 | 0.1930 | 0.0472 |
| 0.1824 | 77.0 | 539 | 0.4917 | 0.85 | 0.2582 | 1.0891 | 0.85 | 0.8374 | 0.2005 | 0.0472 |
| 0.1824 | 78.0 | 546 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1928 | 0.0472 |
| 0.1824 | 79.0 | 553 | 0.4920 | 0.85 | 0.2585 | 1.0893 | 0.85 | 0.8374 | 0.1845 | 0.0473 |
| 0.1824 | 80.0 | 560 | 0.4919 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1929 | 0.0473 |
| 0.1824 | 81.0 | 567 | 0.4920 | 0.85 | 0.2585 | 1.0892 | 0.85 | 0.8374 | 0.1925 | 0.0471 |
| 0.1824 | 82.0 | 574 | 0.4920 | 0.85 | 0.2585 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 83.0 | 581 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 84.0 | 588 | 0.4918 | 0.85 | 0.2584 | 1.0890 | 0.85 | 0.8374 | 0.1926 | 0.0471 |
| 0.1824 | 85.0 | 595 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 86.0 | 602 | 0.4918 | 0.85 | 0.2584 | 1.0893 | 0.85 | 0.8374 | 0.1927 | 0.0472 |
| 0.1824 | 87.0 | 609 | 0.4918 | 0.85 | 0.2584 | 1.0895 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 88.0 | 616 | 0.4918 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 89.0 | 623 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0471 |
| 0.1824 | 90.0 | 630 | 0.4919 | 0.85 | 0.2584 | 1.0892 | 0.85 | 0.8374 | 0.1998 | 0.0471 |
| 0.1824 | 91.0 | 637 | 0.4919 | 0.85 | 0.2584 | 1.0894 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 92.0 | 644 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 93.0 | 651 | 0.4918 | 0.85 | 0.2583 | 1.0893 | 0.85 | 0.8374 | 0.1917 | 0.0471 |
| 0.1824 | 94.0 | 658 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1844 | 0.0471 |
| 0.1824 | 95.0 | 665 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 96.0 | 672 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 97.0 | 679 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1916 | 0.0471 |
| 0.1824 | 98.0 | 686 | 0.4918 | 0.85 | 0.2583 | 1.0895 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 99.0 | 693 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
| 0.1824 | 100.0 | 700 | 0.4918 | 0.85 | 0.2583 | 1.0894 | 0.85 | 0.8374 | 0.1917 | 0.0470 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
umanlp/babelbert-ft-xlm-r
|
umanlp
| 2023-07-10T21:57:04Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-07T21:22:09Z |
This model is one of the artifacts of the paper [Massively Multilingual Lexical Specialization of Multilingual Transformers](https://aclanthology.org/2023.acl-long.426/).
It was obtained by fine-tuning the representations of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the dataset [babelbert-dataset](https://huggingface.co/datasets/umanlp/babelbert-dataset).
|
BernardOng/Banking-FT-Bong-v1
|
BernardOng
| 2023-07-10T21:29:24Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-30T02:19:43Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [h2oai/h2ogpt-oig-oasst1-512-6.9b](https://huggingface.co/h2oai/h2ogpt-oig-oasst1-512-6.9b)
- Caution: This is only an experimental model used mainly for research and testing purposes. It is not meant for production use.
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="BernardOng/Banking-FT-Bong-v1",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(8.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
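Equivalently, the prompt string can be assembled by hand; a small illustrative helper (not part of the library):

```python
def build_prompt(question: str) -> str:
    # Prompt format the model was trained with: prompt marker, end-of-text, answer marker.
    return f"<|prompt|>{question}<|endoftext|><|answer|>"

print(build_prompt("Why is drinking water so healthy?"))
# <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```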
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"BernardOng/Banking-FT-Bong-v1",
use_fast=True,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"BernardOng/Banking-FT-Bong-v1",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(8.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself, handling the preprocessing steps on your own:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "BernardOng/Banking-FT-Bong-v1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# The generation configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(8.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50432, 4096)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=4096, out_features=12288, bias=True)
(dense): Linear(in_features=4096, out_features=4096, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=4096, out_features=16384, bias=True)
(dense_4h_to_h): Linear(in_features=16384, out_features=4096, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=4096, out_features=50432, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BernardOng/Banking-FT-Bong-v1 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
vk21/ppo-PyramidRND-unit5
|
vk21
| 2023-07-10T21:25:11Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-10T21:25:05Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: vk21/ppo-PyramidRND-unit5
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
voyzan/unit1-bonus1-Huggy-A01
|
voyzan
| 2023-07-10T21:19:37Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T21:19:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: voyzan/unit1-bonus1-Huggy-A01
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
jordyvl/vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE
|
jordyvl
| 2023-07-10T21:13:05Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T20:08:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_rvl_cdip_100_examples_per_class_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7723
- Accuracy: 0.6025
- Brier Loss: 0.5295
- Nll: 3.6748
- F1 Micro: 0.6025
- F1 Macro: 0.6055
- Ece: 0.1688
- Aurc: 0.1708
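The ECE column is the expected calibration error: the gap between accuracy and mean top-class confidence, averaged over confidence bins and weighted by bin occupancy. A rough sketch (bin count and names are illustrative, not the exact evaluation code):

```python
import numpy as np

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """Bin-occupancy-weighted |accuracy - confidence| over equal-width confidence bins."""
    conf = probs.max(axis=1)                              # top-class confidence
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)

# One overconfident, wrong prediction: confidence 0.6 but accuracy 0 -> ECE 0.6.
print(expected_calibration_error(np.array([[0.6, 0.4]]), np.array([1])))
```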
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 4.7870 | 0.065 | 0.9655 | 17.0930 | 0.065 | 0.0550 | 0.1747 | 0.9357 |
| No log | 2.0 | 50 | 3.9498 | 0.205 | 0.8858 | 9.5780 | 0.205 | 0.1863 | 0.1692 | 0.6618 |
| No log | 3.0 | 75 | 3.3698 | 0.3675 | 0.7672 | 6.4908 | 0.3675 | 0.3392 | 0.1676 | 0.4195 |
| No log | 4.0 | 100 | 2.9935 | 0.4075 | 0.6958 | 5.5595 | 0.4075 | 0.3820 | 0.1828 | 0.3327 |
| No log | 5.0 | 125 | 2.8351 | 0.455 | 0.6591 | 4.8619 | 0.455 | 0.4351 | 0.1561 | 0.2833 |
| No log | 6.0 | 150 | 2.8196 | 0.4725 | 0.6595 | 4.7785 | 0.4725 | 0.4367 | 0.1808 | 0.2790 |
| No log | 7.0 | 175 | 2.6352 | 0.5075 | 0.6234 | 4.9881 | 0.5075 | 0.4886 | 0.1563 | 0.2493 |
| No log | 8.0 | 200 | 2.5325 | 0.525 | 0.6162 | 4.3297 | 0.525 | 0.5026 | 0.1724 | 0.2365 |
| No log | 9.0 | 225 | 2.5459 | 0.53 | 0.6099 | 5.1608 | 0.53 | 0.5148 | 0.1944 | 0.2350 |
| No log | 10.0 | 250 | 2.5573 | 0.5325 | 0.6161 | 5.4495 | 0.5325 | 0.5212 | 0.2052 | 0.2397 |
| No log | 11.0 | 275 | 2.3199 | 0.5675 | 0.5828 | 4.1247 | 0.5675 | 0.5626 | 0.1849 | 0.2071 |
| No log | 12.0 | 300 | 2.2917 | 0.565 | 0.5758 | 4.1738 | 0.565 | 0.5694 | 0.1992 | 0.2023 |
| No log | 13.0 | 325 | 2.2744 | 0.555 | 0.5974 | 4.2323 | 0.555 | 0.5544 | 0.1982 | 0.2203 |
| No log | 14.0 | 350 | 2.1638 | 0.5625 | 0.5807 | 4.2049 | 0.5625 | 0.5629 | 0.1868 | 0.2049 |
| No log | 15.0 | 375 | 2.1934 | 0.5575 | 0.5903 | 4.3813 | 0.5575 | 0.5614 | 0.1868 | 0.2022 |
| No log | 16.0 | 400 | 2.1092 | 0.5625 | 0.5702 | 3.6094 | 0.5625 | 0.5700 | 0.1846 | 0.2011 |
| No log | 17.0 | 425 | 2.0379 | 0.5875 | 0.5642 | 4.4351 | 0.5875 | 0.5822 | 0.2036 | 0.1959 |
| No log | 18.0 | 450 | 2.0303 | 0.5825 | 0.5558 | 3.6847 | 0.5825 | 0.5820 | 0.1684 | 0.1881 |
| No log | 19.0 | 475 | 2.0506 | 0.57 | 0.5749 | 4.0014 | 0.57 | 0.5708 | 0.1725 | 0.2027 |
| 1.5026 | 20.0 | 500 | 1.9932 | 0.5875 | 0.5524 | 3.8003 | 0.5875 | 0.5914 | 0.1843 | 0.1831 |
| 1.5026 | 21.0 | 525 | 2.0131 | 0.565 | 0.5643 | 4.0681 | 0.565 | 0.5635 | 0.1776 | 0.1957 |
| 1.5026 | 22.0 | 550 | 2.0162 | 0.5725 | 0.5712 | 3.7068 | 0.5725 | 0.5766 | 0.1934 | 0.1955 |
| 1.5026 | 23.0 | 575 | 1.9093 | 0.605 | 0.5381 | 3.7930 | 0.605 | 0.6032 | 0.1539 | 0.1749 |
| 1.5026 | 24.0 | 600 | 1.9607 | 0.575 | 0.5561 | 4.5740 | 0.575 | 0.5789 | 0.1782 | 0.1902 |
| 1.5026 | 25.0 | 625 | 1.8971 | 0.5825 | 0.5408 | 3.7290 | 0.5825 | 0.5754 | 0.1836 | 0.1751 |
| 1.5026 | 26.0 | 650 | 1.9217 | 0.5775 | 0.5537 | 3.8085 | 0.5775 | 0.5844 | 0.1725 | 0.1843 |
| 1.5026 | 27.0 | 675 | 1.9493 | 0.585 | 0.5606 | 3.6743 | 0.585 | 0.5953 | 0.1755 | 0.1882 |
| 1.5026 | 28.0 | 700 | 1.8884 | 0.585 | 0.5437 | 3.7865 | 0.585 | 0.5828 | 0.1801 | 0.1822 |
| 1.5026 | 29.0 | 725 | 1.9242 | 0.585 | 0.5479 | 3.9607 | 0.585 | 0.5856 | 0.1619 | 0.1817 |
| 1.5026 | 30.0 | 750 | 1.8767 | 0.5975 | 0.5470 | 3.7995 | 0.5975 | 0.5966 | 0.1599 | 0.1790 |
| 1.5026 | 31.0 | 775 | 1.8723 | 0.5925 | 0.5337 | 3.8962 | 0.5925 | 0.5972 | 0.1678 | 0.1729 |
| 1.5026 | 32.0 | 800 | 1.9093 | 0.585 | 0.5545 | 3.8776 | 0.585 | 0.5830 | 0.1902 | 0.1841 |
| 1.5026 | 33.0 | 825 | 1.8667 | 0.595 | 0.5363 | 3.8926 | 0.595 | 0.5917 | 0.1772 | 0.1745 |
| 1.5026 | 34.0 | 850 | 1.8403 | 0.59 | 0.5521 | 3.8560 | 0.59 | 0.5953 | 0.1711 | 0.1800 |
| 1.5026 | 35.0 | 875 | 1.8464 | 0.5925 | 0.5380 | 4.0376 | 0.5925 | 0.5970 | 0.1719 | 0.1756 |
| 1.5026 | 36.0 | 900 | 1.8441 | 0.5975 | 0.5411 | 3.7193 | 0.5975 | 0.6008 | 0.1569 | 0.1753 |
| 1.5026 | 37.0 | 925 | 1.8599 | 0.5875 | 0.5402 | 3.9139 | 0.5875 | 0.5908 | 0.1779 | 0.1789 |
| 1.5026 | 38.0 | 950 | 1.8559 | 0.6 | 0.5458 | 3.8970 | 0.6 | 0.5991 | 0.1583 | 0.1804 |
| 1.5026 | 39.0 | 975 | 1.8285 | 0.61 | 0.5370 | 3.6292 | 0.61 | 0.6155 | 0.1623 | 0.1722 |
| 0.0745 | 40.0 | 1000 | 1.8309 | 0.5975 | 0.5432 | 3.6865 | 0.5975 | 0.6017 | 0.1663 | 0.1821 |
| 0.0745 | 41.0 | 1025 | 1.8237 | 0.59 | 0.5348 | 3.6213 | 0.59 | 0.5921 | 0.1695 | 0.1738 |
| 0.0745 | 42.0 | 1050 | 1.8421 | 0.605 | 0.5360 | 3.8592 | 0.605 | 0.6048 | 0.1601 | 0.1743 |
| 0.0745 | 43.0 | 1075 | 1.8158 | 0.5975 | 0.5300 | 3.4537 | 0.5975 | 0.5953 | 0.1696 | 0.1707 |
| 0.0745 | 44.0 | 1100 | 1.8238 | 0.5875 | 0.5358 | 3.7706 | 0.5875 | 0.5923 | 0.1797 | 0.1754 |
| 0.0745 | 45.0 | 1125 | 1.8214 | 0.595 | 0.5463 | 3.4742 | 0.595 | 0.5981 | 0.1800 | 0.1770 |
| 0.0745 | 46.0 | 1150 | 1.8162 | 0.5925 | 0.5317 | 3.9260 | 0.5925 | 0.5950 | 0.1646 | 0.1733 |
| 0.0745 | 47.0 | 1175 | 1.8050 | 0.5975 | 0.5392 | 3.8322 | 0.5975 | 0.5979 | 0.1794 | 0.1763 |
| 0.0745 | 48.0 | 1200 | 1.8214 | 0.5975 | 0.5347 | 3.7965 | 0.5975 | 0.6009 | 0.1555 | 0.1746 |
| 0.0745 | 49.0 | 1225 | 1.7813 | 0.6 | 0.5294 | 3.8398 | 0.6 | 0.6005 | 0.1674 | 0.1688 |
| 0.0745 | 50.0 | 1250 | 1.8179 | 0.6075 | 0.5336 | 3.4690 | 0.6075 | 0.6112 | 0.1743 | 0.1748 |
| 0.0745 | 51.0 | 1275 | 1.7953 | 0.595 | 0.5380 | 3.7781 | 0.595 | 0.5990 | 0.1380 | 0.1727 |
| 0.0745 | 52.0 | 1300 | 1.7897 | 0.6 | 0.5323 | 3.7412 | 0.6 | 0.6013 | 0.1603 | 0.1707 |
| 0.0745 | 53.0 | 1325 | 1.8072 | 0.59 | 0.5428 | 3.5993 | 0.59 | 0.5947 | 0.1571 | 0.1773 |
| 0.0745 | 54.0 | 1350 | 1.7834 | 0.605 | 0.5219 | 3.7600 | 0.605 | 0.6049 | 0.1563 | 0.1671 |
| 0.0745 | 55.0 | 1375 | 1.7920 | 0.595 | 0.5361 | 3.5986 | 0.595 | 0.5978 | 0.1512 | 0.1717 |
| 0.0745 | 56.0 | 1400 | 1.8074 | 0.5925 | 0.5387 | 3.5383 | 0.5925 | 0.5962 | 0.1669 | 0.1741 |
| 0.0745 | 57.0 | 1425 | 1.7893 | 0.605 | 0.5346 | 3.6929 | 0.605 | 0.6039 | 0.1641 | 0.1681 |
| 0.0745 | 58.0 | 1450 | 1.7787 | 0.6 | 0.5317 | 3.7652 | 0.6 | 0.6004 | 0.1850 | 0.1726 |
| 0.0745 | 59.0 | 1475 | 1.7888 | 0.595 | 0.5323 | 3.4558 | 0.595 | 0.5975 | 0.1797 | 0.1732 |
| 0.0231 | 60.0 | 1500 | 1.8064 | 0.58 | 0.5332 | 3.7773 | 0.58 | 0.5839 | 0.1819 | 0.1762 |
| 0.0231 | 61.0 | 1525 | 1.7795 | 0.6075 | 0.5298 | 3.7998 | 0.6075 | 0.6086 | 0.1678 | 0.1704 |
| 0.0231 | 62.0 | 1550 | 1.7826 | 0.595 | 0.5318 | 3.6741 | 0.595 | 0.5916 | 0.1550 | 0.1715 |
| 0.0231 | 63.0 | 1575 | 1.7704 | 0.5925 | 0.5325 | 3.5942 | 0.5925 | 0.5941 | 0.1619 | 0.1712 |
| 0.0231 | 64.0 | 1600 | 1.7901 | 0.6025 | 0.5289 | 3.4459 | 0.6025 | 0.6054 | 0.2022 | 0.1712 |
| 0.0231 | 65.0 | 1625 | 1.7944 | 0.59 | 0.5381 | 3.7591 | 0.59 | 0.5910 | 0.1599 | 0.1756 |
| 0.0231 | 66.0 | 1650 | 1.7721 | 0.605 | 0.5256 | 3.5227 | 0.605 | 0.6045 | 0.1525 | 0.1677 |
| 0.0231 | 67.0 | 1675 | 1.7779 | 0.5975 | 0.5306 | 3.6792 | 0.5975 | 0.5994 | 0.1667 | 0.1714 |
| 0.0231 | 68.0 | 1700 | 1.7724 | 0.6 | 0.5250 | 3.7552 | 0.6 | 0.6022 | 0.1818 | 0.1683 |
| 0.0231 | 69.0 | 1725 | 1.7765 | 0.6025 | 0.5283 | 3.4264 | 0.6025 | 0.6019 | 0.1671 | 0.1700 |
| 0.0231 | 70.0 | 1750 | 1.7784 | 0.6 | 0.5276 | 3.6887 | 0.6 | 0.6053 | 0.1715 | 0.1703 |
| 0.0231 | 71.0 | 1775 | 1.7659 | 0.6 | 0.5282 | 3.6051 | 0.6 | 0.6006 | 0.1722 | 0.1691 |
| 0.0231 | 72.0 | 1800 | 1.7882 | 0.5975 | 0.5329 | 3.5950 | 0.5975 | 0.6016 | 0.1981 | 0.1716 |
| 0.0231 | 73.0 | 1825 | 1.7678 | 0.6 | 0.5287 | 3.6691 | 0.6 | 0.6032 | 0.1733 | 0.1696 |
| 0.0231 | 74.0 | 1850 | 1.7716 | 0.6 | 0.5286 | 3.7576 | 0.6 | 0.6013 | 0.1734 | 0.1692 |
| 0.0231 | 75.0 | 1875 | 1.7704 | 0.6 | 0.5299 | 3.5917 | 0.6 | 0.6016 | 0.1645 | 0.1709 |
| 0.0231 | 76.0 | 1900 | 1.7729 | 0.6 | 0.5298 | 3.6758 | 0.6 | 0.6024 | 0.1766 | 0.1710 |
| 0.0231 | 77.0 | 1925 | 1.7749 | 0.6 | 0.5308 | 3.6022 | 0.6 | 0.6030 | 0.1604 | 0.1717 |
| 0.0231 | 78.0 | 1950 | 1.7720 | 0.6 | 0.5294 | 3.6759 | 0.6 | 0.6017 | 0.1786 | 0.1708 |
| 0.0231 | 79.0 | 1975 | 1.7734 | 0.6025 | 0.5288 | 3.6765 | 0.6025 | 0.6048 | 0.1673 | 0.1698 |
| 0.0059 | 80.0 | 2000 | 1.7709 | 0.6 | 0.5286 | 3.6755 | 0.6 | 0.6020 | 0.1749 | 0.1704 |
| 0.0059 | 81.0 | 2025 | 1.7730 | 0.6 | 0.5295 | 3.6760 | 0.6 | 0.6020 | 0.1677 | 0.1708 |
| 0.0059 | 82.0 | 2050 | 1.7723 | 0.6025 | 0.5295 | 3.6756 | 0.6025 | 0.6055 | 0.1626 | 0.1708 |
| 0.0059 | 83.0 | 2075 | 1.7721 | 0.6025 | 0.5295 | 3.6741 | 0.6025 | 0.6055 | 0.1709 | 0.1708 |
| 0.0059 | 84.0 | 2100 | 1.7725 | 0.6025 | 0.5297 | 3.6747 | 0.6025 | 0.6048 | 0.1627 | 0.1709 |
| 0.0059 | 85.0 | 2125 | 1.7724 | 0.6025 | 0.5295 | 3.6751 | 0.6025 | 0.6055 | 0.1639 | 0.1707 |
| 0.0059 | 86.0 | 2150 | 1.7724 | 0.6025 | 0.5296 | 3.6751 | 0.6025 | 0.6055 | 0.1630 | 0.1708 |
| 0.0059 | 87.0 | 2175 | 1.7724 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1638 | 0.1707 |
| 0.0059 | 88.0 | 2200 | 1.7722 | 0.6025 | 0.5295 | 3.6752 | 0.6025 | 0.6055 | 0.1645 | 0.1708 |
| 0.0059 | 89.0 | 2225 | 1.7723 | 0.6025 | 0.5295 | 3.6747 | 0.6025 | 0.6055 | 0.1639 | 0.1708 |
| 0.0059 | 90.0 | 2250 | 1.7723 | 0.6025 | 0.5294 | 3.6750 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 91.0 | 2275 | 1.7723 | 0.6025 | 0.5294 | 3.6750 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 92.0 | 2300 | 1.7723 | 0.6025 | 0.5295 | 3.6747 | 0.6025 | 0.6055 | 0.1639 | 0.1708 |
| 0.0059 | 93.0 | 2325 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1637 | 0.1707 |
| 0.0059 | 94.0 | 2350 | 1.7722 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
| 0.0059 | 95.0 | 2375 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 96.0 | 2400 | 1.7723 | 0.6025 | 0.5294 | 3.6748 | 0.6025 | 0.6055 | 0.1643 | 0.1707 |
| 0.0059 | 97.0 | 2425 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
| 0.0059 | 98.0 | 2450 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1643 | 0.1708 |
| 0.0059 | 99.0 | 2475 | 1.7723 | 0.6025 | 0.5295 | 3.6749 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
| 0.0 | 100.0 | 2500 | 1.7723 | 0.6025 | 0.5295 | 3.6748 | 0.6025 | 0.6055 | 0.1688 | 0.1708 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
ruggedmug/ppo-Huggy
|
ruggedmug
| 2023-07-10T21:08:52Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T21:08:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ruggedmug/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Belphegor/ppo-LunarLander-v2
|
Belphegor
| 2023-07-10T21:08:44Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T21:08:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.37 +/- 18.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it if the actual file differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub("Belphegor/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO
|
skrl
| 2023-07-10T21:06:55Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T19:47:47Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -8.89 +/- 10.3
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FactoryTaskNutBoltScrew
type: IsaacGymEnvs-FactoryTaskNutBoltScrew
---
<!-- ---
torch: -21.51 +/- 14.99
jax: -35.77 +/- 0.39
numpy: -8.89 +/- 10.3
--- -->
# IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FactoryTaskNutBoltScrew
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltScrew-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Parameters not listed below keep their default values.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 128 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 32 # 128 * 128 / 512
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-4
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0.016
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
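The `mini_batches` comment above encodes the usual PPO bookkeeping: the number of samples collected per update (rollouts × parallel environments) divided by the minibatch size. A small sanity check (reading the comment's second 128 as the number of parallel environments and 512 as the minibatch size is an assumption):

```python
def ppo_mini_batches(rollouts, num_envs, minibatch_size):
    # total samples collected per update, split into equally sized minibatches
    total = rollouts * num_envs
    assert total % minibatch_size == 0, "minibatch size must divide the rollout buffer"
    return total // minibatch_size

print(ppo_mini_batches(128, 128, 512))  # → 32, matching cfg["mini_batches"] above
```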
|
WALIDALI/bekinorrev
|
WALIDALI
| 2023-07-10T21:00:29Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T20:57:08Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekinorrev Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Henk717/chronoboros-33B
|
Henk717
| 2023-07-10T20:48:47Z | 1,410 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-09T21:00:09Z |
---
license: other
---
This model is the result of a 50/50 weight-averaged merge of Airoboros-33B-1.4 and Chronos-33B.
After prolonged testing we concluded that while this merge is highly flexible and capable of many different tasks, it has too much variation in how it answers to be reliable.
Because of this the model relies on some luck to get good results, so it is not recommended for people seeking a consistent experience, or for people sensitive to anticipation-based addictions.
If you would like an improved, more stable version of this model, check out my Airochronos-33B merge.
|
voyzan/unit1-lunar_lander_v2-A02
|
voyzan
| 2023-07-10T20:47:07Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T20:46:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.03 +/- 23.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it if the actual file differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub("voyzan/unit1-lunar_lander_v2-A02", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jliu596/flappybirdknockoff
|
jliu596
| 2023-07-10T20:45:22Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T20:40:49Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: flappybirdknockoff
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.40 +/- 11.34
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.9
|
jordyvl
| 2023-07-10T20:38:28Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T19:59:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t2.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t2.5_a0.9
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5446
- Accuracy: 0.85
- Brier Loss: 0.2446
- Nll: 1.0816
- F1 Micro: 0.85
- F1 Macro: 0.8348
- Ece: 0.1474
- Aurc: 0.0436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
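With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps up linearly over the first 10% of optimization steps and then decays linearly to zero. A sketch of that shape (mirroring, not reproducing, the Transformers linear scheduler; step counts taken from the training table):

```python
def linear_lr_with_warmup(step, total_steps, warmup_ratio=0.1, base_lr=1e-4):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * (step / warmup_steps)  # linear warmup
    # linear decay from base_lr down to 0 over the remaining steps
    return base_lr * ((total_steps - step) / (total_steps - warmup_steps))

total = 700  # 100 epochs x 7 steps per epoch, as in the training table
print(linear_lr_with_warmup(0, total))    # → 0.0
print(linear_lr_with_warmup(70, total))   # peak: 1e-4
print(linear_lr_with_warmup(700, total))  # → 0.0
```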
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 2.1216 | 0.215 | 0.8751 | 5.2864 | 0.2150 | 0.1264 | 0.2697 | 0.6907 |
| No log | 2.0 | 14 | 1.7056 | 0.405 | 0.7400 | 3.5721 | 0.405 | 0.2275 | 0.2995 | 0.4011 |
| No log | 3.0 | 21 | 1.1857 | 0.62 | 0.5612 | 2.0143 | 0.62 | 0.5712 | 0.2994 | 0.2024 |
| No log | 4.0 | 28 | 0.8767 | 0.705 | 0.4085 | 1.6918 | 0.705 | 0.6436 | 0.2231 | 0.1152 |
| No log | 5.0 | 35 | 0.8620 | 0.72 | 0.3878 | 1.7931 | 0.72 | 0.7294 | 0.2233 | 0.1076 |
| No log | 6.0 | 42 | 0.7517 | 0.775 | 0.3252 | 1.5573 | 0.775 | 0.7600 | 0.1970 | 0.0790 |
| No log | 7.0 | 49 | 0.7280 | 0.79 | 0.3175 | 1.5140 | 0.79 | 0.7742 | 0.1903 | 0.0826 |
| No log | 8.0 | 56 | 0.6848 | 0.8 | 0.2942 | 1.4438 | 0.8000 | 0.7902 | 0.1828 | 0.0866 |
| No log | 9.0 | 63 | 0.6744 | 0.81 | 0.2889 | 1.4703 | 0.81 | 0.7969 | 0.1989 | 0.0692 |
| No log | 10.0 | 70 | 0.8432 | 0.74 | 0.3859 | 1.3134 | 0.74 | 0.7206 | 0.1959 | 0.1051 |
| No log | 11.0 | 77 | 0.7424 | 0.765 | 0.3294 | 1.5162 | 0.765 | 0.7792 | 0.2005 | 0.1048 |
| No log | 12.0 | 84 | 0.6953 | 0.79 | 0.3194 | 1.2233 | 0.79 | 0.7850 | 0.1800 | 0.0922 |
| No log | 13.0 | 91 | 0.5703 | 0.845 | 0.2538 | 1.2355 | 0.845 | 0.8372 | 0.1739 | 0.0447 |
| No log | 14.0 | 98 | 0.6439 | 0.795 | 0.2924 | 1.2777 | 0.795 | 0.7743 | 0.1771 | 0.0534 |
| No log | 15.0 | 105 | 0.5895 | 0.825 | 0.2650 | 1.2086 | 0.825 | 0.8071 | 0.1665 | 0.0566 |
| No log | 16.0 | 112 | 0.5973 | 0.81 | 0.2753 | 1.0959 | 0.81 | 0.8013 | 0.1839 | 0.0534 |
| No log | 17.0 | 119 | 0.5825 | 0.795 | 0.2722 | 1.1565 | 0.795 | 0.7886 | 0.1855 | 0.0534 |
| No log | 18.0 | 126 | 0.5854 | 0.845 | 0.2661 | 1.1223 | 0.845 | 0.8424 | 0.1981 | 0.0549 |
| No log | 19.0 | 133 | 0.5514 | 0.82 | 0.2553 | 0.9585 | 0.82 | 0.8150 | 0.1600 | 0.0481 |
| No log | 20.0 | 140 | 0.5600 | 0.835 | 0.2443 | 1.2692 | 0.835 | 0.8232 | 0.1657 | 0.0469 |
| No log | 21.0 | 147 | 0.5592 | 0.845 | 0.2473 | 1.1658 | 0.845 | 0.8331 | 0.1683 | 0.0493 |
| No log | 22.0 | 154 | 0.5507 | 0.845 | 0.2411 | 1.1403 | 0.845 | 0.8311 | 0.1797 | 0.0450 |
| No log | 23.0 | 161 | 0.5305 | 0.84 | 0.2361 | 1.1509 | 0.8400 | 0.8287 | 0.1650 | 0.0409 |
| No log | 24.0 | 168 | 0.5352 | 0.835 | 0.2378 | 1.2208 | 0.835 | 0.8201 | 0.1515 | 0.0420 |
| No log | 25.0 | 175 | 0.5425 | 0.845 | 0.2420 | 1.1208 | 0.845 | 0.8321 | 0.1776 | 0.0430 |
| No log | 26.0 | 182 | 0.5396 | 0.84 | 0.2409 | 1.1230 | 0.8400 | 0.8286 | 0.1647 | 0.0446 |
| No log | 27.0 | 189 | 0.5436 | 0.85 | 0.2401 | 1.1179 | 0.85 | 0.8387 | 0.1568 | 0.0427 |
| No log | 28.0 | 196 | 0.5373 | 0.835 | 0.2415 | 1.1092 | 0.835 | 0.8141 | 0.1641 | 0.0427 |
| No log | 29.0 | 203 | 0.5420 | 0.845 | 0.2436 | 1.0988 | 0.845 | 0.8326 | 0.1551 | 0.0444 |
| No log | 30.0 | 210 | 0.5413 | 0.845 | 0.2420 | 1.1064 | 0.845 | 0.8312 | 0.1486 | 0.0440 |
| No log | 31.0 | 217 | 0.5411 | 0.84 | 0.2418 | 1.1024 | 0.8400 | 0.8286 | 0.1565 | 0.0435 |
| No log | 32.0 | 224 | 0.5426 | 0.845 | 0.2429 | 1.0993 | 0.845 | 0.8322 | 0.1631 | 0.0433 |
| No log | 33.0 | 231 | 0.5424 | 0.85 | 0.2426 | 1.0989 | 0.85 | 0.8348 | 0.1615 | 0.0436 |
| No log | 34.0 | 238 | 0.5406 | 0.84 | 0.2419 | 1.0979 | 0.8400 | 0.8251 | 0.1640 | 0.0440 |
| No log | 35.0 | 245 | 0.5438 | 0.85 | 0.2436 | 1.0953 | 0.85 | 0.8348 | 0.1595 | 0.0438 |
| No log | 36.0 | 252 | 0.5429 | 0.85 | 0.2429 | 1.0970 | 0.85 | 0.8348 | 0.1495 | 0.0433 |
| No log | 37.0 | 259 | 0.5431 | 0.85 | 0.2427 | 1.0951 | 0.85 | 0.8348 | 0.1617 | 0.0435 |
| No log | 38.0 | 266 | 0.5424 | 0.85 | 0.2426 | 1.0959 | 0.85 | 0.8348 | 0.1587 | 0.0434 |
| No log | 39.0 | 273 | 0.5428 | 0.85 | 0.2432 | 1.0924 | 0.85 | 0.8348 | 0.1512 | 0.0433 |
| No log | 40.0 | 280 | 0.5437 | 0.85 | 0.2438 | 1.0911 | 0.85 | 0.8348 | 0.1726 | 0.0438 |
| No log | 41.0 | 287 | 0.5438 | 0.85 | 0.2434 | 1.0925 | 0.85 | 0.8348 | 0.1704 | 0.0433 |
| No log | 42.0 | 294 | 0.5428 | 0.85 | 0.2432 | 1.0927 | 0.85 | 0.8348 | 0.1585 | 0.0436 |
| No log | 43.0 | 301 | 0.5455 | 0.85 | 0.2443 | 1.0907 | 0.85 | 0.8348 | 0.1756 | 0.0437 |
| No log | 44.0 | 308 | 0.5427 | 0.85 | 0.2433 | 1.0908 | 0.85 | 0.8348 | 0.1616 | 0.0433 |
| No log | 45.0 | 315 | 0.5456 | 0.85 | 0.2446 | 1.0878 | 0.85 | 0.8348 | 0.1767 | 0.0437 |
| No log | 46.0 | 322 | 0.5439 | 0.85 | 0.2438 | 1.0895 | 0.85 | 0.8348 | 0.1503 | 0.0435 |
| No log | 47.0 | 329 | 0.5448 | 0.85 | 0.2443 | 1.0891 | 0.85 | 0.8348 | 0.1674 | 0.0439 |
| No log | 48.0 | 336 | 0.5440 | 0.85 | 0.2437 | 1.0898 | 0.85 | 0.8348 | 0.1768 | 0.0437 |
| No log | 49.0 | 343 | 0.5443 | 0.85 | 0.2441 | 1.0883 | 0.85 | 0.8348 | 0.1433 | 0.0432 |
| No log | 50.0 | 350 | 0.5449 | 0.85 | 0.2444 | 1.0877 | 0.85 | 0.8348 | 0.1722 | 0.0436 |
| No log | 51.0 | 357 | 0.5443 | 0.85 | 0.2442 | 1.0871 | 0.85 | 0.8348 | 0.1606 | 0.0434 |
| No log | 52.0 | 364 | 0.5453 | 0.85 | 0.2444 | 1.0865 | 0.85 | 0.8348 | 0.1729 | 0.0436 |
| No log | 53.0 | 371 | 0.5433 | 0.845 | 0.2438 | 1.0873 | 0.845 | 0.8287 | 0.1570 | 0.0434 |
| No log | 54.0 | 378 | 0.5453 | 0.85 | 0.2447 | 1.0854 | 0.85 | 0.8348 | 0.1606 | 0.0435 |
| No log | 55.0 | 385 | 0.5438 | 0.85 | 0.2439 | 1.0868 | 0.85 | 0.8348 | 0.1721 | 0.0434 |
| No log | 56.0 | 392 | 0.5455 | 0.85 | 0.2447 | 1.0853 | 0.85 | 0.8348 | 0.1710 | 0.0437 |
| No log | 57.0 | 399 | 0.5435 | 0.85 | 0.2439 | 1.0864 | 0.85 | 0.8348 | 0.1540 | 0.0434 |
| No log | 58.0 | 406 | 0.5451 | 0.85 | 0.2447 | 1.0844 | 0.85 | 0.8348 | 0.1636 | 0.0436 |
| No log | 59.0 | 413 | 0.5442 | 0.85 | 0.2441 | 1.0858 | 0.85 | 0.8348 | 0.1556 | 0.0435 |
| No log | 60.0 | 420 | 0.5453 | 0.85 | 0.2447 | 1.0843 | 0.85 | 0.8348 | 0.1717 | 0.0437 |
| No log | 61.0 | 427 | 0.5439 | 0.85 | 0.2442 | 1.0847 | 0.85 | 0.8348 | 0.1541 | 0.0432 |
| No log | 62.0 | 434 | 0.5455 | 0.85 | 0.2449 | 1.0839 | 0.85 | 0.8348 | 0.1550 | 0.0435 |
| No log | 63.0 | 441 | 0.5446 | 0.85 | 0.2445 | 1.0843 | 0.85 | 0.8348 | 0.1553 | 0.0435 |
| No log | 64.0 | 448 | 0.5448 | 0.85 | 0.2446 | 1.0833 | 0.85 | 0.8348 | 0.1634 | 0.0435 |
| No log | 65.0 | 455 | 0.5443 | 0.85 | 0.2443 | 1.0847 | 0.85 | 0.8348 | 0.1554 | 0.0435 |
| No log | 66.0 | 462 | 0.5448 | 0.85 | 0.2447 | 1.0831 | 0.85 | 0.8348 | 0.1547 | 0.0436 |
| No log | 67.0 | 469 | 0.5452 | 0.85 | 0.2448 | 1.0828 | 0.85 | 0.8348 | 0.1563 | 0.0436 |
| No log | 68.0 | 476 | 0.5443 | 0.85 | 0.2444 | 1.0834 | 0.85 | 0.8348 | 0.1472 | 0.0434 |
| No log | 69.0 | 483 | 0.5447 | 0.85 | 0.2445 | 1.0832 | 0.85 | 0.8348 | 0.1632 | 0.0434 |
| No log | 70.0 | 490 | 0.5447 | 0.85 | 0.2446 | 1.0831 | 0.85 | 0.8348 | 0.1559 | 0.0435 |
| No log | 71.0 | 497 | 0.5447 | 0.85 | 0.2446 | 1.0829 | 0.85 | 0.8348 | 0.1473 | 0.0435 |
| 0.1823 | 72.0 | 504 | 0.5443 | 0.85 | 0.2444 | 1.0828 | 0.85 | 0.8348 | 0.1559 | 0.0434 |
| 0.1823 | 73.0 | 511 | 0.5447 | 0.85 | 0.2447 | 1.0825 | 0.85 | 0.8348 | 0.1472 | 0.0434 |
| 0.1823 | 74.0 | 518 | 0.5444 | 0.85 | 0.2444 | 1.0829 | 0.85 | 0.8348 | 0.1559 | 0.0436 |
| 0.1823 | 75.0 | 525 | 0.5446 | 0.85 | 0.2445 | 1.0829 | 0.85 | 0.8348 | 0.1557 | 0.0435 |
| 0.1823 | 76.0 | 532 | 0.5448 | 0.85 | 0.2445 | 1.0825 | 0.85 | 0.8348 | 0.1559 | 0.0435 |
| 0.1823 | 77.0 | 539 | 0.5443 | 0.85 | 0.2444 | 1.0827 | 0.85 | 0.8348 | 0.1558 | 0.0435 |
| 0.1823 | 78.0 | 546 | 0.5446 | 0.85 | 0.2446 | 1.0824 | 0.85 | 0.8348 | 0.1560 | 0.0436 |
| 0.1823 | 79.0 | 553 | 0.5450 | 0.85 | 0.2448 | 1.0821 | 0.85 | 0.8348 | 0.1637 | 0.0436 |
| 0.1823 | 80.0 | 560 | 0.5447 | 0.85 | 0.2446 | 1.0823 | 0.85 | 0.8348 | 0.1638 | 0.0436 |
| 0.1823 | 81.0 | 567 | 0.5446 | 0.85 | 0.2446 | 1.0820 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 82.0 | 574 | 0.5447 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1561 | 0.0435 |
| 0.1823 | 83.0 | 581 | 0.5448 | 0.85 | 0.2446 | 1.0822 | 0.85 | 0.8348 | 0.1550 | 0.0436 |
| 0.1823 | 84.0 | 588 | 0.5445 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1551 | 0.0435 |
| 0.1823 | 85.0 | 595 | 0.5446 | 0.85 | 0.2446 | 1.0818 | 0.85 | 0.8348 | 0.1560 | 0.0436 |
| 0.1823 | 86.0 | 602 | 0.5446 | 0.85 | 0.2446 | 1.0818 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 87.0 | 609 | 0.5448 | 0.85 | 0.2447 | 1.0820 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 88.0 | 616 | 0.5447 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1551 | 0.0435 |
| 0.1823 | 89.0 | 623 | 0.5446 | 0.85 | 0.2446 | 1.0819 | 0.85 | 0.8348 | 0.1560 | 0.0435 |
| 0.1823 | 90.0 | 630 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1638 | 0.0436 |
| 0.1823 | 91.0 | 637 | 0.5446 | 0.85 | 0.2445 | 1.0817 | 0.85 | 0.8348 | 0.1474 | 0.0435 |
| 0.1823 | 92.0 | 644 | 0.5445 | 0.85 | 0.2445 | 1.0818 | 0.85 | 0.8348 | 0.1551 | 0.0436 |
| 0.1823 | 93.0 | 651 | 0.5447 | 0.85 | 0.2446 | 1.0818 | 0.85 | 0.8348 | 0.1560 | 0.0436 |
| 0.1823 | 94.0 | 658 | 0.5447 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1561 | 0.0436 |
| 0.1823 | 95.0 | 665 | 0.5447 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1550 | 0.0435 |
| 0.1823 | 96.0 | 672 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
| 0.1823 | 97.0 | 679 | 0.5446 | 0.85 | 0.2446 | 1.0817 | 0.85 | 0.8348 | 0.1551 | 0.0436 |
| 0.1823 | 98.0 | 686 | 0.5446 | 0.85 | 0.2446 | 1.0817 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
| 0.1823 | 99.0 | 693 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
| 0.1823 | 100.0 | 700 | 0.5446 | 0.85 | 0.2446 | 1.0816 | 0.85 | 0.8348 | 0.1474 | 0.0436 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vk21/ppo-SnowballTarget-unit5
|
vk21
| 2023-07-10T20:34:28Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-10T20:34:22Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: vk21/ppo-SnowballTarget-unit5
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DarkAirforce/dqn-SpaceInvadersNoFrameskip-v4
|
DarkAirforce
| 2023-07-10T20:33:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T19:24:04Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 534.00 +/- 175.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DarkAirforce -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DarkAirforce -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DarkAirforce
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
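`exploration_fraction` and `exploration_final_eps` define DQN's ε-greedy schedule: ε decays linearly from 1.0 to 0.01 over the first 10% of the 1M training steps, then holds. A minimal sketch of that schedule (plain Python, illustration only):

```python
def linear_epsilon(step, n_timesteps=1_000_000, fraction=0.1,
                   initial_eps=1.0, final_eps=0.01):
    # linear annealing over the first `fraction` of training, then constant
    progress = min(step / (fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(linear_epsilon(0))        # → 1.0
print(linear_epsilon(50_000))   # halfway through annealing: ~0.505
print(linear_epsilon(500_000))  # annealing long finished: ~0.01
```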
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO
|
skrl
| 2023-07-10T20:15:49Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T19:47:18Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -38.54 +/- 17.49
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FactoryTaskNutBoltPlace
type: IsaacGymEnvs-FactoryTaskNutBoltPlace
---
<!-- ---
torch: -38.54 +/- 17.49
jax: -60.9 +/- 0.84
numpy: -58.9 +/- 1.8
--- -->
# IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FactoryTaskNutBoltPlace
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FactoryTaskNutBoltPlace-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Parameters not listed below keep their default values.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 120 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 30 # 120 * 128 / 512
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 1e-4
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0.016
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
ALM-AHME/convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-V2
|
ALM-AHME
| 2023-07-10T20:09:00Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convnextv2",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T20:08:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20-V2
This model is a fine-tuned version of [ALM-AHME/convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20](https://huggingface.co/ALM-AHME/convnextv2-large-1k-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
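The effective batch size above follows from gradient accumulation: 16 per-device × 2 accumulation steps = 32. A toy sketch of why accumulating microbatch gradients matches one large-batch gradient (plain Python, equal-sized microbatches assumed):

```python
def mean(xs):
    return sum(xs) / len(xs)

batch = [1.0, 2.0, 3.0, 4.0]       # toy per-example "gradients"
accum_steps = 2
micro = len(batch) // accum_steps  # microbatch size

# average the per-microbatch mean gradients...
accumulated = mean([mean(batch[i * micro:(i + 1) * micro]) for i in range(accum_steps)])
# ...and compare against the gradient computed over the full batch
direct = mean(batch)
print(accumulated, direct)  # → 2.5 2.5
```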
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ruggedmug/ppo-LunarLander-v2
|
ruggedmug
| 2023-07-10T20:06:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T20:09:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.76 +/- 15.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it if the actual file differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub("ruggedmug/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BigSalmon/InformalToFormalLincoln103Paraphrase
|
BigSalmon
| 2023-07-10T19:36:48Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-08T22:41:06Z |
data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln103Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln103Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
                         max_length=input_ids.shape[1] + 10,  # max_length counts tokens, not characters
                         temperature=1.0,
                         top_k=50,
                         top_p=0.95,
                         do_sample=True,
                         num_return_sequences=5,
                         early_stopping=True)
for i in range(5):
    print(tokenizer.decode(outputs[i]))
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
import torch

prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput.to(device)
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with its own set of powers, so as to prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classical music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```
```
Q: What is whistleblower protection?
A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer.
Q: Why are whistleblower protections important?
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution.
Q: Why would an employer engage in retribution?
A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing.
```
```
original: the meritocratic nature of crowdfunding [MASK] into their vision's viability.
infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability.
```
```
Leadership | Lecture 17: Worker Morale
What Workers Look for in Companies:
• Benefits
o Tuition reimbursement
o Paid parental leave
o 401K matching
o Profit sharing
o Pension plans
o Free meals
• Social responsibility
o Environmental stewardship
o Charitable contributions
o Diversity
• Work-life balance
o Telecommuting
o Paid holidays and vacation
o Casual dress
• Growth opportunities
• Job security
• Competitive compensation
• Recognition
o Open-door policies
o Whistleblower protection
o Employee-of-the-month awards
o Positive performance reviews
o Bonuses
```
```
description: business
keywords: for-profit, fiduciary duty, monopolistic, bottom line, return on investment, short-term thinking, capital-intensive, self-interested, risk-taking, fiduciary duty, merger, speculation, profiteering, oversight, capitalism, diversification
```
```
3. In this task, you are given a company name and you need to find its industry.
McDonalds -- Restaurant
Facebook -- Social Network
IKEA -- Furniture
American Express -- Credit Services
Nokia -- Telecom
Nintendo -- Entertainment
4. In this task, you are given a Month and you need to convert it to its corresponding season
April -- Spring
December -- Winter
July -- Summer
October -- Fall
February -- Winter
5. In this task, you are given a sentence with a missing word and you need to predict the correct word.
Managers should set an _____ for their employees. -- example
Some people spend more than four _____ in the gym. -- hours
The police were on the _____ of arresting the suspect. -- verge
They were looking for _____ on how to solve the problem. -- guidance
What is the _____ of the coffee? -- price
6. In this task, you are given a paragraph and you need to reorder it to make it logical.
It was first proposed in 1987. The total length of the bridge is 1,828 meters. The idea of a bridge connects Hong Kong to Macau. -- The idea of bridge connecting Hong Kong and Macau was first proposed in 1987. The total length of the bridge is 1,828 meters.
It is a movie about a brave and noble policeman. The film was produced by Americans. They were Kevin Lima and Chris Buck. They are directors. The movie is called Tarzan. -- Produced by Americans Kevin Lima and Chris Buck, Tarzan is a movie about a brave and noble policeman.
It was first discovered in the mountains of India. The active ingredients in this plant can stimulate hair growth. The plant is called "Hair Plus." -- First discovered in the mountains of India, Hair Plus is a plant whose active ingredients can stimulate hair growth.
```
```
trivia: What is the population of South Korea?
response: 51 million.
***
trivia: What is the minimum voting age in the US?
response: 18.
***
trivia: What are the first ten amendments of the US constitution called?
response: Bill of Rights.
```
```
ideas: in modern-day america, it is customary for the commander-in-chief to conduct regular press conferences
related keywords: transparency, check and balance, sacrosanct, public accountability, adversarial, unscripted, direct access, open government, watchdog, healthy democracy, institutional integrity, right to know, direct line of communication, behind closed doors, updates, track progress, instill confidence, reassure, humanize, leadership style, day-to-day, forthcoming, demystify, ask hard questions
***
ideas: i know this one guy who retired so young, attesting to how careful they were with money.
related keywords: money management, resourceful, penny-pinching, live below their means, frugal, financial discipline, financial independence, conservative, long-term vision, discretionary spending, deferred gratification, preparedness, self-control, cushion
```
```
less specific: actors and musicians should ( support democracy ).
clarifies: actors and musicians should ( wield their celebrity to amplify pro-democracy messaging / marshal their considerable influence in the service of the democratic cause ).
***
less specific: amid a contemporary culture that thrives on profligacy, the discipline necessary to retire early is a vanishing quality. rather than yielding to the lure of indulgence, the aspiring retiree must ( be careful ).
clarifies: amid a contemporary culture that thrives on profligacy, the discipline necessary to retire early is a vanishing quality. rather than yielding to the lure of indulgence, the aspiring retiree must ( master their desires / exercise self-restraint / embrace frugality / restrain their appetite for splendor ).
```
```
dull: clean
emotional heft: spotless, immaculate, pristine
***
dull: hot
emotional heft: scorching, searing, blistering
***
dull: happy
emotional heft: euphoric
```
```
text: {guide: vividly describe the premise of the show "seinfield"} -> set in the heart of bustling new york city, the sitcom "seinfield" follows the everyday {restrict: term that implies they get into trouble but in a comical way} -> misadventures of four neurotic friends. on any given episode, one can find them quarreling over their favorite diner's latest menu change, haggling over the division of household expenses, or contriving a scheme to spy on the apartment's newest resident. mundane as their exploits may be, they never fail to elicit a hearty laugh. {guide: mention how larry david is responsible} -> behind the show's witty, incisive dialogue lies the sharp pen of larry david, who co-created the show with jerry seinfeld. {guide: mention how larry david came up with the most recognizable lines} -> it is his genius that conjured such instantly {restrict: term that imply everybody knows them} -> recognizable quips as "no soup for you!" and "not that there's anything wrong with that!". {guide: mention how humanity should revel in having such good comedians these days} -> as this list of laugh-out-loud lines attests, the world is fortunate to ( count such a sharp comedic mind among its ranks / have such a talented humorist in its midst / have such comedic talent gracing its airwaves ).
```
```
system: i'm not sure i agree with your assessment of the situation.
user: replace "assessment" with a more casual term.
chatbot: ( take / read / view ).
system: i've been feeling really busy lately, trying to balance work and family responsibilities.
user: replace "busy" with a more intense word that conveys it is too much.
chatbot: overwhelmed.
system: the candidate's stance on immigration is quite nebulous, and it's difficult to discern exactly what they would do if elected.
user: replace "nebulous" with a word that conveys a lack of detail.
chatbot: ( vague / ambiguous ).
```
```
user: exorbitant in price, scholarly journals unconscionably define the breadth of one's knowledge awareness by the contents of their wallet. [replace “knowledge awareness” with a more natural expression]
chatbot: intellectual horizons.
user: can you do another alternative to “intellectual horizons” that has more relation to “scholarly journals”?
chatbot: academic enlightenment.
```
```
key: calculate.
syn: estimate, consider, weigh, number, count, apportion, proportion, investigate, reckon, rate, compute.
ant: guess, conjecture, hit, chance, risk, stake, miscalculate.
```
```
description: more forceful version of curious that is less forceful than nosy
answer: inquisitive
description: more forceful version of hopeful that is less forceful than overconfident
answer: optimistic
```
```
key: inquisitive
positive: curious, interested
negative: nosy, prying
***
key: witty
positive: clever, humorous
negative: sarcastic, caustic
***
key: influential
positive: impactful, powerful
negative: overbearing, domineering
```
```
defective: the blogger's { use of language imprecise } confused an already complicated issue.
precise: the blogger's ( vague wording ) confused an already complicated issue.
defective: the senator's speech was high on { words sounding dignified } but low on concrete proposals.
precise: the senator's speech was high on ( lofty rhetoric ) but low on concrete proposals.
```
```
example: the new car uses gas.
boring: uses
stronger: guzzles
example: he hates people that are rude.
boring: hates
stronger: loathes, abhors, despises, scorns, detests
```
```
initial: The music at the party was [ loud; replace with a word that suggests a more uncomfortable noise level ] and overwhelming.
modified: The music at the party was { ear-splitting } and overwhelming.
initial: their house is [ small; replace with a positive spin ].
modified: their house is { cozy }.
```
```
defective: they spent the weekend enjoying { time do what you want }.
precise: they spent the weekend enjoying ( leisure activities).
defective: the author rightly notes the inequities perpetuated by { employment based on who you know }.
precise: the author rightly notes the inequities perpetuated by ( nepotism ).
defective: the senator's speech was high on { words sounding dignified } but low on concrete proposals.
precise: the senator's speech was high on ( lofty rhetoric ) but low on concrete proposals.
```
```
persona: human resources manager
buzzwords: pipeline, talent, retention, compensation, flexible, recruitment, personnel, resume, competitive, quality, onboard
```
|
NasimB/gpt2-cocnat-aochildes-mod-sub-length-10k
|
NasimB
| 2023-07-10T19:27:45Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T17:32:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-aochildes-mod-sub-length-10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-aochildes-mod-sub-length-10k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6933 | 0.29 | 500 | 5.6341 |
| 5.3469 | 0.59 | 1000 | 5.1996 |
| 4.9864 | 0.88 | 1500 | 4.9580 |
| 4.7189 | 1.18 | 2000 | 4.8083 |
| 4.5609 | 1.47 | 2500 | 4.6850 |
| 4.4523 | 1.77 | 3000 | 4.5821 |
| 4.317 | 2.06 | 3500 | 4.5146 |
| 4.1329 | 2.35 | 4000 | 4.4652 |
| 4.1086 | 2.65 | 4500 | 4.4071 |
| 4.0635 | 2.94 | 5000 | 4.3601 |
| 3.8482 | 3.24 | 5500 | 4.3553 |
| 3.8055 | 3.53 | 6000 | 4.3282 |
| 3.7859 | 3.83 | 6500 | 4.2926 |
| 3.6619 | 4.12 | 7000 | 4.2970 |
| 3.5196 | 4.41 | 7500 | 4.2933 |
| 3.5139 | 4.71 | 8000 | 4.2857 |
| 3.4905 | 5.0 | 8500 | 4.2710 |
| 3.3203 | 5.3 | 9000 | 4.2871 |
| 3.322 | 5.59 | 9500 | 4.2867 |
| 3.3172 | 5.89 | 10000 | 4.2863 |
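As a rough consistency check on the table above (assuming the Step column counts optimizer steps and the batch size of 64 listed earlier):

```python
# Epoch 5.0 lands at step 8500, so about 1700 optimizer steps per epoch;
# with a batch size of 64 that implies roughly 108,800 training examples seen per epoch.
steps_at_epoch_5 = 8500
steps_per_epoch = steps_at_epoch_5 // 5
train_batch_size = 64
approx_examples_per_epoch = steps_per_epoch * train_batch_size
print(steps_per_epoch, approx_examples_per_epoch)  # 1700 108800
```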
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
grace-pro/afriberta-small-finetuned-hausa
|
grace-pro
| 2023-07-10T19:21:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T18:52:05Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-small-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-small-finetuned-hausa
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
- Precision: 0.6873
- Recall: 0.4713
- F1: 0.5592
- Accuracy: 0.9618
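The reported F1 is the harmonic mean of the entity-level precision and recall; a quick check against the numbers above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.6873, 0.4713), 4))  # 0.5592, matching the card
```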
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1493 | 1.0 | 2624 | 0.1382 | 0.6423 | 0.3968 | 0.4905 | 0.9572 |
| 0.1259 | 2.0 | 5248 | 0.1319 | 0.6734 | 0.4415 | 0.5333 | 0.9603 |
| 0.106 | 3.0 | 7872 | 0.1385 | 0.6908 | 0.4502 | 0.5452 | 0.9611 |
| 0.0949 | 4.0 | 10496 | 0.1377 | 0.6752 | 0.4759 | 0.5583 | 0.9613 |
| 0.086 | 5.0 | 13120 | 0.1444 | 0.6873 | 0.4713 | 0.5592 | 0.9618 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-small_tobacco3482_kd_CEKD_t2.5_a0.5
|
jordyvl
| 2023-07-10T19:16:42Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T18:37:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4300
- Accuracy: 0.83
- Brier Loss: 0.2807
- Nll: 1.0350
- F1 Micro: 0.83
- F1 Macro: 0.8295
- Ece: 0.2287
- Aurc: 0.0560
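Among the metrics above, the Brier loss measures the squared error between the predicted class probabilities and the one-hot label. A minimal sketch using the sum-over-classes convention (conventions differ, and the example probabilities are made up, not from this model):

```python
def brier_loss(probs, label):
    """Multiclass Brier score: squared error vs. the one-hot target, summed over classes."""
    return sum((p - (1.0 if i == label else 0.0)) ** 2 for i, p in enumerate(probs))

# A confident, correct prediction over 3 classes scores near 0...
print(round(brier_loss([0.9, 0.05, 0.05], 0), 3))  # 0.015
# ...while a confident, wrong one scores near 2.
print(round(brier_loss([0.05, 0.9, 0.05], 0), 3))  # 1.715
```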
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.6525 | 0.225 | 0.8757 | 5.3231 | 0.225 | 0.1387 | 0.2689 | 0.6977 |
| No log | 2.0 | 14 | 1.3106 | 0.405 | 0.7470 | 3.3487 | 0.405 | 0.2195 | 0.2936 | 0.4032 |
| No log | 3.0 | 21 | 0.9127 | 0.585 | 0.5785 | 1.8686 | 0.585 | 0.5142 | 0.2974 | 0.2067 |
| No log | 4.0 | 28 | 0.7280 | 0.715 | 0.4339 | 1.6780 | 0.715 | 0.6761 | 0.2672 | 0.1204 |
| No log | 5.0 | 35 | 0.6523 | 0.775 | 0.3676 | 1.6537 | 0.775 | 0.7619 | 0.2554 | 0.0929 |
| No log | 6.0 | 42 | 0.5888 | 0.785 | 0.3502 | 1.3926 | 0.785 | 0.7538 | 0.2277 | 0.0908 |
| No log | 7.0 | 49 | 0.6113 | 0.805 | 0.3326 | 1.7118 | 0.805 | 0.7903 | 0.2428 | 0.0803 |
| No log | 8.0 | 56 | 0.5404 | 0.785 | 0.3178 | 1.1557 | 0.785 | 0.7671 | 0.2183 | 0.0716 |
| No log | 9.0 | 63 | 0.5380 | 0.82 | 0.3051 | 1.3231 | 0.82 | 0.8072 | 0.2168 | 0.0773 |
| No log | 10.0 | 70 | 0.6035 | 0.775 | 0.3508 | 1.3888 | 0.775 | 0.7682 | 0.2191 | 0.0812 |
| No log | 11.0 | 77 | 0.5473 | 0.795 | 0.3202 | 1.2622 | 0.795 | 0.7740 | 0.2303 | 0.0626 |
| No log | 12.0 | 84 | 0.4860 | 0.825 | 0.2937 | 1.3575 | 0.825 | 0.8053 | 0.2392 | 0.0727 |
| No log | 13.0 | 91 | 0.5046 | 0.81 | 0.3032 | 1.1857 | 0.81 | 0.8086 | 0.2248 | 0.0564 |
| No log | 14.0 | 98 | 0.4745 | 0.825 | 0.2870 | 1.2338 | 0.825 | 0.8089 | 0.2441 | 0.0518 |
| No log | 15.0 | 105 | 0.4764 | 0.81 | 0.2943 | 1.0325 | 0.81 | 0.8110 | 0.1935 | 0.0556 |
| No log | 16.0 | 112 | 0.4918 | 0.81 | 0.3062 | 1.0551 | 0.81 | 0.8015 | 0.2198 | 0.0587 |
| No log | 17.0 | 119 | 0.4757 | 0.815 | 0.2970 | 1.4203 | 0.815 | 0.7965 | 0.2263 | 0.0850 |
| No log | 18.0 | 126 | 0.4586 | 0.825 | 0.2926 | 1.0361 | 0.825 | 0.8268 | 0.2279 | 0.0583 |
| No log | 19.0 | 133 | 0.4503 | 0.835 | 0.2855 | 1.1476 | 0.835 | 0.8301 | 0.2392 | 0.0589 |
| No log | 20.0 | 140 | 0.4780 | 0.805 | 0.3105 | 0.9928 | 0.805 | 0.7902 | 0.1988 | 0.0775 |
| No log | 21.0 | 147 | 0.4965 | 0.8 | 0.3205 | 1.1887 | 0.8000 | 0.8029 | 0.2410 | 0.0702 |
| No log | 22.0 | 154 | 0.4753 | 0.815 | 0.3016 | 0.9609 | 0.815 | 0.8169 | 0.2163 | 0.0580 |
| No log | 23.0 | 161 | 0.4733 | 0.8 | 0.3074 | 1.2566 | 0.8000 | 0.8001 | 0.2162 | 0.0704 |
| No log | 24.0 | 168 | 0.4472 | 0.815 | 0.2888 | 1.0352 | 0.815 | 0.8187 | 0.2317 | 0.0590 |
| No log | 25.0 | 175 | 0.4434 | 0.815 | 0.2854 | 0.9874 | 0.815 | 0.8186 | 0.2149 | 0.0554 |
| No log | 26.0 | 182 | 0.4316 | 0.82 | 0.2754 | 1.0477 | 0.82 | 0.8267 | 0.2195 | 0.0508 |
| No log | 27.0 | 189 | 0.4276 | 0.83 | 0.2751 | 1.1016 | 0.83 | 0.8336 | 0.2050 | 0.0525 |
| No log | 28.0 | 196 | 0.4329 | 0.82 | 0.2795 | 1.0537 | 0.82 | 0.8220 | 0.2158 | 0.0611 |
| No log | 29.0 | 203 | 0.4327 | 0.82 | 0.2827 | 1.1766 | 0.82 | 0.8237 | 0.2024 | 0.0603 |
| No log | 30.0 | 210 | 0.4317 | 0.82 | 0.2820 | 1.0331 | 0.82 | 0.8219 | 0.2083 | 0.0611 |
| No log | 31.0 | 217 | 0.4316 | 0.82 | 0.2803 | 1.0974 | 0.82 | 0.8263 | 0.1984 | 0.0575 |
| No log | 32.0 | 224 | 0.4340 | 0.82 | 0.2833 | 1.0384 | 0.82 | 0.8240 | 0.2202 | 0.0590 |
| No log | 33.0 | 231 | 0.4333 | 0.81 | 0.2824 | 1.0355 | 0.81 | 0.8160 | 0.2103 | 0.0586 |
| No log | 34.0 | 238 | 0.4309 | 0.83 | 0.2817 | 1.1015 | 0.83 | 0.8307 | 0.2107 | 0.0577 |
| No log | 35.0 | 245 | 0.4321 | 0.82 | 0.2817 | 1.0359 | 0.82 | 0.8229 | 0.2147 | 0.0590 |
| No log | 36.0 | 252 | 0.4304 | 0.825 | 0.2802 | 1.1016 | 0.825 | 0.8257 | 0.2137 | 0.0569 |
| No log | 37.0 | 259 | 0.4303 | 0.825 | 0.2811 | 1.0990 | 0.825 | 0.8268 | 0.2149 | 0.0581 |
| No log | 38.0 | 266 | 0.4314 | 0.825 | 0.2814 | 1.1003 | 0.825 | 0.8257 | 0.2163 | 0.0581 |
| No log | 39.0 | 273 | 0.4302 | 0.82 | 0.2806 | 1.1007 | 0.82 | 0.8226 | 0.2102 | 0.0576 |
| No log | 40.0 | 280 | 0.4307 | 0.825 | 0.2809 | 1.0376 | 0.825 | 0.8264 | 0.2049 | 0.0573 |
| No log | 41.0 | 287 | 0.4303 | 0.82 | 0.2808 | 1.0434 | 0.82 | 0.8226 | 0.2096 | 0.0574 |
| No log | 42.0 | 294 | 0.4310 | 0.825 | 0.2817 | 1.0376 | 0.825 | 0.8268 | 0.2140 | 0.0580 |
| No log | 43.0 | 301 | 0.4310 | 0.825 | 0.2813 | 1.0391 | 0.825 | 0.8257 | 0.2147 | 0.0580 |
| No log | 44.0 | 308 | 0.4301 | 0.825 | 0.2808 | 1.0389 | 0.825 | 0.8257 | 0.2064 | 0.0573 |
| No log | 45.0 | 315 | 0.4305 | 0.83 | 0.2811 | 1.0419 | 0.83 | 0.8307 | 0.2300 | 0.0577 |
| No log | 46.0 | 322 | 0.4303 | 0.82 | 0.2808 | 1.0423 | 0.82 | 0.8226 | 0.2197 | 0.0582 |
| No log | 47.0 | 329 | 0.4304 | 0.825 | 0.2811 | 1.0405 | 0.825 | 0.8257 | 0.2240 | 0.0580 |
| No log | 48.0 | 336 | 0.4300 | 0.82 | 0.2805 | 1.0407 | 0.82 | 0.8226 | 0.2105 | 0.0574 |
| No log | 49.0 | 343 | 0.4307 | 0.825 | 0.2812 | 1.0381 | 0.825 | 0.8257 | 0.2252 | 0.0577 |
| No log | 50.0 | 350 | 0.4304 | 0.82 | 0.2810 | 1.0422 | 0.82 | 0.8226 | 0.2353 | 0.0578 |
| No log | 51.0 | 357 | 0.4310 | 0.825 | 0.2813 | 1.0382 | 0.825 | 0.8264 | 0.2153 | 0.0569 |
| No log | 52.0 | 364 | 0.4309 | 0.82 | 0.2814 | 1.0380 | 0.82 | 0.8226 | 0.2282 | 0.0574 |
| No log | 53.0 | 371 | 0.4307 | 0.825 | 0.2813 | 1.0357 | 0.825 | 0.8264 | 0.2250 | 0.0568 |
| No log | 54.0 | 378 | 0.4305 | 0.82 | 0.2810 | 1.0366 | 0.82 | 0.8226 | 0.2284 | 0.0575 |
| No log | 55.0 | 385 | 0.4304 | 0.825 | 0.2811 | 1.0351 | 0.825 | 0.8264 | 0.2241 | 0.0566 |
| No log | 56.0 | 392 | 0.4308 | 0.825 | 0.2813 | 1.0369 | 0.825 | 0.8257 | 0.2414 | 0.0572 |
| No log | 57.0 | 399 | 0.4305 | 0.825 | 0.2810 | 1.0356 | 0.825 | 0.8257 | 0.2322 | 0.0571 |
| No log | 58.0 | 406 | 0.4302 | 0.82 | 0.2808 | 1.0359 | 0.82 | 0.8226 | 0.2368 | 0.0569 |
| No log | 59.0 | 413 | 0.4302 | 0.82 | 0.2809 | 1.0346 | 0.82 | 0.8226 | 0.2271 | 0.0569 |
| No log | 60.0 | 420 | 0.4303 | 0.82 | 0.2809 | 1.0357 | 0.82 | 0.8226 | 0.2272 | 0.0570 |
| No log | 61.0 | 427 | 0.4304 | 0.825 | 0.2810 | 1.0360 | 0.825 | 0.8257 | 0.2325 | 0.0569 |
| No log | 62.0 | 434 | 0.4303 | 0.825 | 0.2809 | 1.0360 | 0.825 | 0.8257 | 0.2321 | 0.0568 |
| No log | 63.0 | 441 | 0.4303 | 0.83 | 0.2809 | 1.0356 | 0.83 | 0.8295 | 0.2300 | 0.0562 |
| No log | 64.0 | 448 | 0.4304 | 0.825 | 0.2810 | 1.0347 | 0.825 | 0.8264 | 0.2242 | 0.0564 |
| No log | 65.0 | 455 | 0.4301 | 0.83 | 0.2808 | 1.0361 | 0.83 | 0.8295 | 0.2384 | 0.0564 |
| No log | 66.0 | 462 | 0.4303 | 0.83 | 0.2810 | 1.0359 | 0.83 | 0.8295 | 0.2293 | 0.0563 |
| No log | 67.0 | 469 | 0.4302 | 0.83 | 0.2809 | 1.0360 | 0.83 | 0.8295 | 0.2386 | 0.0564 |
| No log | 68.0 | 476 | 0.4304 | 0.83 | 0.2810 | 1.0360 | 0.83 | 0.8295 | 0.2384 | 0.0563 |
| No log | 69.0 | 483 | 0.4305 | 0.83 | 0.2812 | 1.0355 | 0.83 | 0.8295 | 0.2295 | 0.0564 |
| No log | 70.0 | 490 | 0.4302 | 0.825 | 0.2808 | 1.0354 | 0.825 | 0.8264 | 0.2239 | 0.0561 |
| No log | 71.0 | 497 | 0.4305 | 0.83 | 0.2812 | 1.0352 | 0.83 | 0.8295 | 0.2296 | 0.0564 |
| 0.1776 | 72.0 | 504 | 0.4303 | 0.83 | 0.2808 | 1.0356 | 0.83 | 0.8295 | 0.2287 | 0.0561 |
| 0.1776 | 73.0 | 511 | 0.4301 | 0.825 | 0.2807 | 1.0351 | 0.825 | 0.8264 | 0.2348 | 0.0563 |
| 0.1776 | 74.0 | 518 | 0.4304 | 0.83 | 0.2811 | 1.0353 | 0.83 | 0.8295 | 0.2195 | 0.0562 |
| 0.1776 | 75.0 | 525 | 0.4301 | 0.825 | 0.2808 | 1.0355 | 0.825 | 0.8257 | 0.2320 | 0.0568 |
| 0.1776 | 76.0 | 532 | 0.4302 | 0.83 | 0.2808 | 1.0348 | 0.83 | 0.8295 | 0.2289 | 0.0561 |
| 0.1776 | 77.0 | 539 | 0.4301 | 0.83 | 0.2808 | 1.0355 | 0.83 | 0.8295 | 0.2300 | 0.0562 |
| 0.1776 | 78.0 | 546 | 0.4301 | 0.83 | 0.2808 | 1.0354 | 0.83 | 0.8295 | 0.2394 | 0.0563 |
| 0.1776 | 79.0 | 553 | 0.4302 | 0.83 | 0.2809 | 1.0346 | 0.83 | 0.8295 | 0.2287 | 0.0560 |
| 0.1776 | 80.0 | 560 | 0.4302 | 0.83 | 0.2809 | 1.0353 | 0.83 | 0.8295 | 0.2299 | 0.0563 |
| 0.1776 | 81.0 | 567 | 0.4302 | 0.83 | 0.2809 | 1.0350 | 0.83 | 0.8295 | 0.2299 | 0.0563 |
| 0.1776 | 82.0 | 574 | 0.4302 | 0.83 | 0.2808 | 1.0354 | 0.83 | 0.8295 | 0.2298 | 0.0560 |
| 0.1776 | 83.0 | 581 | 0.4302 | 0.83 | 0.2809 | 1.0350 | 0.83 | 0.8295 | 0.2299 | 0.0561 |
| 0.1776 | 84.0 | 588 | 0.4299 | 0.83 | 0.2807 | 1.0352 | 0.83 | 0.8295 | 0.2287 | 0.0561 |
| 0.1776 | 85.0 | 595 | 0.4301 | 0.83 | 0.2808 | 1.0349 | 0.83 | 0.8295 | 0.2296 | 0.0562 |
| 0.1776 | 86.0 | 602 | 0.4301 | 0.83 | 0.2808 | 1.0351 | 0.83 | 0.8295 | 0.2287 | 0.0562 |
| 0.1776 | 87.0 | 609 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2297 | 0.0561 |
| 0.1776 | 88.0 | 616 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2287 | 0.0562 |
| 0.1776 | 89.0 | 623 | 0.4300 | 0.83 | 0.2807 | 1.0353 | 0.83 | 0.8295 | 0.2296 | 0.0560 |
| 0.1776 | 90.0 | 630 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2297 | 0.0559 |
| 0.1776 | 91.0 | 637 | 0.4300 | 0.83 | 0.2807 | 1.0352 | 0.83 | 0.8295 | 0.2296 | 0.0562 |
| 0.1776 | 92.0 | 644 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2287 | 0.0561 |
| 0.1776 | 93.0 | 651 | 0.4300 | 0.83 | 0.2807 | 1.0351 | 0.83 | 0.8295 | 0.2297 | 0.0562 |
| 0.1776 | 94.0 | 658 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2297 | 0.0560 |
| 0.1776 | 95.0 | 665 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2297 | 0.0562 |
| 0.1776 | 96.0 | 672 | 0.4300 | 0.83 | 0.2807 | 1.0349 | 0.83 | 0.8295 | 0.2296 | 0.0561 |
| 0.1776 | 97.0 | 679 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2296 | 0.0560 |
| 0.1776 | 98.0 | 686 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2296 | 0.0560 |
| 0.1776 | 99.0 | 693 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2287 | 0.0560 |
| 0.1776 | 100.0 | 700 | 0.4300 | 0.83 | 0.2807 | 1.0350 | 0.83 | 0.8295 | 0.2287 | 0.0560 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
NasimB/gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k
|
NasimB
| 2023-07-10T19:10:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T16:50:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-all-mod-datasets-rarity-all-iorder-13k-2p6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7606 | 0.29 | 500 | 5.6940 |
| 5.4347 | 0.59 | 1000 | 5.2560 |
| 5.0945 | 0.88 | 1500 | 5.0226 |
| 4.8232 | 1.18 | 2000 | 4.8777 |
| 4.675 | 1.47 | 2500 | 4.7626 |
| 4.5767 | 1.77 | 3000 | 4.6625 |
| 4.4488 | 2.06 | 3500 | 4.5933 |
| 4.2612 | 2.36 | 4000 | 4.5563 |
| 4.245 | 2.65 | 4500 | 4.4882 |
| 4.208 | 2.94 | 5000 | 4.4332 |
| 3.9773 | 3.24 | 5500 | 4.4362 |
| 3.9484 | 3.53 | 6000 | 4.4046 |
| 3.9304 | 3.83 | 6500 | 4.3669 |
| 3.7943 | 4.12 | 7000 | 4.3731 |
| 3.6517 | 4.42 | 7500 | 4.3646 |
| 3.646 | 4.71 | 8000 | 4.3456 |
| 3.6381 | 5.01 | 8500 | 4.3333 |
| 3.3812 | 5.3 | 9000 | 4.3586 |
| 3.3875 | 5.59 | 9500 | 4.3536 |
| 3.3847 | 5.89 | 10000 | 4.3483 |
| 3.2816 | 6.18 | 10500 | 4.3600 |
| 3.2295 | 6.48 | 11000 | 4.3636 |
| 3.223 | 6.77 | 11500 | 4.3630 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
darthPanda/ppo-Huggy-v0
|
darthPanda
| 2023-07-10T18:57:02Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T18:55:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:

https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: darthPanda/ppo-Huggy-v0
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
43m1m4n/jpbrinx
|
43m1m4n
| 2023-07-10T18:53:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-04T20:40:20Z |
---
license: creativeml-openrail-m
---
|
MaitreHibou/Reinforce-Cartpole-v1
|
MaitreHibou
| 2023-07-10T18:49:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T18:49:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DavidSolan0/coverart
|
DavidSolan0
| 2023-07-10T18:34:53Z | 9 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T18:30:01Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### coverart Dreambooth model trained by DavidSolan0 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
simonestradasch/COMPner-bert-base-spanish-wwm-cased
|
simonestradasch
| 2023-07-10T18:28:38Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:simonestradasch/NERcomp",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T18:07:06Z |
---
language:
- es
tags:
- generated_from_trainer
datasets:
- simonestradasch/NERcomp
model-index:
- name: COMPner-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COMPner-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the simonestradasch/NERcomp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2793
- Body Part Precision: 0.6700
- Body Part Recall: 0.7186
- Body Part F1: 0.6934
- Body Part Number: 565
- Disease Precision: 0.6966
- Disease Recall: 0.7533
- Disease F1: 0.7238
- Disease Number: 1350
- Family Member Precision: 0.9
- Family Member Recall: 0.75
- Family Member F1: 0.8182
- Family Member Number: 24
- Medication Precision: 0.7143
- Medication Recall: 0.6190
- Medication F1: 0.6633
- Medication Number: 105
- Procedure Precision: 0.5233
- Procedure Recall: 0.5125
- Procedure F1: 0.5178
- Procedure Number: 439
- Overall Precision: 0.6640
- Overall Recall: 0.6971
- Overall F1: 0.6802
- Overall Accuracy: 0.9136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4741 | 1.0 | 703 | 0.2932 | 0.6449 | 0.6301 | 0.6374 | 565 | 0.6984 | 0.7170 | 0.7076 | 1350 | 0.9412 | 0.6667 | 0.7805 | 24 | 0.8551 | 0.5619 | 0.6782 | 105 | 0.5113 | 0.3599 | 0.4225 | 439 | 0.6674 | 0.6271 | 0.6466 | 0.9091 |
| 0.259 | 2.0 | 1406 | 0.2793 | 0.6700 | 0.7186 | 0.6934 | 565 | 0.6966 | 0.7533 | 0.7238 | 1350 | 0.9 | 0.75 | 0.8182 | 24 | 0.7143 | 0.6190 | 0.6633 | 105 | 0.5233 | 0.5125 | 0.5178 | 439 | 0.6640 | 0.6971 | 0.6802 | 0.9136 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PraveenJesu/openai-whisper-medium-peft-lora-v2.2.5
|
PraveenJesu
| 2023-07-10T18:28:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T18:28:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
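The quantization settings above correspond to a plain 8-bit (LLM.int8()) load. Represented as a dict, as a sketch of the configuration rather than the exact object PEFT serializes:

```python
# Sketch of the bitsandbytes settings listed above, as a plain dict.
bnb_config = {
    "load_in_8bit": True,             # 8-bit weights via LLM.int8()
    "load_in_4bit": False,
    "llm_int8_threshold": 6.0,        # outlier threshold for mixed-precision decomposition
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",     # unused here, since load_in_4bit is False
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float32",
}

# Exactly one of the two load modes should be active.
assert bnb_config["load_in_8bit"] != bnb_config["load_in_4bit"]
```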
### Framework versions
- PEFT 0.4.0.dev0
|
MaitreHibou/dqn-SpaceInvadersNoFrameskip-v4
|
MaitreHibou
| 2023-07-10T18:21:47Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T18:21:06Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 656.50 +/- 140.98
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MaitreHibou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MaitreHibou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MaitreHibou
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
skrl/IsaacGymEnvs-AnymalTerrain-PPO
|
skrl
| 2023-07-10T18:15:29Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-24T20:41:55Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 19.88 +/- 0.5
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-AnymalTerrain
type: IsaacGymEnvs-AnymalTerrain
---
<!-- ---
torch: 19.88 +/- 0.5
jax: 17.24 +/- 0.62
numpy: 17.8 +/- 0.29
--- -->
# IsaacGymEnvs-AnymalTerrain-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** AnymalTerrain
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-AnymalTerrain-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-AnymalTerrain-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 24 # memory_size
cfg["learning_epochs"] = 5
cfg["mini_batches"] = 6 # 24 * 4096 / 16384
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.001
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
FerhatDk/wav2vec2-base-finetuned-ks
|
FerhatDk
| 2023-07-10T18:08:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-09-22T08:59:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3550
- Accuracy: 0.8727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 0.6840 | 0.6 |
| 0.6867 | 2.0 | 16 | 0.6780 | 0.6364 |
| 0.6742 | 3.0 | 24 | 0.6601 | 0.6182 |
| 0.6446 | 4.0 | 32 | 0.6294 | 0.6364 |
| 0.6299 | 5.0 | 40 | 0.6002 | 0.6727 |
| 0.6299 | 6.0 | 48 | 0.5755 | 0.7091 |
| 0.6021 | 7.0 | 56 | 0.5530 | 0.7273 |
| 0.5678 | 8.0 | 64 | 0.5036 | 0.8182 |
| 0.5512 | 9.0 | 72 | 0.4753 | 0.8545 |
| 0.4784 | 10.0 | 80 | 0.4184 | 0.9273 |
| 0.4784 | 11.0 | 88 | 0.4102 | 0.8909 |
| 0.4515 | 12.0 | 96 | 0.4444 | 0.8182 |
| 0.4878 | 13.0 | 104 | 0.3780 | 0.9091 |
| 0.4418 | 14.0 | 112 | 0.4570 | 0.8 |
| 0.4746 | 15.0 | 120 | 0.3870 | 0.8545 |
| 0.4746 | 16.0 | 128 | 0.3932 | 0.8364 |
| 0.4226 | 17.0 | 136 | 0.2779 | 0.9636 |
| 0.4301 | 18.0 | 144 | 0.3125 | 0.9455 |
| 0.3482 | 19.0 | 152 | 0.3212 | 0.9091 |
| 0.3611 | 20.0 | 160 | 0.3925 | 0.8364 |
| 0.3611 | 21.0 | 168 | 0.3389 | 0.8909 |
| 0.3507 | 22.0 | 176 | 0.3099 | 0.8727 |
| 0.3241 | 23.0 | 184 | 0.3120 | 0.8727 |
| 0.2533 | 24.0 | 192 | 0.2313 | 0.9455 |
| 0.2466 | 25.0 | 200 | 0.3550 | 0.8727 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Bellaaazzzzz/model_archive
|
Bellaaazzzzz
| 2023-07-10T18:00:43Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-10T17:41:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Bellaaazzzzz/model_archive
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
Validation result of round 1.

Validation result of round 2.

|
jordyvl/vit-small_tobacco3482_kd_CEKD_t1.5_a0.7
|
jordyvl
| 2023-07-10T17:57:06Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T17:18:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4797
- Accuracy: 0.835
- Brier Loss: 0.2522
- Nll: 0.8627
- F1 Micro: 0.835
- F1 Macro: 0.8222
- Ece: 0.1830
- Aurc: 0.0434
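For reference, the Brier loss above is the mean squared difference between the predicted probability vector and the one-hot true label; a small illustrative helper (not the actual evaluation code behind this card):

```python
def brier_loss(probs, label):
    """Squared error between a probability vector and the one-hot true label."""
    return sum((p - (1.0 if i == label else 0.0)) ** 2 for i, p in enumerate(probs))

# A confident, correct prediction yields a small score.
print(brier_loss([0.7, 0.2, 0.1], 0))  # ≈ 0.14
```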
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
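As a rough sketch, these settings map onto 🤗 Transformers `TrainingArguments` like so (the `output_dir` value is an assumption; model and dataset wiring are omitted):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumption.
args = TrainingArguments(
    output_dir="vit-small_tobacco3482_kd_CEKD_t1.5_a0.7",
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```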
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.9341 | 0.215 | 0.8749 | 5.3238 | 0.2150 | 0.1264 | 0.2642 | 0.6914 |
| No log | 2.0 | 14 | 1.5320 | 0.405 | 0.7410 | 3.5078 | 0.405 | 0.2276 | 0.2957 | 0.4015 |
| No log | 3.0 | 21 | 1.0532 | 0.635 | 0.5629 | 2.0153 | 0.635 | 0.5844 | 0.3037 | 0.2006 |
| No log | 4.0 | 28 | 0.7915 | 0.715 | 0.4093 | 1.6974 | 0.715 | 0.6762 | 0.2420 | 0.1131 |
| No log | 5.0 | 35 | 0.8024 | 0.745 | 0.3869 | 1.7109 | 0.745 | 0.7548 | 0.2160 | 0.1006 |
| No log | 6.0 | 42 | 0.7162 | 0.765 | 0.3351 | 1.8105 | 0.765 | 0.7599 | 0.2216 | 0.0874 |
| No log | 7.0 | 49 | 0.6966 | 0.785 | 0.3304 | 1.5292 | 0.785 | 0.7682 | 0.2058 | 0.0979 |
| No log | 8.0 | 56 | 0.6317 | 0.805 | 0.2995 | 1.3486 | 0.805 | 0.7887 | 0.2266 | 0.0721 |
| No log | 9.0 | 63 | 0.6903 | 0.805 | 0.3304 | 1.5866 | 0.805 | 0.7971 | 0.2371 | 0.0995 |
| No log | 10.0 | 70 | 0.6223 | 0.805 | 0.2940 | 1.3478 | 0.805 | 0.8114 | 0.2281 | 0.0697 |
| No log | 11.0 | 77 | 0.6350 | 0.795 | 0.3145 | 1.3386 | 0.795 | 0.7730 | 0.2063 | 0.0962 |
| No log | 12.0 | 84 | 0.5570 | 0.835 | 0.2666 | 1.2662 | 0.835 | 0.8181 | 0.1951 | 0.0553 |
| No log | 13.0 | 91 | 0.5610 | 0.81 | 0.2858 | 1.2619 | 0.81 | 0.8002 | 0.1884 | 0.0626 |
| No log | 14.0 | 98 | 0.5843 | 0.8 | 0.2961 | 1.0782 | 0.8000 | 0.8083 | 0.1993 | 0.0683 |
| No log | 15.0 | 105 | 0.5918 | 0.78 | 0.2965 | 1.1207 | 0.78 | 0.7861 | 0.1895 | 0.0634 |
| No log | 16.0 | 112 | 0.5541 | 0.84 | 0.2765 | 1.3189 | 0.8400 | 0.8455 | 0.1969 | 0.0597 |
| No log | 17.0 | 119 | 0.5037 | 0.835 | 0.2568 | 0.9024 | 0.835 | 0.8248 | 0.2083 | 0.0499 |
| No log | 18.0 | 126 | 0.5050 | 0.85 | 0.2563 | 1.0032 | 0.85 | 0.8441 | 0.2147 | 0.0580 |
| No log | 19.0 | 133 | 0.5430 | 0.815 | 0.2779 | 1.1046 | 0.815 | 0.8044 | 0.1906 | 0.0562 |
| No log | 20.0 | 140 | 0.5276 | 0.84 | 0.2743 | 0.9964 | 0.8400 | 0.8144 | 0.2104 | 0.0597 |
| No log | 21.0 | 147 | 0.5155 | 0.835 | 0.2686 | 0.9556 | 0.835 | 0.8210 | 0.1962 | 0.0572 |
| No log | 22.0 | 154 | 0.4937 | 0.835 | 0.2581 | 1.0079 | 0.835 | 0.8172 | 0.1975 | 0.0479 |
| No log | 23.0 | 161 | 0.4931 | 0.845 | 0.2533 | 1.0021 | 0.845 | 0.8270 | 0.1884 | 0.0503 |
| No log | 24.0 | 168 | 0.4869 | 0.83 | 0.2554 | 0.9660 | 0.83 | 0.8084 | 0.1945 | 0.0481 |
| No log | 25.0 | 175 | 0.4843 | 0.845 | 0.2512 | 0.9979 | 0.845 | 0.8316 | 0.1746 | 0.0466 |
| No log | 26.0 | 182 | 0.4866 | 0.835 | 0.2531 | 0.9006 | 0.835 | 0.8188 | 0.1833 | 0.0472 |
| No log | 27.0 | 189 | 0.4882 | 0.825 | 0.2562 | 0.8929 | 0.825 | 0.8043 | 0.2023 | 0.0469 |
| No log | 28.0 | 196 | 0.4814 | 0.82 | 0.2494 | 0.9122 | 0.82 | 0.8060 | 0.1773 | 0.0451 |
| No log | 29.0 | 203 | 0.4749 | 0.835 | 0.2501 | 0.8770 | 0.835 | 0.8252 | 0.1688 | 0.0442 |
| No log | 30.0 | 210 | 0.4761 | 0.84 | 0.2490 | 0.8848 | 0.8400 | 0.8250 | 0.2068 | 0.0443 |
| No log | 31.0 | 217 | 0.4787 | 0.845 | 0.2508 | 0.8754 | 0.845 | 0.8309 | 0.1635 | 0.0438 |
| No log | 32.0 | 224 | 0.4791 | 0.835 | 0.2521 | 0.8711 | 0.835 | 0.8224 | 0.1876 | 0.0446 |
| No log | 33.0 | 231 | 0.4779 | 0.84 | 0.2509 | 0.8650 | 0.8400 | 0.8252 | 0.1813 | 0.0436 |
| No log | 34.0 | 238 | 0.4774 | 0.84 | 0.2513 | 0.8662 | 0.8400 | 0.8252 | 0.1919 | 0.0441 |
| No log | 35.0 | 245 | 0.4760 | 0.835 | 0.2502 | 0.8636 | 0.835 | 0.8224 | 0.1840 | 0.0434 |
| No log | 36.0 | 252 | 0.4784 | 0.84 | 0.2509 | 0.8688 | 0.8400 | 0.8281 | 0.1691 | 0.0437 |
| No log | 37.0 | 259 | 0.4771 | 0.835 | 0.2507 | 0.8670 | 0.835 | 0.8224 | 0.1936 | 0.0440 |
| No log | 38.0 | 266 | 0.4764 | 0.835 | 0.2499 | 0.8614 | 0.835 | 0.8224 | 0.1830 | 0.0434 |
| No log | 39.0 | 273 | 0.4769 | 0.835 | 0.2503 | 0.8651 | 0.835 | 0.8224 | 0.2001 | 0.0438 |
| No log | 40.0 | 280 | 0.4777 | 0.84 | 0.2514 | 0.8608 | 0.8400 | 0.8281 | 0.1832 | 0.0435 |
| No log | 41.0 | 287 | 0.4777 | 0.835 | 0.2504 | 0.8650 | 0.835 | 0.8224 | 0.1953 | 0.0437 |
| No log | 42.0 | 294 | 0.4779 | 0.835 | 0.2511 | 0.8629 | 0.835 | 0.8224 | 0.1944 | 0.0440 |
| No log | 43.0 | 301 | 0.4790 | 0.835 | 0.2519 | 0.8631 | 0.835 | 0.8222 | 0.1808 | 0.0439 |
| No log | 44.0 | 308 | 0.4777 | 0.835 | 0.2509 | 0.8604 | 0.835 | 0.8222 | 0.1886 | 0.0435 |
| No log | 45.0 | 315 | 0.4787 | 0.835 | 0.2517 | 0.8620 | 0.835 | 0.8222 | 0.1940 | 0.0437 |
| No log | 46.0 | 322 | 0.4774 | 0.84 | 0.2509 | 0.8614 | 0.8400 | 0.8281 | 0.1779 | 0.0433 |
| No log | 47.0 | 329 | 0.4785 | 0.835 | 0.2517 | 0.8609 | 0.835 | 0.8222 | 0.1811 | 0.0438 |
| No log | 48.0 | 336 | 0.4792 | 0.835 | 0.2521 | 0.8611 | 0.835 | 0.8222 | 0.1849 | 0.0436 |
| No log | 49.0 | 343 | 0.4771 | 0.84 | 0.2509 | 0.8623 | 0.8400 | 0.8281 | 0.1908 | 0.0430 |
| No log | 50.0 | 350 | 0.4793 | 0.835 | 0.2520 | 0.8633 | 0.835 | 0.8222 | 0.1900 | 0.0435 |
| No log | 51.0 | 357 | 0.4786 | 0.83 | 0.2517 | 0.8654 | 0.83 | 0.8159 | 0.1684 | 0.0437 |
| No log | 52.0 | 364 | 0.4792 | 0.83 | 0.2521 | 0.8625 | 0.83 | 0.8166 | 0.1915 | 0.0430 |
| No log | 53.0 | 371 | 0.4785 | 0.835 | 0.2513 | 0.8652 | 0.835 | 0.8222 | 0.1853 | 0.0434 |
| No log | 54.0 | 378 | 0.4798 | 0.835 | 0.2523 | 0.8652 | 0.835 | 0.8222 | 0.1767 | 0.0437 |
| No log | 55.0 | 385 | 0.4791 | 0.835 | 0.2519 | 0.8637 | 0.835 | 0.8222 | 0.1891 | 0.0435 |
| No log | 56.0 | 392 | 0.4790 | 0.835 | 0.2519 | 0.8614 | 0.835 | 0.8222 | 0.1749 | 0.0429 |
| No log | 57.0 | 399 | 0.4782 | 0.835 | 0.2513 | 0.8625 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| No log | 58.0 | 406 | 0.4794 | 0.835 | 0.2521 | 0.8602 | 0.835 | 0.8222 | 0.1758 | 0.0435 |
| No log | 59.0 | 413 | 0.4790 | 0.835 | 0.2517 | 0.8617 | 0.835 | 0.8222 | 0.1754 | 0.0432 |
| No log | 60.0 | 420 | 0.4791 | 0.835 | 0.2520 | 0.8614 | 0.835 | 0.8222 | 0.1830 | 0.0430 |
| No log | 61.0 | 427 | 0.4789 | 0.835 | 0.2518 | 0.8612 | 0.835 | 0.8222 | 0.1870 | 0.0432 |
| No log | 62.0 | 434 | 0.4792 | 0.835 | 0.2520 | 0.8620 | 0.835 | 0.8222 | 0.1902 | 0.0433 |
| No log | 63.0 | 441 | 0.4789 | 0.835 | 0.2518 | 0.8619 | 0.835 | 0.8222 | 0.1997 | 0.0431 |
| No log | 64.0 | 448 | 0.4797 | 0.835 | 0.2523 | 0.8607 | 0.835 | 0.8222 | 0.1833 | 0.0434 |
| No log | 65.0 | 455 | 0.4797 | 0.835 | 0.2522 | 0.8624 | 0.835 | 0.8222 | 0.1922 | 0.0434 |
| No log | 66.0 | 462 | 0.4791 | 0.835 | 0.2519 | 0.8620 | 0.835 | 0.8222 | 0.1894 | 0.0430 |
| No log | 67.0 | 469 | 0.4792 | 0.835 | 0.2520 | 0.8612 | 0.835 | 0.8222 | 0.1885 | 0.0433 |
| No log | 68.0 | 476 | 0.4796 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1918 | 0.0433 |
| No log | 69.0 | 483 | 0.4793 | 0.835 | 0.2521 | 0.8628 | 0.835 | 0.8222 | 0.1828 | 0.0433 |
| No log | 70.0 | 490 | 0.4792 | 0.835 | 0.2519 | 0.8622 | 0.835 | 0.8222 | 0.1918 | 0.0432 |
| No log | 71.0 | 497 | 0.4797 | 0.835 | 0.2523 | 0.8615 | 0.835 | 0.8222 | 0.1836 | 0.0434 |
| 0.194 | 72.0 | 504 | 0.4797 | 0.835 | 0.2522 | 0.8618 | 0.835 | 0.8222 | 0.1842 | 0.0433 |
| 0.194 | 73.0 | 511 | 0.4794 | 0.835 | 0.2521 | 0.8624 | 0.835 | 0.8222 | 0.1914 | 0.0432 |
| 0.194 | 74.0 | 518 | 0.4794 | 0.835 | 0.2521 | 0.8617 | 0.835 | 0.8222 | 0.1915 | 0.0431 |
| 0.194 | 75.0 | 525 | 0.4796 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1917 | 0.0434 |
| 0.194 | 76.0 | 532 | 0.4795 | 0.835 | 0.2520 | 0.8622 | 0.835 | 0.8222 | 0.1985 | 0.0433 |
| 0.194 | 77.0 | 539 | 0.4795 | 0.835 | 0.2520 | 0.8623 | 0.835 | 0.8222 | 0.1985 | 0.0432 |
| 0.194 | 78.0 | 546 | 0.4795 | 0.835 | 0.2522 | 0.8621 | 0.835 | 0.8222 | 0.1981 | 0.0432 |
| 0.194 | 79.0 | 553 | 0.4798 | 0.835 | 0.2522 | 0.8626 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| 0.194 | 80.0 | 560 | 0.4796 | 0.835 | 0.2521 | 0.8630 | 0.835 | 0.8222 | 0.1984 | 0.0433 |
| 0.194 | 81.0 | 567 | 0.4797 | 0.835 | 0.2522 | 0.8619 | 0.835 | 0.8222 | 0.1902 | 0.0434 |
| 0.194 | 82.0 | 574 | 0.4797 | 0.835 | 0.2522 | 0.8631 | 0.835 | 0.8222 | 0.1913 | 0.0433 |
| 0.194 | 83.0 | 581 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| 0.194 | 84.0 | 588 | 0.4797 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1909 | 0.0433 |
| 0.194 | 85.0 | 595 | 0.4797 | 0.835 | 0.2522 | 0.8624 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 86.0 | 602 | 0.4796 | 0.835 | 0.2522 | 0.8623 | 0.835 | 0.8222 | 0.1830 | 0.0433 |
| 0.194 | 87.0 | 609 | 0.4797 | 0.835 | 0.2522 | 0.8629 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 88.0 | 616 | 0.4797 | 0.835 | 0.2521 | 0.8634 | 0.835 | 0.8222 | 0.1830 | 0.0433 |
| 0.194 | 89.0 | 623 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 90.0 | 630 | 0.4798 | 0.835 | 0.2523 | 0.8627 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 91.0 | 637 | 0.4797 | 0.835 | 0.2522 | 0.8625 | 0.835 | 0.8222 | 0.1909 | 0.0434 |
| 0.194 | 92.0 | 644 | 0.4797 | 0.835 | 0.2522 | 0.8630 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 93.0 | 651 | 0.4798 | 0.835 | 0.2522 | 0.8629 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 94.0 | 658 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 95.0 | 665 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1910 | 0.0434 |
| 0.194 | 96.0 | 672 | 0.4798 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1834 | 0.0435 |
| 0.194 | 97.0 | 679 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 98.0 | 686 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 99.0 | 693 | 0.4797 | 0.835 | 0.2522 | 0.8628 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
| 0.194 | 100.0 | 700 | 0.4797 | 0.835 | 0.2522 | 0.8627 | 0.835 | 0.8222 | 0.1830 | 0.0434 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
FerhatDk/wav2vec2-base_music_speech_both_classification
|
FerhatDk
| 2023-07-10T17:56:34Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-10T17:00:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base_music_speech_both_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_music_speech_both_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0586
- Accuracy: 0.9847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
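A minimal sketch of how this schedule evolves the learning rate (linear warmup, then cosine decay). It mirrors `get_cosine_schedule_with_warmup`; the exact library implementation may differ, and `total_steps=528` is taken from the 8 epochs of 66 steps in the results table:

```python
import math

def lr_at_step(step, base_lr=5e-5, warmup_steps=500, total_steps=528):
    """Linear warmup to base_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(250))  # halfway through warmup: 2.5e-05
```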
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9458 | 1.0 | 66 | 0.8468 | 0.7405 |
| 0.3785 | 2.0 | 132 | 0.2951 | 0.9771 |
| 0.1762 | 3.0 | 198 | 0.2639 | 0.9313 |
| 0.134 | 4.0 | 264 | 0.1084 | 0.9771 |
| 0.0782 | 5.0 | 330 | 0.0877 | 0.9771 |
| 0.0568 | 6.0 | 396 | 0.0912 | 0.9771 |
| 0.0122 | 7.0 | 462 | 0.4056 | 0.9198 |
| 0.059 | 8.0 | 528 | 0.0586 | 0.9847 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
simonestradasch/fake-news-bert-base-spanish-wwm-cased
|
simonestradasch
| 2023-07-10T17:35:24Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T17:29:57Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fake-news-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake-news-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4163
- F1: 0.8558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4947 | 1.0 | 140 | 0.4019 | 0.8137 |
| 0.2068 | 2.0 | 280 | 0.4163 | 0.8558 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grammarly/detexd-roberta-base
|
grammarly
| 2023-07-10T17:34:23Z | 132 | 10 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-21T18:44:55Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
---
# DeTexD-RoBERTa-base delicate text detection
This is a baseline RoBERTa-base model for the delicate text detection task.
* Paper: [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO)
* [GitHub repository](https://github.com/grammarly/detexd)
The labels meaning according to the paper:
- LABEL_0 -> non-delicate (0)
- LABEL_1 -> very low risk (1)
- LABEL_2 -> low risk (2)
- LABEL_3 -> medium risk (3)
- LABEL_4 -> high risk (4)
- LABEL_5 -> very high risk (5)
## Classification example code
Here's a short usage example with the torch library in a binary classification task:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="grammarly/detexd-roberta-base")
def predict_binary_score(text: str):
# get multiclass probability scores
scores = classifier(text, top_k=None)
# convert to a single score by summing the probability scores
# for the higher-index classes
return sum(score['score']
for score in scores
if score['label'] in ('LABEL_3', 'LABEL_4', 'LABEL_5'))
def predict_delicate(text: str, threshold=0.72496545):
return predict_binary_score(text) > threshold
print(predict_delicate("Time flies like an arrow. Fruit flies like a banana."))
```
Expected output:
```
False
```
## Citation Information
```
@inproceedings{chernodub-etal-2023-detexd,
title = "{D}e{T}ex{D}: A Benchmark Dataset for Delicate Text Detection",
author = "Yavnyi, Serhii and Sliusarenko, Oleksii and Razzaghi, Jade and Mo, Yichen and Hovakimyan, Knar and Chernodub, Artem",
booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.woah-1.2",
pages = "14--28",
abstract = "Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained each definitions and corpora and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.",
}
```
|
cagarraz/rl_course_vizdoom_health_gathering_supreme
|
cagarraz
| 2023-07-10T17:23:21Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T17:23:08Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.94 +/- 0.20
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r cagarraz/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The entry point recorded here is a notebook artifact; the standard ViZDoom enjoy
# script in Sample-Factory 2.0 is sf_examples.vizdoom.enjoy_vizdoom
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the recorded entry point is a notebook artifact; the standard ViZDoom
# train script in Sample-Factory 2.0 is sf_examples.vizdoom.train_vizdoom
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
komo-dono/harukatomatsu
|
komo-dono
| 2023-07-10T17:05:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T17:03:49Z |
---
license: openrail
language:
- ja
tags:
- music
Haruka Tomatsu voice model, trained for 600 epochs.
|
opendiffusion/sentimentcheck
|
opendiffusion
| 2023-07-10T16:58:49Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"bert",
"region:us"
] | null | 2023-05-11T18:26:04Z |
# Intro
OpenDiffusion's SentimentCheck is an AI model built on TensorFlow, Keras, and pickled model files. SentimentCheck harnesses deep learning to accurately classify sentiment in text, making it a flexible tool for businesses, researchers, and developers.
## Usage
---
language:
- en
- nl
- de
- fr
- it
- es
license: mit
---
# bert-base-multilingual-uncased-sentiment
This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.
## Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of reviews |
| -------- | ----------------- |
| English | 150k |
| Dutch | 80k |
| German | 137k |
| French | 140k |
| Italian | 72k |
| Spanish | 50k |
## Accuracy
The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
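Made concrete, the off-by-1 metric is simply the fraction of predictions within one star of the human label (an illustrative helper, not the evaluation code behind the numbers below):

```python
def off_by_one_accuracy(predicted_stars, true_stars):
    """Fraction of predictions within one star of the human label."""
    pairs = list(zip(predicted_stars, true_stars))
    return sum(abs(p - t) <= 1 for p, t in pairs) / len(pairs)

print(off_by_one_accuracy([5, 3, 1, 4], [4, 3, 3, 4]))  # 0.75
```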
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------- | ------------------- |
| English  | 67%              | 95%                 |
| Dutch    | 57%              | 93%                 |
| German   | 61%              | 94%                 |
| French   | 59%              | 94%                 |
| Italian  | 59%              | 95%                 |
| Spanish  | 58%              | 95%                 |
|
Buth/fatuh
|
Buth
| 2023-07-10T16:50:46Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"en",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] | null | 2023-07-10T16:48:59Z |
---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
---
|
svalcin/q-FrozenLake-v1-4x4-noSlippery
|
svalcin
| 2023-07-10T16:39:14Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T16:39:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL course notebook setup
model = load_from_hub(repo_id="svalcin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dashan1992/dsl2
|
dashan1992
| 2023-07-10T16:35:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T16:34:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
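In code, this corresponds roughly to the following `BitsAndBytesConfig` (a sketch; field names follow the 🤗 Transformers API, and `bnb_4bit_compute_dtype` defaults to float32 when unset):

```python
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
)
```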
### Framework versions
- PEFT 0.4.0.dev0
|
banden/ppo-LunarLander-v2
|
banden
| 2023-07-10T16:23:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T16:22:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.46 +/- 41.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; verify it in the repo's "Files" tab.
checkpoint = load_from_hub("banden/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
uw-madison/mra-base-4096-8-d3
|
uw-madison
| 2023-07-10T16:12:42Z | 495 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mra",
"fill-mask",
"arxiv:2207.10284",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-23T06:36:15Z |
# MRA
MRA model for masked language modeling (MLM) for sequence length 4096.
## About MRA
The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
*Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield an MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.*
This model was contributed by [novice03](https://huggingface.co/novice03).
The original code can be found [here](https://github.com/mlpen/mra-attention).
|
tyavika/Distilbert-QA-Pytorch-seed
|
tyavika
| 2023-07-10T16:10:13Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-10T12:52:40Z |
---
tags:
- generated_from_trainer
model-index:
- name: Distilbert-QA-Pytorch-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-QA-Pytorch-seed
This model is a fine-tuned version of [tyavika/Distilbert-QA-Pytorch-seed](https://huggingface.co/tyavika/Distilbert-QA-Pytorch-seed) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mitra-mir/setfit_model_labelfaithful_epochs2
|
mitra-mir
| 2023-07-10T15:54:42Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-08T13:16:11Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 22 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 44,
"warmup_steps": 5,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sianadouglas/ensembletest
|
sianadouglas
| 2023-07-10T15:48:14Z | 0 | 0 | null |
[
"en",
"license:other",
"region:us"
] | null | 2023-07-10T15:47:23Z |
---
license: other
language:
- en
---
|
mgmeskill/Pixelcopter-PLE-v0
|
mgmeskill
| 2023-07-10T15:38:32Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T15:26:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.50 +/- 37.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Suryabhan/openai-whisper-large-v2-LORA-colab
|
Suryabhan
| 2023-07-10T15:32:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T15:32:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
tyavika/LR1E4-BS16-Bert_CNN512LSTM256NoBid
|
tyavika
| 2023-07-10T15:31:42Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-09T20:06:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Bert_CNN512LSTM256NoBid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS16-Bert_CNN512LSTM256NoBid
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7267 | 1.0 | 3290 | 1.5092 |
| 1.2394 | 2.0 | 6580 | 1.3933 |
| 0.8348 | 3.0 | 9870 | 1.5591 |
| 0.542 | 4.0 | 13160 | 1.6667 |
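The linear scheduler listed above decays the learning rate from 1e-4 toward zero over training. A rough sketch of that decay (assuming no warmup, consistent with the hyperparameters listed):

```python
def linear_lr(step, total_steps, base_lr=1e-4):
    # Linear decay from base_lr at step 0 down to 0 at total_steps,
    # with no warmup phase (none is listed in the hyperparameters).
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

At the midpoint of training this gives a learning rate of 5e-5.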
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MnLgt/textual_inversion_muir_1_5
|
MnLgt
| 2023-07-10T15:31:36Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T14:16:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - jordandavis/textual_inversion_muir_1_5
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
agercas/speecht5_finetuned_voxpopuli_nl
|
agercas
| 2023-07-10T15:27:22Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-10T09:21:57Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5221 | 4.3 | 1000 | 0.4774 |
| 0.505 | 8.61 | 2000 | 0.4648 |
| 0.4929 | 12.91 | 3000 | 0.4583 |
| 0.4901 | 17.21 | 4000 | 0.4572 |
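The effective batch size above follows directly from the per-device batch size and gradient accumulation — a trivial sanity check, not part of the training code:

```python
def effective_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    # total_train_batch_size = train_batch_size * gradient_accumulation_steps
    # (times the number of devices, assumed to be 1 here).
    return per_device_batch * grad_accum_steps * num_devices
```

With `train_batch_size=4` and `gradient_accumulation_steps=8` this yields the listed total of 32.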
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Trong-Nghia/xlnet-base-cased-detect-dep
|
Trong-Nghia
| 2023-07-10T15:17:37Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-30T13:14:00Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlnet-base-cased-detect-dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-detect-dep
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5555
- Accuracy: 0.744
- F1: 0.8164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 376 | 0.5582 | 0.716 | 0.8065 |
| 0.6188 | 2.0 | 752 | 0.5479 | 0.756 | 0.8232 |
| 0.5835 | 3.0 | 1128 | 0.5306 | 0.758 | 0.8276 |
| 0.5492 | 4.0 | 1504 | 0.5555 | 0.744 | 0.8164 |
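For reference, the accuracy and F1 reported above can be computed from raw predictions as follows — a minimal, self-contained sketch of binary-classification metrics, not the evaluation code used here:

```python
def binary_metrics(y_true, y_pred):
    # Accuracy plus F1 for the positive class (label 1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1
```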
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Birchlabs/llama-13b-stepwise-embeddings
|
Birchlabs
| 2023-07-10T15:17:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-10T13:55:53Z |
---
license: apache-2.0
---
Fine-tuned input (`embed_tokens: Embedding`) and output (`lm_head: Linear`) embeddings layers, for use with [`Birchlabs/llama-13b-stepwise-adapter`](https://huggingface.co/Birchlabs/llama-13b-stepwise-adapter).
Prior to finetuning, we grew the vocabulary of the tokenizer and of the embeddings layers. The new embeddings were average-initialized and needed training, so we trained them; these are the weights from that training.
Ordinarily a QLoRA finetune of an LLM would not finetune `embed_tokens: Embedding` — you would need to get a bit creative, because not only have the dimensions changed, but (to our knowledge) no established way exists to train _adapters_ over `Embedding`s.
Nor would it ordinarily finetune `lm_head: Linear`. This is harder than it sounds (you cannot handle it the same way you adapt the other `Linear` layers), because the dimensions have grown.
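Average-initializing new embedding rows, as described above, can be sketched in a few lines. This is an illustrative reconstruction, not the exact code used for this model:

```python
def grow_embedding(weight, new_vocab_size):
    # weight: list of rows, one per existing token. Each new row is
    # initialized to the mean of all existing rows ("average-initialized"),
    # then trained further as described above.
    dim = len(weight[0])
    mean_row = [sum(row[d] for row in weight) / len(weight) for d in range(dim)]
    return weight + [list(mean_row) for _ in range(new_vocab_size - len(weight))]
```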
|
EleutherAI/pythia-1b-deduped
|
EleutherAI
| 2023-07-10T15:04:31Z | 22,714 | 18 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-14T00:07:42Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
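The token counts above are internally consistent, which is easy to verify with simple arithmetic (nothing model-specific):

```python
steps = 143_000
batch_tokens = 2_097_152  # 2M tokens per step

total_tokens = steps * batch_tokens          # tokens seen during training
checkpoint_interval = 1_000 * batch_tokens   # one checkpoint every 1000 steps

# 299,892,736,000 total tokens; checkpoints every 2,097,152,000 tokens.
```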
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
NICFRU/bart-base-paraphrasing-news
|
NICFRU
| 2023-07-10T15:02:02Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-10T14:46:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-paraphrasing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-paraphrasing
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6617
- Rouge1: 57.7088
- Rouge2: 51.0096
- Rougel: 54.7514
- Rougelsum: 56.3943
- Gen Len: 20.0
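ROUGE-1, reported above, is based on unigram overlap between generated and reference text. A bare-bones sketch of the F-measure variant (whitespace tokenization only — real evaluations use the `rouge_score` package, which also handles stemming):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    # Unigram-overlap F1 between candidate and reference token multisets.
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```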
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.2 | 10 | 0.5263 | 58.2676 | 51.5842 | 55.5057 | 57.1584 | 19.94 |
| No log | 0.4 | 20 | 0.5050 | 56.1604 | 48.7383 | 54.0373 | 55.372 | 20.0 |
| No log | 0.6 | 30 | 0.4674 | 58.0617 | 51.4993 | 56.0368 | 56.9665 | 20.0 |
| No log | 0.8 | 40 | 0.4545 | 57.5375 | 51.0203 | 55.5247 | 56.5761 | 19.94 |
| No log | 1.0 | 50 | 0.4373 | 57.7263 | 50.8021 | 55.0549 | 56.35 | 19.98 |
| No log | 1.2 | 60 | 0.4313 | 57.87 | 50.9904 | 54.9727 | 56.5379 | 19.97 |
| No log | 1.4 | 70 | 0.4855 | 56.5101 | 49.3124 | 54.1572 | 55.0671 | 20.0 |
| No log | 1.6 | 80 | 0.4202 | 56.6535 | 50.0302 | 53.6891 | 55.1016 | 19.96 |
| No log | 1.8 | 90 | 0.4544 | 57.315 | 50.6289 | 54.642 | 55.7326 | 19.95 |
| 0.5858 | 2.0 | 100 | 0.4157 | 56.4569 | 48.8105 | 53.937 | 55.3515 | 20.0 |
| 0.5858 | 2.2 | 110 | 0.4555 | 57.8424 | 51.5966 | 55.6655 | 56.6862 | 20.0 |
| 0.5858 | 2.4 | 120 | 0.4196 | 58.2562 | 51.7596 | 55.5085 | 57.1823 | 19.97 |
| 0.5858 | 2.6 | 130 | 0.4334 | 58.6906 | 51.6106 | 55.6631 | 57.5254 | 19.89 |
| 0.5858 | 2.8 | 140 | 0.4710 | 56.5401 | 49.33 | 53.8792 | 55.0282 | 20.0 |
| 0.5858 | 3.0 | 150 | 0.4357 | 58.2083 | 52.0049 | 55.9938 | 57.1928 | 20.0 |
| 0.5858 | 3.2 | 160 | 0.4735 | 58.8112 | 52.2196 | 56.5004 | 57.7703 | 19.94 |
| 0.5858 | 3.4 | 170 | 0.4428 | 57.6778 | 50.6377 | 54.8752 | 56.4778 | 20.0 |
| 0.5858 | 3.6 | 180 | 0.4983 | 57.4124 | 50.4244 | 54.6163 | 56.0992 | 20.0 |
| 0.5858 | 3.8 | 190 | 0.4620 | 58.0701 | 51.5021 | 55.7222 | 56.8737 | 20.0 |
| 0.2865 | 4.0 | 200 | 0.4502 | 59.1191 | 52.7516 | 56.4389 | 57.7153 | 20.0 |
| 0.2865 | 4.2 | 210 | 0.4805 | 58.9064 | 52.7148 | 56.1058 | 57.6709 | 20.0 |
| 0.2865 | 4.4 | 220 | 0.4755 | 58.6883 | 52.1464 | 55.9164 | 57.3825 | 20.0 |
| 0.2865 | 4.6 | 230 | 0.4524 | 58.9916 | 52.1101 | 56.4116 | 57.9468 | 19.9 |
| 0.2865 | 4.8 | 240 | 0.4726 | 58.9953 | 52.8173 | 56.5846 | 58.0805 | 20.0 |
| 0.2865 | 5.0 | 250 | 0.4841 | 58.1058 | 51.614 | 55.3374 | 56.7617 | 20.0 |
| 0.2865 | 5.2 | 260 | 0.5047 | 58.2785 | 51.1874 | 55.5336 | 56.8795 | 20.0 |
| 0.2865 | 5.4 | 270 | 0.4658 | 57.2753 | 49.6038 | 53.9588 | 55.6038 | 19.91 |
| 0.2865 | 5.6 | 280 | 0.5261 | 58.1691 | 51.5254 | 55.2685 | 56.7787 | 20.0 |
| 0.2865 | 5.8 | 290 | 0.4833 | 57.8088 | 51.2838 | 54.8739 | 56.4374 | 20.0 |
| 0.1668 | 6.0 | 300 | 0.5067 | 58.2021 | 51.3629 | 55.3548 | 56.9093 | 19.99 |
| 0.1668 | 6.2 | 310 | 0.5461 | 58.0327 | 51.4051 | 55.3426 | 56.7923 | 20.0 |
| 0.1668 | 6.4 | 320 | 0.5463 | 58.1027 | 51.3706 | 55.1733 | 56.7923 | 19.9 |
| 0.1668 | 6.6 | 330 | 0.5837 | 57.6284 | 50.8245 | 54.6253 | 56.2127 | 20.0 |
| 0.1668 | 6.8 | 340 | 0.5221 | 58.0869 | 51.5448 | 55.4226 | 56.7532 | 20.0 |
| 0.1668 | 7.0 | 350 | 0.5433 | 58.7676 | 52.0403 | 56.2634 | 57.6441 | 20.0 |
| 0.1668 | 7.2 | 360 | 0.5498 | 57.9172 | 50.9727 | 55.1006 | 56.6018 | 20.0 |
| 0.1668 | 7.4 | 370 | 0.5581 | 57.4669 | 50.698 | 54.6448 | 56.1325 | 20.0 |
| 0.1668 | 7.6 | 380 | 0.5526 | 57.0821 | 50.298 | 54.1635 | 55.8059 | 20.0 |
| 0.1668 | 7.8 | 390 | 0.5548 | 57.5422 | 50.2734 | 54.2446 | 56.1223 | 20.0 |
| 0.1071 | 8.0 | 400 | 0.5620 | 57.4548 | 50.2657 | 54.5094 | 55.9422 | 20.0 |
| 0.1071 | 8.2 | 410 | 0.5772 | 57.4144 | 50.2443 | 54.5173 | 55.9331 | 20.0 |
| 0.1071 | 8.4 | 420 | 0.5857 | 57.2975 | 50.2116 | 54.5918 | 55.9931 | 20.0 |
| 0.1071 | 8.6 | 430 | 0.5827 | 58.4767 | 51.4318 | 55.4792 | 57.1284 | 20.0 |
| 0.1071 | 8.8 | 440 | 0.5728 | 58.4414 | 51.3523 | 55.2838 | 57.202 | 20.0 |
| 0.1071 | 9.0 | 450 | 0.5919 | 58.0499 | 51.3783 | 55.0748 | 56.6939 | 20.0 |
| 0.1071 | 9.2 | 460 | 0.5937 | 57.7604 | 50.845 | 54.8941 | 56.351 | 20.0 |
| 0.1071 | 9.4 | 470 | 0.5970 | 57.3655 | 50.4126 | 54.4522 | 55.7815 | 20.0 |
| 0.1071 | 9.6 | 480 | 0.5911 | 58.203 | 51.0367 | 55.3215 | 56.8485 | 20.0 |
| 0.1071 | 9.8 | 490 | 0.6121 | 58.2898 | 51.2749 | 55.4292 | 57.0241 | 20.0 |
| 0.0718 | 10.0 | 500 | 0.5903 | 58.2487 | 51.3838 | 55.4237 | 56.8863 | 20.0 |
| 0.0718 | 10.2 | 510 | 0.5983 | 58.2681 | 51.0925 | 55.2887 | 56.9562 | 20.0 |
| 0.0718 | 10.4 | 520 | 0.6308 | 57.9797 | 50.7386 | 54.995 | 56.5939 | 20.0 |
| 0.0718 | 10.6 | 530 | 0.6307 | 57.6269 | 50.5515 | 54.446 | 56.1544 | 20.0 |
| 0.0718 | 10.8 | 540 | 0.6173 | 57.9545 | 51.1005 | 54.9406 | 56.5732 | 20.0 |
| 0.0718 | 11.0 | 550 | 0.6322 | 58.3718 | 51.4321 | 55.4241 | 57.1879 | 20.0 |
| 0.0718 | 11.2 | 560 | 0.6027 | 58.6581 | 51.8607 | 55.6436 | 57.32 | 20.0 |
| 0.0718 | 11.4 | 570 | 0.6140 | 58.6476 | 51.7822 | 55.5845 | 57.3018 | 20.0 |
| 0.0718 | 11.6 | 580 | 0.6184 | 59.2454 | 52.4204 | 56.2174 | 57.9278 | 20.0 |
| 0.0718 | 11.8 | 590 | 0.6281 | 59.2945 | 52.8165 | 56.547 | 58.0674 | 20.0 |
| 0.0512 | 12.0 | 600 | 0.6128 | 58.2165 | 51.3689 | 55.37 | 56.8342 | 20.0 |
| 0.0512 | 12.2 | 610 | 0.6482 | 57.9196 | 50.9793 | 55.0883 | 56.6986 | 20.0 |
| 0.0512 | 12.4 | 620 | 0.6267 | 57.4782 | 50.1118 | 54.2802 | 55.8872 | 20.0 |
| 0.0512 | 12.6 | 630 | 0.6198 | 57.457 | 50.4079 | 54.2449 | 55.8118 | 20.0 |
| 0.0512 | 12.8 | 640 | 0.6500 | 57.6903 | 51.0627 | 55.0743 | 56.3025 | 20.0 |
| 0.0512 | 13.0 | 650 | 0.6265 | 57.4394 | 50.9013 | 54.7936 | 56.1688 | 20.0 |
| 0.0512 | 13.2 | 660 | 0.6817 | 58.4345 | 51.7087 | 55.291 | 57.0057 | 20.0 |
| 0.0512 | 13.4 | 670 | 0.6322 | 57.869 | 50.9503 | 54.8937 | 56.5178 | 20.0 |
| 0.0512 | 13.6 | 680 | 0.6424 | 57.8285 | 51.1014 | 55.0072 | 56.5022 | 20.0 |
| 0.0512 | 13.8 | 690 | 0.6668 | 58.7067 | 51.9929 | 55.5044 | 57.1517 | 20.0 |
| 0.0397 | 14.0 | 700 | 0.6537 | 58.8717 | 52.4036 | 55.6521 | 57.4855 | 20.0 |
| 0.0397 | 14.2 | 710 | 0.6463 | 58.9623 | 52.4749 | 55.8145 | 57.8095 | 20.0 |
| 0.0397 | 14.4 | 720 | 0.6630 | 58.8097 | 52.1997 | 55.8204 | 57.6325 | 20.0 |
| 0.0397 | 14.6 | 730 | 0.6839 | 59.0479 | 52.6573 | 56.0439 | 57.7322 | 20.0 |
| 0.0397 | 14.8 | 740 | 0.6541 | 59.2854 | 52.6109 | 56.1891 | 57.9446 | 20.0 |
| 0.0397 | 15.0 | 750 | 0.6486 | 58.8419 | 52.2004 | 55.8071 | 57.49 | 20.0 |
| 0.0397 | 15.2 | 760 | 0.6578 | 57.6161 | 50.7276 | 54.5514 | 56.2359 | 20.0 |
| 0.0397 | 15.4 | 770 | 0.6673 | 57.5458 | 50.8286 | 54.4597 | 56.1513 | 20.0 |
| 0.0397 | 15.6 | 780 | 0.6624 | 57.6634 | 51.0017 | 54.6769 | 56.3837 | 20.0 |
| 0.0397 | 15.8 | 790 | 0.6469 | 57.9037 | 51.137 | 54.8939 | 56.6427 | 20.0 |
| 0.0301 | 16.0 | 800 | 0.6373 | 57.8696 | 51.0899 | 54.8543 | 56.4596 | 20.0 |
| 0.0301 | 16.2 | 810 | 0.6712 | 58.614 | 52.0052 | 55.6436 | 57.3211 | 20.0 |
| 0.0301 | 16.4 | 820 | 0.6812 | 58.5214 | 51.8911 | 55.7447 | 57.2663 | 20.0 |
| 0.0301 | 16.6 | 830 | 0.6716 | 58.5818 | 51.929 | 55.7993 | 57.4064 | 20.0 |
| 0.0301 | 16.8 | 840 | 0.6590 | 57.745 | 51.0481 | 54.8545 | 56.4781 | 20.0 |
| 0.0301 | 17.0 | 850 | 0.6695 | 57.6663 | 50.9646 | 54.7863 | 56.3687 | 20.0 |
| 0.0301 | 17.2 | 860 | 0.6858 | 57.5552 | 51.0436 | 54.7092 | 56.3079 | 20.0 |
| 0.0301 | 17.4 | 870 | 0.6840 | 57.9091 | 51.3823 | 54.8309 | 56.6186 | 20.0 |
| 0.0301 | 17.6 | 880 | 0.6751 | 57.8223 | 51.1688 | 54.7562 | 56.5558 | 20.0 |
| 0.0301 | 17.8 | 890 | 0.6589 | 57.9956 | 51.1425 | 54.9509 | 56.6868 | 20.0 |
| 0.0482 | 18.0 | 900 | 0.6634 | 58.0392 | 51.3121 | 55.0726 | 56.7878 | 20.0 |
| 0.0482 | 18.2 | 910 | 0.6907 | 58.2021 | 51.4548 | 55.1874 | 56.91 | 20.0 |
| 0.0482 | 18.4 | 920 | 0.6977 | 58.1124 | 51.4254 | 55.062 | 56.8412 | 20.0 |
| 0.0482 | 18.6 | 930 | 0.6832 | 58.0776 | 51.3168 | 55.0849 | 56.8226 | 20.0 |
| 0.0482 | 18.8 | 940 | 0.6672 | 57.925 | 51.2475 | 54.9661 | 56.655 | 20.0 |
| 0.0482 | 19.0 | 950 | 0.6582 | 57.9285 | 51.2483 | 54.9744 | 56.6609 | 20.0 |
| 0.0482 | 19.2 | 960 | 0.6575 | 57.9285 | 51.2483 | 54.9744 | 56.6609 | 20.0 |
| 0.0482 | 19.4 | 970 | 0.6619 | 57.8961 | 51.2097 | 54.9475 | 56.6344 | 20.0 |
| 0.0482 | 19.6 | 980 | 0.6658 | 57.8961 | 51.2097 | 54.9475 | 56.6344 | 20.0 |
| 0.0482 | 19.8 | 990 | 0.6635 | 57.7222 | 51.0096 | 54.8166 | 56.4623 | 20.0 |
| 0.0201 | 20.0 | 1000 | 0.6617 | 57.7088 | 51.0096 | 54.7514 | 56.3943 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alienware/layoutlmv3-finetuned-cord_100
|
alienware
| 2023-07-10T15:01:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-09T12:32:12Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: test
args: cord
metrics:
- name: Precision
type: precision
value: 0.9569093610698366
- name: Recall
type: recall
value: 0.9640718562874252
- name: F1
type: f1
value: 0.9604772557792692
- name: Accuracy
type: accuracy
value: 0.9681663837011885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1720
- Precision: 0.9569
- Recall: 0.9641
- F1: 0.9605
- Accuracy: 0.9682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 0.3320 | 0.9011 | 0.9207 | 0.9108 | 0.9253 |
| 0.3502 | 3.12 | 500 | 0.2811 | 0.9281 | 0.9371 | 0.9326 | 0.9427 |
| 0.3502 | 4.69 | 750 | 0.2429 | 0.9210 | 0.9341 | 0.9275 | 0.9435 |
| 0.162 | 6.25 | 1000 | 0.2264 | 0.9385 | 0.9476 | 0.9430 | 0.9542 |
| 0.162 | 7.81 | 1250 | 0.1996 | 0.9373 | 0.9513 | 0.9443 | 0.9601 |
| 0.0971 | 9.38 | 1500 | 0.1686 | 0.9569 | 0.9633 | 0.9601 | 0.9690 |
| 0.0971 | 10.94 | 1750 | 0.1814 | 0.9532 | 0.9603 | 0.9567 | 0.9652 |
| 0.0704 | 12.5 | 2000 | 0.1915 | 0.9539 | 0.9611 | 0.9575 | 0.9656 |
| 0.0704 | 14.06 | 2250 | 0.1833 | 0.9590 | 0.9633 | 0.9612 | 0.9677 |
| 0.0513 | 15.62 | 2500 | 0.1720 | 0.9569 | 0.9641 | 0.9605 | 0.9682 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V10.19
|
SHENMU007
| 2023-07-10T15:01:47Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-07T08:50:51Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NasimB/gpt2-cocnat-mod-datasets-txt-processing
|
NasimB
| 2023-07-10T15:01:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T12:29:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-mod-datasets-txt-processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-mod-datasets-txt-processing
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6848 | 0.3 | 500 | 5.6500 |
| 5.3379 | 0.59 | 1000 | 5.2204 |
| 4.9909 | 0.89 | 1500 | 4.9703 |
| 4.7146 | 1.19 | 2000 | 4.8200 |
| 4.5695 | 1.49 | 2500 | 4.7076 |
| 4.4685 | 1.78 | 3000 | 4.5985 |
| 4.3237 | 2.08 | 3500 | 4.5311 |
| 4.1614 | 2.38 | 4000 | 4.4731 |
| 4.1267 | 2.68 | 4500 | 4.4151 |
| 4.082 | 2.97 | 5000 | 4.3593 |
| 3.8448 | 3.27 | 5500 | 4.3575 |
| 3.8261 | 3.57 | 6000 | 4.3240 |
| 3.8089 | 3.86 | 6500 | 4.2887 |
| 3.6462 | 4.16 | 7000 | 4.2921 |
| 3.5453 | 4.46 | 7500 | 4.2840 |
| 3.529 | 4.76 | 8000 | 4.2688 |
| 3.4926 | 5.05 | 8500 | 4.2683 |
| 3.3463 | 5.35 | 9000 | 4.2715 |
| 3.3453 | 5.65 | 9500 | 4.2702 |
| 3.3408 | 5.95 | 10000 | 4.2694 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rickareo/distilbert-base-uncased-finetuned-emotion
|
rickareo
| 2023-07-10T14:59:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T14:44:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9229910973969778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.923
- F1: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8271 | 1.0 | 250 | 0.3166 | 0.903 | 0.8989 |
| 0.2469 | 2.0 | 500 | 0.2155 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
ericNguyen0132/DepRoBERTa-2ndStage
|
ericNguyen0132
| 2023-07-10T14:56:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T13:42:58Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: DepRoBERTa-2ndStage
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DepRoBERTa-2ndStage
This model is a fine-tuned version of [rafalposwiata/deproberta-large-v1](https://huggingface.co/rafalposwiata/deproberta-large-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6330
- Accuracy: 0.855
- F1: 0.9134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3572 | 0.8617 | 0.9224 |
| 0.4953 | 2.0 | 938 | 0.3593 | 0.8783 | 0.9315 |
| 0.3493 | 3.0 | 1407 | 0.4274 | 0.8483 | 0.9091 |
| 0.313 | 4.0 | 1876 | 0.5488 | 0.8617 | 0.9187 |
| 0.2622 | 5.0 | 2345 | 0.6330 | 0.855 | 0.9134 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pcuenq/falcon-7b-instruct-transformers
|
pcuenq
| 2023-07-10T14:54:25Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"falcon",
"text-generation",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T12:57:31Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
license: apache-2.0
duplicated_from: pcuenq/falcon-7b-instruct
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected]
|
pmpc/de_pipeline
|
pmpc
| 2023-07-10T14:53:50Z | 1 | 0 |
spacy
|
[
"spacy",
"token-classification",
"de",
"model-index",
"region:us"
] |
token-classification
| 2023-07-10T10:51:54Z |
---
tags:
- spacy
- token-classification
language:
- de
model-index:
- name: de_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9573497322
- name: NER Recall
type: recall
value: 0.9567803331
- name: NER F Score
type: f_score
value: 0.9570649479
---
| Feature | Description |
| --- | --- |
| **Name** | `de_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.3,<3.6.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (19 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `AN`, `EUN`, `GRT`, `GS`, `INN`, `LD`, `LDS`, `LIT`, `MRK`, `ORG`, `PER`, `RR`, `RS`, `ST`, `STR`, `UN`, `VO`, `VS`, `VT` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 95.71 |
| `ENTS_P` | 95.73 |
| `ENTS_R` | 95.68 |
| `TRANSFORMER_LOSS` | 11836.63 |
| `NER_LOSS` | 8009.96 |
|
firecoral/ppo-LunarLander-v2
|
firecoral
| 2023-07-10T14:49:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T14:49:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.63 +/- 20.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
lizhuang144/flan-t5-base-factual-sg
|
lizhuang144
| 2023-07-10T14:34:47Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T11:13:16Z |
See details at 'https://github.com/zhuang-li/FACTUAL/tree/main'
|
marsh5/Reinforce-cartpole
|
marsh5
| 2023-07-10T14:31:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T14:31:34Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Birchlabs/llama-13b-stepwise-tokenizer
|
Birchlabs
| 2023-07-10T14:25:54Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-10T13:49:42Z |
---
license: apache-2.0
---
forked from https://huggingface.co/huggyllama/llama-13b/tree/main
This tokenizer supports [`Birchlabs/llama-13b-stepwise-adapter`](https://huggingface.co/Birchlabs/llama-13b-stepwise-adapter).
Adds four new tokens for stepwise reasoning:
```
<|step_start|>
<|step_end|>
<|answer_start|>
<|answer_end|>
```
See [`Birchlabs/llama-13b-stepwise-adapter`](https://huggingface.co/Birchlabs/llama-13b-stepwise-adapter) for details of how all the parts should be used together.
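As a plain-Python illustration (the marker strings come from the list above; the helper name `format_stepwise` is hypothetical, not part of the repository), a stepwise trace might be assembled like this:

```python
STEP_START, STEP_END = "<|step_start|>", "<|step_end|>"
ANSWER_START, ANSWER_END = "<|answer_start|>", "<|answer_end|>"

def format_stepwise(steps, answer):
    """Wrap each reasoning step, then the final answer, in the special markers."""
    body = "".join(f"{STEP_START}{s}{STEP_END}" for s in steps)
    return f"{body}{ANSWER_START}{answer}{ANSWER_END}"

print(format_stepwise(["2 + 2 = 4", "4 * 3 = 12"], "12"))
```

The actual tokenizer maps each of these markers to a single new token id; see the adapter repo for the intended training format.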
|
JennnDexter/textual_inversion
|
JennnDexter
| 2023-07-10T14:24:31Z | 29 | 0 |
diffusers
|
[
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T11:57:47Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - JennnDexter/textual_inversion
These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
|
sannne990/meinahentai
|
sannne990
| 2023-07-10T14:08:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T13:56:40Z |
---
license: creativeml-openrail-m
---
|
iammartian0/speecht5_finetuned_voxpopuli_it
|
iammartian0
| 2023-07-10T14:03:39Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli/it",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-10T11:00:58Z |
---
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli/it
model-index:
- name: speecht5_finetuned_voxpopuli_it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli/it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5467 | 10.58 | 1000 | 0.5003 |
| 0.5182 | 21.16 | 2000 | 0.4882 |
| 0.5046 | 31.75 | 3000 | 0.4857 |
| 0.5013 | 42.33 | 4000 | 0.4855 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dilip-reddy/ppo-LunarLander
|
dilip-reddy
| 2023-07-10T13:57:53Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T13:57:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.69 +/- 17.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
cointegrated/rubert-base-lesha17-punctuation
|
cointegrated
| 2023-07-10T13:56:54Z | 125 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
The model for https://github.com/Lesha17/Punctuation; all credits go to the owner of this repository.
|
JoaoReis/Neuronet
|
JoaoReis
| 2023-07-10T13:45:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T13:29:55Z |
import socket,warnings
try:
socket.setdefaulttimeout(1)
socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(('1.1.1.1', 53))
except socket.error as ex: raise Exception("STOP: No internet. Click '>|' in top right and set 'Internet' switch to on")
import os
iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '')
if iskaggle:
!pip install -Uqq fastai
!pip install -Uqq duckduckgo_search
from duckduckgo_search import ddg_images
from fastcore.all import *
def search_images(term, max_images=200): return L(ddg_images(term, max_results=max_images)).itemgot('image')
urls = search_images(' star fox photos', max_images=1)
urls[0]
from fastdownload import download_url
dest = 'starfox.jpg'
download_url(urls[0], dest, show_progress=False)
from fastai.vision.all import *
im = Image.open(dest)
im.to_thumb(256,256)
download_url(search_images('eva 01', max_images=1)[0], 'forest.jpg', show_progress=False)
Image.open('forest.jpg').to_thumb(256,256)
searches = 'eva 01','star fox'
path = Path('eva 01_or_not')
from time import sleep
for o in searches:
dest = (path/o)
dest.mkdir(exist_ok=True, parents=True)
download_images(dest, urls=search_images(f'{o} photo'))
sleep(10) # Pause between searches to avoid over-loading server
download_images(dest, urls=search_images(f'{o} sun photo'))
sleep(10)
download_images(dest, urls=search_images(f'{o} shade photo'))
sleep(10)
resize_images(path/o, max_size=400, dest=path/o)
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
len(failed)
dls = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
item_tfms=[Resize(192, method='squish')]
).dataloaders(path)
dls.show_batch(max_n=6)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
pred,_,probs = learn.predict(PILImage.create('starfox.jpg'))  # use the image downloaded above; 'bird.jpg' was never fetched
print(f"This is a: {pred}.")
print(f"Probability it's a star fox: {probs[1]:.4f}")  # categories are alphabetical: 'eva 01', 'star fox'
|
PKU-Alignment/beaver-dam-7b
|
PKU-Alignment
| 2023-07-10T13:42:02Z | 1,707 | 6 |
safe-rlhf
|
[
"safe-rlhf",
"pytorch",
"llama",
"beaver",
"safety",
"ai-safety",
"deepspeed",
"rlhf",
"alpaca",
"en",
"dataset:PKU-Alignment/BeaverTails",
"arxiv:2302.13971",
"region:us"
] | null | 2023-07-10T02:57:51Z |
---
datasets:
- PKU-Alignment/BeaverTails
language:
- en
tags:
- beaver
- safety
- llama
- ai-safety
- deepspeed
- rlhf
- alpaca
library_name: safe-rlhf
---
# 🦫 BeaverDam Model Card
## Beaver-Dam-7B
Boasting 7 billion parameters, Beaver-Dam-7B is a powerful QA-Moderation model derived from the Llama-7B base model and trained on the [PKU-Alignment/BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails) Classification Dataset.
Beaver-Dam's key feature is its ability to analyze responses to prompts for toxicity across 14 different categories.
- **Developed by:** [PKU-Alignment Team](https://github.com/PKU-Alignment)
- **Model type:** QA moderation
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)
## Model Sources
- **Repository:** https://github.com/PKU-Alignment/beavertails
- **Web:** https://sites.google.com/view/pku-beavertails
- **Paper:** Coming soon
## Why Choose Beaver-Dam-7B?
Traditional approaches to content moderation in Question-Answering (QA) tasks often gauge the toxicity of a QA pair by examining each utterance individually. This method, while effective to a degree, can inadvertently result in a significant number of user prompts being discarded. If the moderation system perceives them as too harmful, it may prevent the language model from generating appropriate responses, consequently interrupting the user experience and potentially hindering the evolution of a beneficial AI with human-like understanding.
BeaverDam is a shift in the approach to content moderation for QA tasks - a concept we term "QA moderation":

In this paradigm, a QA pair is classified as harmful or benign based on its degree of risk neutrality. Specifically, it assesses the extent to which potential risks in a potentially harmful question can be counteracted by a non-threatening response.
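Conceptually, QA moderation scores the *pair* rather than each utterance on its own. A toy sketch of that idea (pure Python, purely illustrative — the function and its thresholds are invented here and are not the BeaverDam model):

```python
# Toy illustration of QA moderation: a (question, answer) pair is flagged only
# when the answer fails to neutralize the question's risk, rather than flagging
# the question in isolation.
def moderate_pair(question_risk: float, answer_neutralization: float, threshold: float = 0.5) -> str:
    """Return 'harmful' when the residual risk of the pair exceeds the threshold."""
    residual = max(0.0, question_risk - answer_neutralization)
    return "harmful" if residual > threshold else "benign"

print(moderate_pair(0.9, 0.8))  # risky question defused by a safe answer -> benign
print(moderate_pair(0.9, 0.1))  # risky question left unaddressed -> harmful
```

The real model instead produces per-category toxicity judgments across the 14 harm categories for the full QA pair.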
|
WALIDALI/bekiamzrev
|
WALIDALI
| 2023-07-10T13:39:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T13:33:42Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekiamzrev Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
skrl/IsaacGymEnvs-FrankaCabinet-PPO
|
skrl
| 2023-07-10T13:39:14Z | 0 | 0 |
skrl
|
[
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-24T20:43:14Z |
---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 3368.97 +/- 117.64
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-FrankaCabinet
type: IsaacGymEnvs-FrankaCabinet
---
<!-- ---
torch: 3250.18 +/- 126.12
jax: 3368.97 +/- 117.64
numpy: 3118.77 +/- 140.06
--- -->
# IsaacGymEnvs-FrankaCabinet-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** FrankaCabinet
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FrankaCabinet-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-FrankaCabinet-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.resources.schedulers.torch import KLAdaptiveRL

# assumes `env` and `device` are already defined
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 8 # 16 * 4096 / 8192
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 5e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 2.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.01
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
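The `KLAdaptiveRL` scheduler configured above adjusts the learning rate from the observed KL divergence between the old and new policies. A minimal standalone sketch of that rule follows; the factor constants and bounds are assumptions mirroring common defaults, not values read from this checkpoint:

```python
# Hedged sketch of a KL-adaptive learning-rate rule (constants are assumptions).
def kl_adaptive_lr(lr: float, kl: float, kl_threshold: float = 0.008,
                   kl_factor: float = 2.0, lr_factor: float = 1.5,
                   min_lr: float = 1e-6, max_lr: float = 1e-2) -> float:
    """Shrink the LR when the policy moves too fast (high KL); grow it when updates are timid."""
    if kl > kl_threshold * kl_factor:    # policy changed too much -> slow down
        return max(lr / lr_factor, min_lr)
    if kl < kl_threshold / kl_factor:    # policy barely moved -> speed up
        return min(lr * lr_factor, max_lr)
    return lr                            # within the comfort band -> keep LR

print(kl_adaptive_lr(5e-4, kl=0.05))   # KL above band -> LR reduced
print(kl_adaptive_lr(5e-4, kl=0.001))  # KL below band -> LR increased
```

This keeps PPO updates near the `kl_threshold` of 0.008 set in the configuration above, trading off step size against policy stability.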
|
Winmodel/Taxi-v3
|
Winmodel
| 2023-07-10T13:32:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T13:32:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities
import gymnasium as gym

model = load_from_hub(repo_id="Winmodel/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
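Once loaded, the pickled model is a dictionary whose Q-table maps each state to per-action values (the field layout here is an assumption based on the course format). Acting greedily is then just an argmax over the state's row, which can be sketched without any environment:

```python
# Hedged sketch: greedy action selection from a tabular Q-function.
# Taxi-v3 has 500 discrete states and 6 actions; a 1-state toy table is used here.

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# toy 1-state, 6-action table for illustration (not the trained weights)
toy_qtable = [[0.1, 7.54, -1.0, 0.0, 2.3, 0.5]]
print(greedy_action(toy_qtable, state=0))  # -> 1, the highest-valued action
```

With the real checkpoint, the same function would be called with `model`'s Q-table and the state returned by `env.reset()` / `env.step()`.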
|
JesseJr/distilbert-base-uncased-finetuned-cola
|
JesseJr
| 2023-07-10T13:32:08Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T13:27:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: JesseJr/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JesseJr/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2071
- Validation Loss: 0.5352
- Train Matthews Correlation: 0.5089
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5299 | 0.4596 | 0.4739 | 0 |
| 0.3386 | 0.4643 | 0.5152 | 1 |
| 0.2071 | 0.5352 | 0.5089 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AladarMezga/detr-resnet-50_finetuned_cppe5
|
AladarMezga
| 2023-07-10T13:26:52Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-10T12:06:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|