Dataset columns:

| Column | Type | Range / Cardinality |
|:---|:---|:---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-28 00:48:09 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 534 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-28 00:47:12 |
| card | string | length 11 – 1.01M |
**ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.9**
- author: ayanban011
- last_modified: 2023-07-13T17:42:01Z
- downloads: 165
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: image-classification
- createdAt: 2023-07-13T15:27:45Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.9
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.9

This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5536
- Accuracy: 0.82
- Brier Loss: 0.2571
- Nll: 1.4560
- F1 Micro: 0.82
- F1 Macro: 0.7994
- Ece: 0.1404
- Aurc: 0.0578

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 2.0125 | 0.23 | 0.8650 | 4.4951 | 0.23 | 0.1799 | 0.2806 | 0.7660 |
| No log | 2.0 | 50 | 1.2756 | 0.555 | 0.5948 | 2.6781 | 0.555 | 0.4537 | 0.2800 | 0.2519 |
| No log | 3.0 | 75 | 0.9515 | 0.685 | 0.4392 | 1.9416 | 0.685 | 0.5937 | 0.2067 | 0.1288 |
| No log | 4.0 | 100 | 0.7861 | 0.72 | 0.3622 | 1.5125 | 0.72 | 0.6675 | 0.2050 | 0.0961 |
| No log | 5.0 | 125 | 0.7551 | 0.77 | 0.3362 | 1.5478 | 0.7700 | 0.7318 | 0.2043 | 0.0838 |
| No log | 6.0 | 150 | 0.8056 | 0.77 | 0.3525 | 1.4305 | 0.7700 | 0.7589 | 0.1943 | 0.0891 |
| No log | 7.0 | 175 | 0.7942 | 0.775 | 0.3310 | 1.8237 | 0.775 | 0.7454 | 0.1812 | 0.0924 |
| No log | 8.0 | 200 | 0.7735 | 0.77 | 0.3384 | 1.5161 | 0.7700 | 0.7530 | 0.1987 | 0.0931 |
| No log | 9.0 | 225 | 0.6992 | 0.79 | 0.3025 | 1.5664 | 0.79 | 0.7777 | 0.1631 | 0.0774 |
| No log | 10.0 | 250 | 0.6753 | 0.8 | 0.2955 | 1.5189 | 0.8000 | 0.7900 | 0.1654 | 0.0633 |
| No log | 11.0 | 275 | 0.7701 | 0.805 | 0.3018 | 1.4787 | 0.805 | 0.7932 | 0.1581 | 0.0881 |
| No log | 12.0 | 300 | 0.7164 | 0.79 | 0.3292 | 1.3527 | 0.79 | 0.7892 | 0.1946 | 0.0871 |
| No log | 13.0 | 325 | 0.6376 | 0.8 | 0.2901 | 1.4953 | 0.8000 | 0.7824 | 0.1770 | 0.0659 |
| No log | 14.0 | 350 | 0.7319 | 0.77 | 0.3247 | 1.6062 | 0.7700 | 0.7424 | 0.1803 | 0.0816 |
| No log | 15.0 | 375 | 0.5749 | 0.805 | 0.2738 | 0.8483 | 0.805 | 0.8010 | 0.1569 | 0.0647 |
| No log | 16.0 | 400 | 0.6879 | 0.775 | 0.3085 | 1.3379 | 0.775 | 0.7759 | 0.1909 | 0.0730 |
| No log | 17.0 | 425 | 0.5094 | 0.85 | 0.2241 | 1.4391 | 0.85 | 0.8360 | 0.1589 | 0.0441 |
| No log | 18.0 | 450 | 0.6826 | 0.8 | 0.3015 | 1.6933 | 0.8000 | 0.7969 | 0.1651 | 0.0792 |
| No log | 19.0 | 475 | 0.5677 | 0.825 | 0.2622 | 1.5426 | 0.825 | 0.8051 | 0.1600 | 0.0515 |
| 0.4493 | 20.0 | 500 | 0.5156 | 0.85 | 0.2312 | 1.5882 | 0.85 | 0.8471 | 0.1466 | 0.0427 |
| 0.4493 | 21.0 | 525 | 0.5743 | 0.83 | 0.2600 | 1.5702 | 0.83 | 0.8187 | 0.1604 | 0.0540 |
| 0.4493 | 22.0 | 550 | 0.5872 | 0.825 | 0.2712 | 1.6270 | 0.825 | 0.8056 | 0.1687 | 0.0572 |
| 0.4493 | 23.0 | 575 | 0.5770 | 0.81 | 0.2701 | 1.5089 | 0.81 | 0.7969 | 0.1559 | 0.0655 |
| 0.4493 | 24.0 | 600 | 0.5621 | 0.82 | 0.2590 | 1.3500 | 0.82 | 0.8052 | 0.1621 | 0.0587 |
| 0.4493 | 25.0 | 625 | 0.5480 | 0.805 | 0.2518 | 1.2519 | 0.805 | 0.7884 | 0.1483 | 0.0619 |
| 0.4493 | 26.0 | 650 | 0.5555 | 0.81 | 0.2575 | 1.3183 | 0.81 | 0.7926 | 0.1585 | 0.0598 |
| 0.4493 | 27.0 | 675 | 0.5449 | 0.82 | 0.2524 | 1.4400 | 0.82 | 0.8059 | 0.1713 | 0.0579 |
| 0.4493 | 28.0 | 700 | 0.5483 | 0.81 | 0.2545 | 1.4400 | 0.81 | 0.7894 | 0.1450 | 0.0580 |
| 0.4493 | 29.0 | 725 | 0.5448 | 0.81 | 0.2524 | 1.3070 | 0.81 | 0.7931 | 0.1447 | 0.0595 |
| 0.4493 | 30.0 | 750 | 0.5476 | 0.815 | 0.2538 | 1.3101 | 0.815 | 0.7982 | 0.1536 | 0.0582 |
| 0.4493 | 31.0 | 775 | 0.5433 | 0.82 | 0.2529 | 1.3812 | 0.82 | 0.8011 | 0.1637 | 0.0575 |
| 0.4493 | 32.0 | 800 | 0.5469 | 0.805 | 0.2528 | 1.2973 | 0.805 | 0.7905 | 0.1668 | 0.0600 |
| 0.4493 | 33.0 | 825 | 0.5443 | 0.815 | 0.2525 | 1.3020 | 0.815 | 0.7933 | 0.1768 | 0.0579 |
| 0.4493 | 34.0 | 850 | 0.5442 | 0.82 | 0.2521 | 1.3234 | 0.82 | 0.8011 | 0.1555 | 0.0580 |
| 0.4493 | 35.0 | 875 | 0.5434 | 0.82 | 0.2531 | 1.4362 | 0.82 | 0.8011 | 0.1430 | 0.0564 |
| 0.4493 | 36.0 | 900 | 0.5469 | 0.815 | 0.2534 | 1.3075 | 0.815 | 0.7933 | 0.1590 | 0.0578 |
| 0.4493 | 37.0 | 925 | 0.5468 | 0.815 | 0.2546 | 1.3204 | 0.815 | 0.7933 | 0.1623 | 0.0567 |
| 0.4493 | 38.0 | 950 | 0.5473 | 0.815 | 0.2540 | 1.3722 | 0.815 | 0.7933 | 0.1514 | 0.0582 |
| 0.4493 | 39.0 | 975 | 0.5453 | 0.82 | 0.2532 | 1.3874 | 0.82 | 0.8011 | 0.1751 | 0.0568 |
| 0.0581 | 40.0 | 1000 | 0.5475 | 0.815 | 0.2543 | 1.3116 | 0.815 | 0.7933 | 0.1654 | 0.0573 |
| 0.0581 | 41.0 | 1025 | 0.5452 | 0.815 | 0.2533 | 1.4421 | 0.815 | 0.7933 | 0.1459 | 0.0579 |
| 0.0581 | 42.0 | 1050 | 0.5467 | 0.815 | 0.2538 | 1.3730 | 0.815 | 0.7933 | 0.1642 | 0.0576 |
| 0.0581 | 43.0 | 1075 | 0.5478 | 0.815 | 0.2544 | 1.3086 | 0.815 | 0.7933 | 0.1657 | 0.0581 |
| 0.0581 | 44.0 | 1100 | 0.5482 | 0.815 | 0.2545 | 1.3744 | 0.815 | 0.7933 | 0.1629 | 0.0583 |
| 0.0581 | 45.0 | 1125 | 0.5493 | 0.815 | 0.2550 | 1.3676 | 0.815 | 0.7933 | 0.1638 | 0.0594 |
| 0.0581 | 46.0 | 1150 | 0.5478 | 0.82 | 0.2547 | 1.4645 | 0.82 | 0.8011 | 0.1631 | 0.0572 |
| 0.0581 | 47.0 | 1175 | 0.5487 | 0.815 | 0.2547 | 1.3795 | 0.815 | 0.7933 | 0.1634 | 0.0577 |
| 0.0581 | 48.0 | 1200 | 0.5471 | 0.825 | 0.2546 | 1.4421 | 0.825 | 0.8067 | 0.1436 | 0.0564 |
| 0.0581 | 49.0 | 1225 | 0.5489 | 0.815 | 0.2547 | 1.3676 | 0.815 | 0.7933 | 0.1663 | 0.0578 |
| 0.0581 | 50.0 | 1250 | 0.5482 | 0.82 | 0.2549 | 1.4346 | 0.82 | 0.7990 | 0.1481 | 0.0574 |
| 0.0581 | 51.0 | 1275 | 0.5472 | 0.82 | 0.2540 | 1.5012 | 0.82 | 0.8011 | 0.1565 | 0.0569 |
| 0.0581 | 52.0 | 1300 | 0.5489 | 0.825 | 0.2553 | 1.4351 | 0.825 | 0.8051 | 0.1608 | 0.0576 |
| 0.0581 | 53.0 | 1325 | 0.5486 | 0.815 | 0.2549 | 1.3799 | 0.815 | 0.7933 | 0.1483 | 0.0573 |
| 0.0581 | 54.0 | 1350 | 0.5498 | 0.815 | 0.2552 | 1.4434 | 0.815 | 0.7933 | 0.1542 | 0.0578 |
| 0.0581 | 55.0 | 1375 | 0.5508 | 0.82 | 0.2559 | 1.4394 | 0.82 | 0.7994 | 0.1562 | 0.0576 |
| 0.0581 | 56.0 | 1400 | 0.5492 | 0.825 | 0.2552 | 1.4368 | 0.825 | 0.8051 | 0.1483 | 0.0572 |
| 0.0581 | 57.0 | 1425 | 0.5501 | 0.815 | 0.2552 | 1.3874 | 0.815 | 0.7933 | 0.1390 | 0.0579 |
| 0.0581 | 58.0 | 1450 | 0.5497 | 0.82 | 0.2553 | 1.4365 | 0.82 | 0.7994 | 0.1437 | 0.0579 |
| 0.0581 | 59.0 | 1475 | 0.5507 | 0.82 | 0.2557 | 1.4343 | 0.82 | 0.7994 | 0.1389 | 0.0584 |
| 0.056 | 60.0 | 1500 | 0.5501 | 0.825 | 0.2555 | 1.4410 | 0.825 | 0.8051 | 0.1585 | 0.0583 |
| 0.056 | 61.0 | 1525 | 0.5510 | 0.82 | 0.2559 | 1.4380 | 0.82 | 0.7994 | 0.1395 | 0.0578 |
| 0.056 | 62.0 | 1550 | 0.5510 | 0.82 | 0.2558 | 1.4421 | 0.82 | 0.7994 | 0.1441 | 0.0573 |
| 0.056 | 63.0 | 1575 | 0.5508 | 0.82 | 0.2559 | 1.4369 | 0.82 | 0.7994 | 0.1395 | 0.0575 |
| 0.056 | 64.0 | 1600 | 0.5514 | 0.82 | 0.2560 | 1.4410 | 0.82 | 0.7994 | 0.1393 | 0.0579 |
| 0.056 | 65.0 | 1625 | 0.5519 | 0.825 | 0.2563 | 1.4544 | 0.825 | 0.8051 | 0.1427 | 0.0575 |
| 0.056 | 66.0 | 1650 | 0.5510 | 0.82 | 0.2560 | 1.4400 | 0.82 | 0.7994 | 0.1391 | 0.0576 |
| 0.056 | 67.0 | 1675 | 0.5520 | 0.825 | 0.2563 | 1.4396 | 0.825 | 0.8051 | 0.1422 | 0.0580 |
| 0.056 | 68.0 | 1700 | 0.5516 | 0.82 | 0.2561 | 1.4412 | 0.82 | 0.7994 | 0.1394 | 0.0580 |
| 0.056 | 69.0 | 1725 | 0.5512 | 0.82 | 0.2560 | 1.4433 | 0.82 | 0.7994 | 0.1393 | 0.0577 |
| 0.056 | 70.0 | 1750 | 0.5515 | 0.82 | 0.2561 | 1.4418 | 0.82 | 0.7994 | 0.1391 | 0.0576 |
| 0.056 | 71.0 | 1775 | 0.5517 | 0.82 | 0.2562 | 1.4448 | 0.82 | 0.7994 | 0.1449 | 0.0581 |
| 0.056 | 72.0 | 1800 | 0.5524 | 0.825 | 0.2566 | 1.4421 | 0.825 | 0.8051 | 0.1437 | 0.0579 |
| 0.056 | 73.0 | 1825 | 0.5518 | 0.82 | 0.2562 | 1.4403 | 0.82 | 0.7994 | 0.1469 | 0.0576 |
| 0.056 | 74.0 | 1850 | 0.5529 | 0.825 | 0.2568 | 1.4450 | 0.825 | 0.8051 | 0.1434 | 0.0580 |
| 0.056 | 75.0 | 1875 | 0.5528 | 0.82 | 0.2566 | 1.4475 | 0.82 | 0.7994 | 0.1447 | 0.0585 |
| 0.056 | 76.0 | 1900 | 0.5529 | 0.82 | 0.2568 | 1.4463 | 0.82 | 0.7994 | 0.1447 | 0.0578 |
| 0.056 | 77.0 | 1925 | 0.5528 | 0.82 | 0.2567 | 1.4469 | 0.82 | 0.7994 | 0.1401 | 0.0577 |
| 0.056 | 78.0 | 1950 | 0.5525 | 0.82 | 0.2565 | 1.4506 | 0.82 | 0.7994 | 0.1444 | 0.0576 |
| 0.056 | 79.0 | 1975 | 0.5527 | 0.825 | 0.2567 | 1.4479 | 0.825 | 0.8051 | 0.1423 | 0.0576 |
| 0.0559 | 80.0 | 2000 | 0.5530 | 0.825 | 0.2568 | 1.4429 | 0.825 | 0.8051 | 0.1423 | 0.0578 |
| 0.0559 | 81.0 | 2025 | 0.5529 | 0.825 | 0.2567 | 1.4489 | 0.825 | 0.8051 | 0.1422 | 0.0581 |
| 0.0559 | 82.0 | 2050 | 0.5529 | 0.82 | 0.2568 | 1.4550 | 0.82 | 0.7994 | 0.1401 | 0.0576 |
| 0.0559 | 83.0 | 2075 | 0.5534 | 0.82 | 0.2570 | 1.4458 | 0.82 | 0.7994 | 0.1399 | 0.0580 |
| 0.0559 | 84.0 | 2100 | 0.5530 | 0.82 | 0.2568 | 1.4497 | 0.82 | 0.7994 | 0.1399 | 0.0577 |
| 0.0559 | 85.0 | 2125 | 0.5533 | 0.82 | 0.2570 | 1.4507 | 0.82 | 0.7994 | 0.1401 | 0.0577 |
| 0.0559 | 86.0 | 2150 | 0.5531 | 0.825 | 0.2568 | 1.4515 | 0.825 | 0.8051 | 0.1428 | 0.0577 |
| 0.0559 | 87.0 | 2175 | 0.5534 | 0.82 | 0.2569 | 1.4503 | 0.82 | 0.7994 | 0.1404 | 0.0577 |
| 0.0559 | 88.0 | 2200 | 0.5534 | 0.82 | 0.2569 | 1.4532 | 0.82 | 0.7994 | 0.1399 | 0.0581 |
| 0.0559 | 89.0 | 2225 | 0.5533 | 0.825 | 0.2569 | 1.4499 | 0.825 | 0.8051 | 0.1423 | 0.0578 |
| 0.0559 | 90.0 | 2250 | 0.5534 | 0.82 | 0.2570 | 1.4517 | 0.82 | 0.7994 | 0.1404 | 0.0577 |
| 0.0559 | 91.0 | 2275 | 0.5533 | 0.82 | 0.2569 | 1.4526 | 0.82 | 0.7994 | 0.1405 | 0.0579 |
| 0.0559 | 92.0 | 2300 | 0.5534 | 0.825 | 0.2570 | 1.4533 | 0.825 | 0.8051 | 0.1424 | 0.0577 |
| 0.0559 | 93.0 | 2325 | 0.5535 | 0.82 | 0.2570 | 1.4527 | 0.82 | 0.7994 | 0.1399 | 0.0580 |
| 0.0559 | 94.0 | 2350 | 0.5536 | 0.82 | 0.2571 | 1.4533 | 0.82 | 0.7994 | 0.1404 | 0.0577 |
| 0.0559 | 95.0 | 2375 | 0.5536 | 0.82 | 0.2571 | 1.4547 | 0.82 | 0.7994 | 0.1400 | 0.0579 |
| 0.0559 | 96.0 | 2400 | 0.5535 | 0.82 | 0.2570 | 1.4567 | 0.82 | 0.7994 | 0.1400 | 0.0578 |
| 0.0559 | 97.0 | 2425 | 0.5536 | 0.82 | 0.2571 | 1.4523 | 0.82 | 0.7994 | 0.1404 | 0.0579 |
| 0.0559 | 98.0 | 2450 | 0.5536 | 0.82 | 0.2571 | 1.4570 | 0.82 | 0.7994 | 0.1404 | 0.0578 |
| 0.0559 | 99.0 | 2475 | 0.5536 | 0.82 | 0.2571 | 1.4570 | 0.82 | 0.7994 | 0.1404 | 0.0578 |
| 0.0559 | 100.0 | 2500 | 0.5536 | 0.82 | 0.2571 | 1.4560 | 0.82 | 0.7994 | 0.1404 | 0.0578 |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
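The hyperparameters above imply a linear warmup/decay schedule: the Step column shows 25 optimizer steps per epoch, so 100 epochs give 2500 steps, and `lr_scheduler_warmup_ratio: 0.1` gives 250 warmup steps. A minimal sketch of that schedule (assuming the usual `linear` scheduler semantics: ramp from 0 up to the peak learning rate over the warmup steps, then decay linearly to 0 by the final step):

```python
def linear_schedule_lr(step, total_steps=2500, warmup_steps=250, peak_lr=1e-4):
    """Learning rate at a given optimizer step for a linear warmup/decay schedule."""
    if step < warmup_steps:
        # Linear warmup from 0 to peak_lr over the first warmup_steps steps.
        return peak_lr * step / warmup_steps
    # Linear decay from peak_lr down to 0 at total_steps.
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_lr(250))   # peak LR, reached at the end of warmup
print(linear_schedule_lr(2500))  # schedule ends at 0
```

The peak of 1e-4 here matches the card's `learning_rate`; the step counts are read off the training-results table rather than stated in the card itself.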
**koruni/charsembeds**
- author: koruni
- last_modified: 2023-07-13T17:34:37Z
- downloads: 0
- likes: 0
- library_name: null
- tags: [ "license:creativeml-openrail-m", "region:us" ]
- pipeline_tag: null
- createdAt: 2023-07-13T17:30:36Z
---
license: creativeml-openrail-m
---
**ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7**
- author: ayanban011
- last_modified: 2023-07-13T17:33:13Z
- downloads: 165
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: image-classification
- createdAt: 2023-07-13T15:25:23Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7

This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4925
- Accuracy: 0.845
- Brier Loss: 0.2526
- Nll: 1.5547
- F1 Micro: 0.845
- F1 Macro: 0.8258
- Ece: 0.1785
- Aurc: 0.0736

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.8463 | 0.245 | 0.8631 | 4.7256 | 0.245 | 0.2002 | 0.2955 | 0.7640 |
| No log | 2.0 | 50 | 1.1593 | 0.535 | 0.5972 | 2.7208 | 0.535 | 0.4319 | 0.2539 | 0.2591 |
| No log | 3.0 | 75 | 0.9039 | 0.67 | 0.4555 | 2.3747 | 0.67 | 0.5677 | 0.2448 | 0.1349 |
| No log | 4.0 | 100 | 0.7631 | 0.73 | 0.3757 | 1.5518 | 0.7300 | 0.7026 | 0.1947 | 0.0987 |
| No log | 5.0 | 125 | 0.7412 | 0.775 | 0.3497 | 1.4677 | 0.775 | 0.7456 | 0.2239 | 0.0892 |
| No log | 6.0 | 150 | 0.9198 | 0.72 | 0.3977 | 1.7618 | 0.72 | 0.6958 | 0.2190 | 0.1118 |
| No log | 7.0 | 175 | 0.6117 | 0.81 | 0.2969 | 1.2112 | 0.81 | 0.7726 | 0.2244 | 0.0661 |
| No log | 8.0 | 200 | 0.6296 | 0.78 | 0.3090 | 1.3439 | 0.78 | 0.7443 | 0.1959 | 0.0771 |
| No log | 9.0 | 225 | 0.6850 | 0.785 | 0.3187 | 1.6325 | 0.785 | 0.7651 | 0.2194 | 0.0986 |
| No log | 10.0 | 250 | 0.6304 | 0.79 | 0.3111 | 1.3598 | 0.79 | 0.7821 | 0.2106 | 0.0838 |
| No log | 11.0 | 275 | 0.6668 | 0.775 | 0.3242 | 1.9754 | 0.775 | 0.6942 | 0.2005 | 0.0947 |
| No log | 12.0 | 300 | 0.6795 | 0.775 | 0.3263 | 1.6182 | 0.775 | 0.7692 | 0.2155 | 0.0875 |
| No log | 13.0 | 325 | 0.5156 | 0.85 | 0.2454 | 0.9647 | 0.85 | 0.8378 | 0.2033 | 0.0515 |
| No log | 14.0 | 350 | 0.5341 | 0.845 | 0.2644 | 1.0410 | 0.845 | 0.8402 | 0.2050 | 0.0503 |
| No log | 15.0 | 375 | 0.4678 | 0.865 | 0.2245 | 0.9232 | 0.865 | 0.8564 | 0.1836 | 0.0363 |
| No log | 16.0 | 400 | 0.5620 | 0.82 | 0.2819 | 1.1475 | 0.82 | 0.7980 | 0.2050 | 0.0710 |
| No log | 17.0 | 425 | 0.5253 | 0.83 | 0.2642 | 0.8809 | 0.83 | 0.8145 | 0.1811 | 0.0723 |
| No log | 18.0 | 450 | 0.6295 | 0.815 | 0.2997 | 1.8144 | 0.815 | 0.8062 | 0.2120 | 0.0636 |
| No log | 19.0 | 475 | 0.5748 | 0.83 | 0.2774 | 1.7900 | 0.83 | 0.8200 | 0.1920 | 0.0506 |
| 0.466 | 20.0 | 500 | 0.4704 | 0.84 | 0.2275 | 0.8869 | 0.8400 | 0.8135 | 0.1882 | 0.0472 |
| 0.466 | 21.0 | 525 | 0.5693 | 0.82 | 0.2820 | 1.3315 | 0.82 | 0.8013 | 0.2011 | 0.0821 |
| 0.466 | 22.0 | 550 | 0.5251 | 0.81 | 0.2677 | 1.2663 | 0.81 | 0.7890 | 0.2037 | 0.0745 |
| 0.466 | 23.0 | 575 | 0.5158 | 0.83 | 0.2638 | 1.2621 | 0.83 | 0.8070 | 0.1927 | 0.0614 |
| 0.466 | 24.0 | 600 | 0.5056 | 0.835 | 0.2590 | 1.5337 | 0.835 | 0.8080 | 0.1887 | 0.0617 |
| 0.466 | 25.0 | 625 | 0.4897 | 0.85 | 0.2476 | 1.4341 | 0.85 | 0.8361 | 0.1870 | 0.0627 |
| 0.466 | 26.0 | 650 | 0.4994 | 0.85 | 0.2556 | 1.5846 | 0.85 | 0.8302 | 0.1965 | 0.0718 |
| 0.466 | 27.0 | 675 | 0.4720 | 0.845 | 0.2406 | 1.3093 | 0.845 | 0.8234 | 0.1873 | 0.0704 |
| 0.466 | 28.0 | 700 | 0.4858 | 0.84 | 0.2486 | 1.4459 | 0.8400 | 0.8192 | 0.1676 | 0.0730 |
| 0.466 | 29.0 | 725 | 0.4908 | 0.84 | 0.2510 | 1.4941 | 0.8400 | 0.8159 | 0.1754 | 0.0717 |
| 0.466 | 30.0 | 750 | 0.4805 | 0.855 | 0.2442 | 1.3279 | 0.855 | 0.8334 | 0.1827 | 0.0667 |
| 0.466 | 31.0 | 775 | 0.4783 | 0.845 | 0.2428 | 1.4150 | 0.845 | 0.8264 | 0.1759 | 0.0660 |
| 0.466 | 32.0 | 800 | 0.4822 | 0.855 | 0.2449 | 1.4848 | 0.855 | 0.8322 | 0.1928 | 0.0702 |
| 0.466 | 33.0 | 825 | 0.4845 | 0.84 | 0.2462 | 1.4925 | 0.8400 | 0.8227 | 0.1837 | 0.0692 |
| 0.466 | 34.0 | 850 | 0.4843 | 0.85 | 0.2466 | 1.4881 | 0.85 | 0.8295 | 0.1752 | 0.0683 |
| 0.466 | 35.0 | 875 | 0.4837 | 0.85 | 0.2464 | 1.4939 | 0.85 | 0.8295 | 0.1842 | 0.0718 |
| 0.466 | 36.0 | 900 | 0.4843 | 0.85 | 0.2467 | 1.4910 | 0.85 | 0.8295 | 0.1950 | 0.0705 |
| 0.466 | 37.0 | 925 | 0.4862 | 0.85 | 0.2479 | 1.4938 | 0.85 | 0.8295 | 0.1871 | 0.0713 |
| 0.466 | 38.0 | 950 | 0.4854 | 0.85 | 0.2478 | 1.4945 | 0.85 | 0.8295 | 0.1859 | 0.0719 |
| 0.466 | 39.0 | 975 | 0.4850 | 0.85 | 0.2471 | 1.4891 | 0.85 | 0.8295 | 0.1855 | 0.0724 |
| 0.0749 | 40.0 | 1000 | 0.4869 | 0.85 | 0.2484 | 1.4967 | 0.85 | 0.8295 | 0.1969 | 0.0718 |
| 0.0749 | 41.0 | 1025 | 0.4857 | 0.85 | 0.2482 | 1.5544 | 0.85 | 0.8295 | 0.1904 | 0.0726 |
| 0.0749 | 42.0 | 1050 | 0.4872 | 0.85 | 0.2487 | 1.5559 | 0.85 | 0.8295 | 0.1877 | 0.0732 |
| 0.0749 | 43.0 | 1075 | 0.4873 | 0.85 | 0.2488 | 1.5534 | 0.85 | 0.8295 | 0.1871 | 0.0723 |
| 0.0749 | 44.0 | 1100 | 0.4870 | 0.85 | 0.2489 | 1.5542 | 0.85 | 0.8295 | 0.1787 | 0.0730 |
| 0.0749 | 45.0 | 1125 | 0.4874 | 0.85 | 0.2490 | 1.5544 | 0.85 | 0.8295 | 0.1867 | 0.0724 |
| 0.0749 | 46.0 | 1150 | 0.4868 | 0.85 | 0.2486 | 1.5531 | 0.85 | 0.8295 | 0.1954 | 0.0723 |
| 0.0749 | 47.0 | 1175 | 0.4879 | 0.85 | 0.2493 | 1.5546 | 0.85 | 0.8295 | 0.1842 | 0.0727 |
| 0.0749 | 48.0 | 1200 | 0.4882 | 0.85 | 0.2495 | 1.5537 | 0.85 | 0.8295 | 0.1864 | 0.0730 |
| 0.0749 | 49.0 | 1225 | 0.4875 | 0.85 | 0.2492 | 1.5537 | 0.85 | 0.8295 | 0.1884 | 0.0727 |
| 0.0749 | 50.0 | 1250 | 0.4880 | 0.85 | 0.2494 | 1.5528 | 0.85 | 0.8295 | 0.1877 | 0.0726 |
| 0.0749 | 51.0 | 1275 | 0.4888 | 0.85 | 0.2499 | 1.5539 | 0.85 | 0.8295 | 0.1754 | 0.0725 |
| 0.0749 | 52.0 | 1300 | 0.4894 | 0.85 | 0.2501 | 1.5540 | 0.85 | 0.8295 | 0.1883 | 0.0736 |
| 0.0749 | 53.0 | 1325 | 0.4889 | 0.85 | 0.2501 | 1.5533 | 0.85 | 0.8295 | 0.1708 | 0.0727 |
| 0.0749 | 54.0 | 1350 | 0.4891 | 0.85 | 0.2500 | 1.5531 | 0.85 | 0.8295 | 0.1785 | 0.0729 |
| 0.0749 | 55.0 | 1375 | 0.4904 | 0.85 | 0.2509 | 1.5541 | 0.85 | 0.8295 | 0.1744 | 0.0730 |
| 0.0749 | 56.0 | 1400 | 0.4903 | 0.85 | 0.2507 | 1.5541 | 0.85 | 0.8295 | 0.1897 | 0.0730 |
| 0.0749 | 57.0 | 1425 | 0.4894 | 0.85 | 0.2503 | 1.5536 | 0.85 | 0.8295 | 0.1792 | 0.0730 |
| 0.0749 | 58.0 | 1450 | 0.4889 | 0.85 | 0.2501 | 1.5531 | 0.85 | 0.8295 | 0.1892 | 0.0730 |
| 0.0749 | 59.0 | 1475 | 0.4907 | 0.85 | 0.2511 | 1.5542 | 0.85 | 0.8295 | 0.1767 | 0.0733 |
| 0.0712 | 60.0 | 1500 | 0.4897 | 0.85 | 0.2506 | 1.5540 | 0.85 | 0.8295 | 0.1813 | 0.0732 |
| 0.0712 | 61.0 | 1525 | 0.4906 | 0.85 | 0.2512 | 1.5545 | 0.85 | 0.8295 | 0.1853 | 0.0733 |
| 0.0712 | 62.0 | 1550 | 0.4905 | 0.85 | 0.2512 | 1.5541 | 0.85 | 0.8295 | 0.1723 | 0.0733 |
| 0.0712 | 63.0 | 1575 | 0.4904 | 0.85 | 0.2512 | 1.5543 | 0.85 | 0.8295 | 0.1817 | 0.0732 |
| 0.0712 | 64.0 | 1600 | 0.4915 | 0.85 | 0.2515 | 1.5544 | 0.85 | 0.8295 | 0.1942 | 0.0736 |
| 0.0712 | 65.0 | 1625 | 0.4898 | 0.85 | 0.2506 | 1.5534 | 0.85 | 0.8295 | 0.1712 | 0.0735 |
| 0.0712 | 66.0 | 1650 | 0.4911 | 0.85 | 0.2516 | 1.5548 | 0.85 | 0.8295 | 0.1824 | 0.0733 |
| 0.0712 | 67.0 | 1675 | 0.4908 | 0.85 | 0.2513 | 1.5546 | 0.85 | 0.8295 | 0.1896 | 0.0734 |
| 0.0712 | 68.0 | 1700 | 0.4911 | 0.85 | 0.2516 | 1.5548 | 0.85 | 0.8295 | 0.1744 | 0.0734 |
| 0.0712 | 69.0 | 1725 | 0.4912 | 0.85 | 0.2516 | 1.5541 | 0.85 | 0.8295 | 0.1726 | 0.0733 |
| 0.0712 | 70.0 | 1750 | 0.4910 | 0.85 | 0.2514 | 1.5543 | 0.85 | 0.8295 | 0.1827 | 0.0736 |
| 0.0712 | 71.0 | 1775 | 0.4918 | 0.85 | 0.2520 | 1.5546 | 0.85 | 0.8295 | 0.1909 | 0.0736 |
| 0.0712 | 72.0 | 1800 | 0.4916 | 0.85 | 0.2519 | 1.5545 | 0.85 | 0.8295 | 0.1830 | 0.0734 |
| 0.0712 | 73.0 | 1825 | 0.4913 | 0.85 | 0.2517 | 1.5540 | 0.85 | 0.8295 | 0.1835 | 0.0733 |
| 0.0712 | 74.0 | 1850 | 0.4918 | 0.85 | 0.2521 | 1.5544 | 0.85 | 0.8295 | 0.1831 | 0.0736 |
| 0.0712 | 75.0 | 1875 | 0.4919 | 0.85 | 0.2521 | 1.5548 | 0.85 | 0.8295 | 0.1829 | 0.0734 |
| 0.0712 | 76.0 | 1900 | 0.4916 | 0.85 | 0.2520 | 1.5547 | 0.85 | 0.8295 | 0.1831 | 0.0733 |
| 0.0712 | 77.0 | 1925 | 0.4919 | 0.85 | 0.2521 | 1.5542 | 0.85 | 0.8295 | 0.1732 | 0.0735 |
| 0.0712 | 78.0 | 1950 | 0.4920 | 0.85 | 0.2521 | 1.5541 | 0.85 | 0.8295 | 0.1831 | 0.0734 |
| 0.0712 | 79.0 | 1975 | 0.4920 | 0.85 | 0.2522 | 1.5544 | 0.85 | 0.8295 | 0.1833 | 0.0734 |
| 0.0712 | 80.0 | 2000 | 0.4922 | 0.845 | 0.2523 | 1.5549 | 0.845 | 0.8258 | 0.1859 | 0.0735 |
| 0.0712 | 81.0 | 2025 | 0.4920 | 0.85 | 0.2522 | 1.5542 | 0.85 | 0.8295 | 0.1830 | 0.0732 |
| 0.0712 | 82.0 | 2050 | 0.4920 | 0.845 | 0.2522 | 1.5549 | 0.845 | 0.8258 | 0.1783 | 0.0734 |
| 0.0712 | 83.0 | 2075 | 0.4922 | 0.85 | 0.2524 | 1.5546 | 0.85 | 0.8295 | 0.1832 | 0.0734 |
| 0.0712 | 84.0 | 2100 | 0.4920 | 0.845 | 0.2522 | 1.5543 | 0.845 | 0.8258 | 0.1784 | 0.0735 |
| 0.0712 | 85.0 | 2125 | 0.4921 | 0.845 | 0.2523 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 86.0 | 2150 | 0.4921 | 0.85 | 0.2523 | 1.5545 | 0.85 | 0.8295 | 0.1836 | 0.0733 |
| 0.0712 | 87.0 | 2175 | 0.4924 | 0.85 | 0.2524 | 1.5547 | 0.85 | 0.8295 | 0.1836 | 0.0734 |
| 0.0712 | 88.0 | 2200 | 0.4925 | 0.845 | 0.2524 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 89.0 | 2225 | 0.4924 | 0.85 | 0.2525 | 1.5548 | 0.85 | 0.8295 | 0.1835 | 0.0734 |
| 0.0712 | 90.0 | 2250 | 0.4921 | 0.845 | 0.2523 | 1.5545 | 0.845 | 0.8258 | 0.1688 | 0.0735 |
| 0.0712 | 91.0 | 2275 | 0.4925 | 0.845 | 0.2525 | 1.5546 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 92.0 | 2300 | 0.4924 | 0.845 | 0.2524 | 1.5546 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 93.0 | 2325 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 94.0 | 2350 | 0.4924 | 0.845 | 0.2525 | 1.5547 | 0.845 | 0.8258 | 0.1786 | 0.0736 |
| 0.0712 | 95.0 | 2375 | 0.4926 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 96.0 | 2400 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 97.0 | 2425 | 0.4925 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0735 |
| 0.0712 | 98.0 | 2450 | 0.4926 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0712 | 99.0 | 2475 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 |
| 0.0711 | 100.0 | 2500 | 0.4925 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0736 |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
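The run names here (`kd_CEKD_t1.5_a0.7`, and `a0.9` for the sibling model above) suggest knowledge distillation with a combined cross-entropy/KD objective at temperature 1.5 and mixing weight alpha. The card does not state the exact loss; the following is only an illustrative sketch of one common formulation (hard-label cross-entropy mixed with a temperature-scaled KL term), not the authors' confirmed implementation:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_ce_loss(student_logits, teacher_logits, label, T=1.5, alpha=0.7):
    """Sketch of a CE+KD objective: alpha * CE(student, hard label)
    + (1 - alpha) * T^2 * KL(teacher || student), both distributions at temperature T."""
    ce = -math.log(softmax(student_logits)[label])
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

loss = kd_ce_loss([2.0, 0.5, -1.0], [1.5, 0.8, -0.5], label=0)
print(loss)  # a positive scalar; shrinks as the student matches teacher and label
```

When teacher and student logits coincide, the KL term vanishes and only the weighted hard-label cross-entropy remains, which is why alpha controls how much the hard labels dominate.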
**grace-pro/xlmr-base-finetuned-hausa-2e-3**
- author: grace-pro
- last_modified: 2023-07-13T17:31:39Z
- downloads: 105
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: token-classification
- createdAt: 2023-07-13T17:03:58Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlmr-base-finetuned-hausa-2e-3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmr-base-finetuned-hausa-2e-3

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2694
- Precision: 0.1719
- Recall: 0.0235
- F1: 0.0414
- Accuracy: 0.9247

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2724 | 1.0 | 1312 | 0.2700 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2754 | 2.0 | 2624 | 0.2689 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2743 | 3.0 | 3936 | 0.2708 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2745 | 4.0 | 5248 | 0.2692 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |
| 0.2713 | 5.0 | 6560 | 0.2694 | 0.1719 | 0.0235 | 0.0414 | 0.9247 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
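The F1 reported in this card is consistent with its precision and recall via the harmonic mean, F1 = 2PR / (P + R). A quick sanity check on the card's numbers (the reported 0.0414 reflects unrounded inputs, so the check uses a tolerance):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0.0 when both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Card values: precision 0.1719, recall 0.0235 -> F1 close to the reported 0.0414
print(f1_score(0.1719, 0.0235))
```

The very low recall at a 92% token accuracy suggests the model predicts the majority (non-entity) class almost everywhere, which the card's flat per-epoch metrics also hint at.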
**matsia/distilbert-base-uncased-finetuned-cola**
- author: matsia
- last_modified: 2023-07-13T17:23:27Z
- downloads: 61
- likes: 0
- library_name: transformers
- tags: [ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: text-classification
- createdAt: 2023-07-13T17:19:41Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: matsia/distilbert-base-uncased-finetuned-cola
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# matsia/distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.1962
- Validation Loss: 0.5175
- Train Matthews Correlation: 0.5387
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5161 | 0.4480 | 0.4716 | 0 |
| 0.3231 | 0.4417 | 0.5414 | 1 |
| 0.1962 | 0.5175 | 0.5387 | 2 |

### Framework versions

- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
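The optimizer config in this card uses Keras `PolynomialDecay` with `power: 1.0`, which reduces to a linear ramp from the initial learning rate down to the end learning rate over `decay_steps`. A pure-Python sketch of what that config computes, mirroring the documented formula `(initial - end) * (1 - step/decay_steps)**power + end` with the step clipped at `decay_steps`:

```python
def polynomial_decay_lr(step, initial_lr=2e-5, end_lr=0.0, decay_steps=1602, power=1.0):
    """Keras-style PolynomialDecay; with power=1.0 this is a linear decay."""
    step = min(step, decay_steps)  # the step is held at decay_steps thereafter
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

print(polynomial_decay_lr(0))     # initial LR at the start
print(polynomial_decay_lr(1602))  # end LR once decay_steps is reached
```

The 1602 decay steps correspond to the 3 epochs of training here, so the schedule reaches 0 exactly at the end of the run.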
**Akshay-123/vit-base-patch16-224-in21k**
- author: Akshay-123
- last_modified: 2023-07-13T17:14:53Z
- downloads: 222
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- pipeline_tag: image-classification
- createdAt: 2023-07-13T16:54:32Z
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: vit-base-patch16-224-in21k
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-in21k

This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7692
- F1: 0.9865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 10 | 1.5877 | 0.6675 |
| No log | 2.0 | 20 | 1.4149 | 0.8402 |
| No log | 3.0 | 30 | 1.2687 | 0.8917 |
| No log | 4.0 | 40 | 1.1382 | 0.9113 |
| No log | 5.0 | 50 | 1.0214 | 0.9523 |
| No log | 6.0 | 60 | 0.9285 | 0.9662 |
| No log | 7.0 | 70 | 0.8601 | 0.9728 |
| No log | 8.0 | 80 | 0.8089 | 0.9797 |
| No log | 9.0 | 90 | 0.7796 | 0.9865 |
| No log | 10.0 | 100 | 0.7692 | 0.9865 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
**Arup-Dutta-Bappy/bert-base-cased-finetuned-squad**
- author: Arup-Dutta-Bappy
- last_modified: 2023-07-13T16:53:32Z
- downloads: 105
- likes: 0
- library_name: transformers
- tags: [ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
- pipeline_tag: question-answering
- createdAt: 2023-07-13T14:36:06Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-cased-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
**NasimB/gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut**
- author: NasimB
- last_modified: 2023-07-13T16:46:31Z
- downloads: 9
- likes: 1
- library_name: transformers
- tags: [ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
- pipeline_tag: text-generation
- createdAt: 2023-07-13T15:02:21Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 4.3120

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7018 | 0.29 | 500 | 5.6444 |
| 5.3406 | 0.58 | 1000 | 5.2034 |
| 4.9891 | 0.88 | 1500 | 4.9570 |
| 4.7257 | 1.17 | 2000 | 4.8069 |
| 4.5644 | 1.46 | 2500 | 4.6833 |
| 4.4557 | 1.75 | 3000 | 4.5769 |
| 4.3292 | 2.04 | 3500 | 4.4986 |
| 4.137 | 2.34 | 4000 | 4.4485 |
| 4.1027 | 2.63 | 4500 | 4.3900 |
| 4.064 | 2.92 | 5000 | 4.3414 |
| 3.8721 | 3.21 | 5500 | 4.3322 |
| 3.8018 | 3.5 | 6000 | 4.3007 |
| 3.7893 | 3.79 | 6500 | 4.2661 |
| 3.6925 | 4.09 | 7000 | 4.2635 |
| 3.5253 | 4.38 | 7500 | 4.2599 |
| 3.5119 | 4.67 | 8000 | 4.2446 |
| 3.506 | 4.96 | 8500 | 4.2295 |
| 3.3528 | 5.25 | 9000 | 4.2434 |
| 3.3251 | 5.55 | 9500 | 4.2431 |
| 3.325 | 5.84 | 10000 | 4.2415 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
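For a causal-LM card like this one, the evaluation loss is a mean per-token cross-entropy in nats, so a common derived quantity is perplexity, exp(loss). Converting the final eval loss above (an illustrative conversion, not a number reported in the card):

```python
import math

eval_loss = 4.3120  # final validation loss from the table above
perplexity = math.exp(eval_loss)  # perplexity = e^(mean token cross-entropy)
print(f"perplexity ≈ {perplexity:.1f}")
```

This puts the model's validation perplexity in the mid-70s, a useful single number when comparing the sibling runs in this author's series.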
grace-pro/afriberta-base-finetuned-hausa-2e-3
grace-pro
2023-07-13T16:45:14Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T16:28:08Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: afriberta-base-finetuned-hausa-2e-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta-base-finetuned-hausa-2e-3 This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2360 - Precision: 0.1719 - Recall: 0.0276 - F1: 0.0476 - Accuracy: 0.9373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2428 | 1.0 | 1312 | 0.2368 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2435 | 2.0 | 2624 | 0.2385 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2428 | 3.0 | 3936 | 0.2371 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2434 | 4.0 | 5248 | 0.2359 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2411 | 5.0 | 6560 | 0.2360 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
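As a quick consistency check (ours, not part of the original card), the reported F1 follows directly from the reported precision and recall:

```python
# Verify the reported F1 from the reported precision and recall.
precision, recall = 0.1719, 0.0276
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.0476, matching the card
```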
Gustavosta/SowlfieModelsRVC
Gustavosta
2023-07-13T16:44:17Z
0
2
null
[ "audio-to-audio", "pt", "en", "license:mit", "region:us" ]
audio-to-audio
2023-07-09T22:53:01Z
--- license: mit language: - pt - en pipeline_tag: audio-to-audio --- # Sowlfie Models RVC Repository with public **models for RVC** that I make. (**Open commissions** on "@lengodev" on Discord) Have suggestions? [Open an issue](https://huggingface.co/Gustavosta/SowlfieModelsRVC/discussions/new)! ## 🎤 RVC Models: | Model | Dataset | Epochs | Language | Sample | |---|:---:|---:|---:|---:| | [Pica-Pau (Woody Woodpecker PT-BR)](https://huggingface.co/Gustavosta/SowlfieModelsRVC/resolve/main/pica-pau-model-rvc-v2.zip) | [8 Minutes, 120 segments](https://drive.google.com/file/d/1t37uofCRrohhPLxcXfJWlfhIU_afwIdM/view?usp=sharing) | 400 Epochs | 🇧🇷 Brazilian Portuguese | [Bolo de morango de cada estado](https://youtu.be/UxmEFyC4R_0) | ## ❓ How to use a model? If you've never used RVC v2 before, I recommend checking out **[this guide](https://docs.google.com/document/d/13_l1bd1Osgz7qlAZn-zhklCbHpVRk6bYOuAuB78qmsE/edit?pli=1)**. To use a model from this repository, you will **need the URL of the `.zip` model** file in the repository and modify the URL, adding "`/resolve/main/`" in the **URL slug before the filename**. Then you can **use the URL in the model download field**. **URL Example**: ``` https://huggingface.co/Gustavosta/SowlfieModelsRVC/resolve/main/model-filename.zip ``` ## ⚖️ Licence: [MIT](https://huggingface.co/models?license=license:mit) Licence --- ⚠️ It's hard work to **build datasets**, **train models** and make them **available for free**. So if you use the model, please **credit the model** under the name of `Sowlfie Models` or `Gustavosta`. Anyway, **thanks for reading this far**! 🤝
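The URL rule above can also be expressed as a tiny helper function (a sketch; the function name is ours, not part of RVC):

```python
def model_download_url(repo_id: str, filename: str) -> str:
    """Build a direct download URL for a file in a Hugging Face repo
    by inserting /resolve/main/ before the filename."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

print(model_download_url("Gustavosta/SowlfieModelsRVC", "pica-pau-model-rvc-v2.zip"))
```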
brunogs/distilbert-base-uncased-finetuned-cola
brunogs
2023-07-13T16:42:33Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T15:53:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: brunogs/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # brunogs/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1860 - Validation Loss: 0.5510 - Train Matthews Correlation: 0.5076 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5165 | 0.4641 | 0.4474 | 0 | | 0.3176 | 0.4989 | 0.5060 | 1 | | 0.1860 | 0.5510 | 0.5076 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
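For reference, the `PolynomialDecay` schedule in the optimizer config above (with `power: 1.0`, i.e. linear decay over 1602 steps) can be sketched in plain Python — an illustrative reimplementation, not the Keras code itself:

```python
def polynomial_decay_lr(step, initial_lr=2e-5, end_lr=0.0, decay_steps=1602, power=1.0):
    # Linear interpolation from initial_lr down to end_lr over decay_steps (power=1.0).
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay_lr(0))     # 2e-05 at the start of training
print(polynomial_decay_lr(1602))  # 0.0 at the end
```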
1aurent/poca-SoccerTwos
1aurent
2023-07-13T16:33:04Z
25
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-07-13T15:40:45Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: 1aurent/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀

VK246/IC_ver6b_coco_swin_gpt2_50Bpc_1e
VK246
2023-07-13T16:16:38Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:coco", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-13T12:40:57Z
--- tags: - generated_from_trainer datasets: - coco metrics: - rouge - bleu model-index: - name: IC_ver6b_coco_swin_gpt2_50Bpc_1e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IC_ver6b_coco_swin_gpt2_50Bpc_1e This model is a fine-tuned version of [VK246/IC_ver6a_coco_swin_gpt2_50Apc_1e](https://huggingface.co/VK246/IC_ver6a_coco_swin_gpt2_50Apc_1e) on the coco dataset. It achieves the following results on the evaluation set: - Loss: 0.8180 - Rouge1: 41.462 - Rouge2: 16.1291 - Rougel: 37.6518 - Rougelsum: 37.6471 - Bleu: 9.9643 - Gen Len: 11.3063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:| | 0.8889 | 0.17 | 500 | 0.8659 | 39.7654 | 14.603 | 36.1709 | 36.1638 | 8.8491 | 11.3063 | | 0.8756 | 0.34 | 1000 | 0.8515 | 40.3678 | 15.2852 | 36.7303 | 36.7188 | 9.3029 | 11.3063 | | 0.862 | 0.51 | 1500 | 0.8388 | 40.7537 | 15.2635 | 37.0523 | 37.0379 | 9.3057 | 11.3063 | | 0.8546 | 0.68 | 2000 | 0.8281 | 40.961 | 15.6192 | 37.1627 | 37.1546 | 9.7453 | 11.3063 | | 0.837 | 0.85 | 2500 | 0.8214 | 41.5703 | 16.1006 | 37.7767 | 37.7654 | 9.9062 | 11.3063 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
FarziBuilder/NeoXAdapter
FarziBuilder
2023-07-13T16:07:15Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-13T16:07:13Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
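For reference, the quantization config above corresponds roughly to the following `BitsAndBytesConfig` (a sketch assuming the `transformers` API; not produced by this training run):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above: 4-bit NF4 quantization
# with double quantization and bfloat16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```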
Shushant/thesis_nepaliGPT
Shushant
2023-07-13T15:56:14Z
152
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "ne", "license:bsd-3-clause-clear", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T09:18:25Z
--- license: bsd-3-clause-clear language: - ne metrics: - perplexity library_name: transformers pipeline_tag: text-generation --- # NepaliGPT: Nepali Language Generative Pretrained Transformer Model This is an experiment in developing a language generation model for the Nepali language: a causal language model that can predict the next possible tokens given a context in Nepali. # Dataset Used A large corpus of 9.3 GB has been collected from different sources on the internet. The sources include: - Nepali books found online. - Nepali news articles from Nepali news portals. - Nepali text collected from different open-source Nepali NLP datasets. # Hyperparameters Used Learning rate -> 2e-5 \ Weight decay -> 0.01 \ Number of training epochs -> 5 \ bf16 -> True \ Base model architecture -> GPT-2 ## Training Results It achieves the following results on the evaluation set: | Training Loss | Validation Loss | Perplexity | |:-------------:|:---------------:|:----------:| | 3.3968 | 3.2705 | 26.3245 |
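As a quick check (ours, not part of the original card), the reported perplexity is simply the exponential of the validation loss:

```python
import math

# Perplexity of a causal LM is exp(cross-entropy loss).
val_loss = 3.2705
perplexity = math.exp(val_loss)
print(round(perplexity, 2))  # 26.32, matching the reported 26.3245 up to rounding
```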
flaviagiammarino/medsam-vit-base
flaviagiammarino
2023-07-13T15:43:56Z
9,672
11
transformers
[ "transformers", "pytorch", "tf", "sam", "mask-generation", "medical", "vision", "arxiv:2304.12306", "license:apache-2.0", "endpoints_compatible", "region:us" ]
mask-generation
2023-07-11T07:37:57Z
--- license: apache-2.0 tags: - medical - vision --- # Model Card for MedSAM MedSAM is a fine-tuned version of [SAM](https://huggingface.co/docs/transformers/main/model_doc/sam) for the medical domain. This repository is based on the paper, code and pre-trained model released by the authors in July 2023. ## Model Description MedSAM was trained on a large-scale medical image segmentation dataset of 1,090,486 image-mask pairs collected from different publicly available sources. The image-mask pairs cover 15 imaging modalities and over 30 cancer types. MedSAM was initialized using the pre-trained SAM model with the ViT-Base backbone. The prompt encoder weights were frozen, while the image encoder and mask decoder weights were updated during training. The training was performed for 100 epochs with a batch size of 160 using the AdamW optimizer with a learning rate of 1e-4 and a weight decay of 0.01. - **Repository:** [MedSAM Official GitHub Repository](https://github.com/bowang-lab/medsam) - **Paper:** [Segment Anything in Medical Images](https://arxiv.org/abs/2304.12306v1) ## Usage ```python import requests import numpy as np import matplotlib.pyplot as plt from PIL import Image from transformers import SamModel, SamProcessor import torch device = "cuda" if torch.cuda.is_available() else "cpu" model = SamModel.from_pretrained("flaviagiammarino/medsam-vit-base").to(device) processor = SamProcessor.from_pretrained("flaviagiammarino/medsam-vit-base") img_url = "https://huggingface.co/flaviagiammarino/medsam-vit-base/resolve/main/scripts/input.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_boxes = [95., 255., 190., 350.]
inputs = processor(raw_image, input_boxes=[[input_boxes]], return_tensors="pt").to(device) outputs = model(**inputs, multimask_output=False) probs = processor.image_processor.post_process_masks(outputs.pred_masks.sigmoid().cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu(), binarize=False) def show_mask(mask, ax, random_color): if random_color: color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0) else: color = np.array([251/255, 252/255, 30/255, 0.6]) h, w = mask.shape[-2:] mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1) ax.imshow(mask_image) def show_box(box, ax): x0, y0 = box[0], box[1] w, h = box[2] - box[0], box[3] - box[1] ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor="blue", facecolor=(0, 0, 0, 0), lw=2)) fig, ax = plt.subplots(1, 2, figsize=(10, 5)) ax[0].imshow(np.array(raw_image)) show_box(input_boxes, ax[0]) ax[0].set_title("Input Image and Bounding Box") ax[0].axis("off") ax[1].imshow(np.array(raw_image)) show_mask(mask=probs[0] > 0.5, ax=ax[1], random_color=False) show_box(input_boxes, ax[1]) ax[1].set_title("MedSAM Segmentation") ax[1].axis("off") plt.show() ``` ![results](scripts/output.png) ## Additional Information ### Licensing Information The authors have released the model code and pre-trained checkpoint under the [Apache License 2.0](https://github.com/bowang-lab/MedSAM/blob/main/LICENSE). ### Citation Information ``` @article{ma2023segment, title={Segment anything in medical images}, author={Ma, Jun and Wang, Bo}, journal={arXiv preprint arXiv:2304.12306}, year={2023} } ```
Faith-nchifor/distilbert-base-uncased-finetuned-cola-2
Faith-nchifor
2023-07-13T15:32:02Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T15:27:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola-2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.1229361555243494 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 4.0843 - Matthews Correlation: 0.1229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 381 | 3.9140 | 0.1059 | | 0.0791 | 2.0 | 762 | 4.4408 | 0.0927 | | 0.0561 | 3.0 | 1143 | 3.5105 | 0.1140 | | 0.041 | 4.0 | 1524 | 4.0843 | 0.1229 | | 0.041 | 5.0 | 1905 | 4.4197 | 0.1194 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
1aurent/rl_course_vizdoom_defend_the_line
1aurent
2023-07-13T15:24:15Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T15:24:07Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_defend_the_line type: doom_defend_the_line metrics: - type: mean_reward value: 20.10 +/- 3.39 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_defend_the_line** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r 1aurent/rl_course_vizdoom_defend_the_line ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_defend_the_line --train_dir=./train_dir --experiment=rl_course_vizdoom_defend_the_line ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_defend_the_line --train_dir=./train_dir --experiment=rl_course_vizdoom_defend_the_line --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
orya16215/ppo-Huggy
orya16215
2023-07-13T15:17:58Z
11
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-13T15:17:55Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: orya16215/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Arthuerwang/output_models_girls
Arthuerwang
2023-07-13T15:13:35Z
0
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T11:35:54Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of gril in anime tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - Arthuerwang/output_models_girls This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of gril in anime" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False.
1aurent/rl_course_vizdoom_health_gathering_supreme
1aurent
2023-07-13T15:02:29Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T15:02:20Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 15.34 +/- 5.14 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r 1aurent/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
chrisjsc96/data
chrisjsc96
2023-07-13T14:52:25Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-07-13T14:31:16Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dead-owwl/falcon7b-ft-haystack
dead-owwl
2023-07-13T14:50:57Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-13T14:45:40Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
T-Systems-onsite/cross-en-es-pt-roberta-sentence-transformer
T-Systems-onsite
2023-07-13T14:26:33Z
18
1
transformers
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "es", "pt", "dataset:stsb_multi_mt", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - en - es - pt license: mit tags: - sentence_embedding datasets: - stsb_multi_mt ---
haris001/code_58rows
haris001
2023-07-13T14:09:58Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-11T18:55:10Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
peft-internal-testing/tiny_OPTForSequenceClassification-lora
peft-internal-testing
2023-07-13T13:48:21Z
25,195
0
peft
[ "peft", "region:us" ]
null
2023-07-13T13:48:20Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
Yntec/DucHaiten-Retro-Diffusers
Yntec
2023-07-13T13:39:06Z
1,798
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Retro", "DucHaiten", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T13:02:56Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - Retro - DucHaiten --- # DucHaiten Retro I don't know about you, but in my opinion this is the best retro model DucHaiten has ever created. It's sad to see it sitting at 0 downloads at huggingface, so here's a Diffusers version you can use with huggingface's pipeline! If you like their content, support them at: https://linktr.ee/Duc_Haiten Original page: https://civitai.com/models/103966?modelVersionId=111392
ayanban011/vit-base_tobacco_bs_16_lr_5e-6_e_300_wr_0.1_wd_0.2
ayanban011
2023-07-13T13:24:29Z
168
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-13T10:51:07Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-base_tobacco_bs_16_lr_5e-6_e_300_wr_0.1_wd_0.2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_tobacco_bs_16_lr_5e-6_e_300_wr_0.1_wd_0.2 This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8461 - Accuracy: 0.775 - Brier Loss: 0.3632 - Nll: 1.4570 - F1 Micro: 0.775 - F1 Macro: 0.7418 - Ece: 0.2043 - Aurc: 0.1066 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 0.96 | 12 | 0.7447 | 0.815 | 0.3078 | 1.1882 | 0.815 | 0.7942 | 0.2385 | 0.0731 | | No log | 2.0 | 25 | 0.7442 | 0.815 | 0.3075 | 1.1872 | 0.815 | 0.7922 | 0.2401 | 0.0736 | | No log | 2.96 | 37 | 0.7439 | 0.815 | 0.3075 | 1.1883 | 0.815 | 0.7942 | 0.2292 | 0.0722 | | No log | 4.0 | 50 | 0.7463 | 0.815 | 0.3083 | 1.1904 | 0.815 | 0.7942 | 0.2454 | 0.0762 | | No log | 4.96 | 62 | 0.7441 | 0.805 | 0.3077 | 1.1886 | 0.805 | 0.7819 | 0.2322 | 0.0731 | | 
No log | 6.0 | 75 | 0.7408 | 0.81 | 0.3064 | 1.1842 | 0.81 | 0.7914 | 0.2217 | 0.0704 | | No log | 6.96 | 87 | 0.7448 | 0.81 | 0.3082 | 1.1852 | 0.81 | 0.7847 | 0.2341 | 0.0748 | | No log | 8.0 | 100 | 0.7454 | 0.815 | 0.3084 | 1.1882 | 0.815 | 0.7942 | 0.2129 | 0.0767 | | No log | 8.96 | 112 | 0.7462 | 0.815 | 0.3080 | 1.1954 | 0.815 | 0.7922 | 0.2535 | 0.0775 | | No log | 10.0 | 125 | 0.7427 | 0.81 | 0.3067 | 1.1924 | 0.81 | 0.7876 | 0.2280 | 0.0767 | | No log | 10.96 | 137 | 0.7420 | 0.815 | 0.3067 | 1.2033 | 0.815 | 0.7942 | 0.2611 | 0.0755 | | No log | 12.0 | 150 | 0.7417 | 0.805 | 0.3063 | 1.1881 | 0.805 | 0.7820 | 0.2456 | 0.0774 | | No log | 12.96 | 162 | 0.7442 | 0.815 | 0.3089 | 1.1895 | 0.815 | 0.8059 | 0.2230 | 0.0768 | | No log | 14.0 | 175 | 0.7398 | 0.805 | 0.3061 | 1.2547 | 0.805 | 0.7843 | 0.2310 | 0.0766 | | No log | 14.96 | 187 | 0.7355 | 0.81 | 0.3046 | 1.1887 | 0.81 | 0.7914 | 0.2328 | 0.0746 | | No log | 16.0 | 200 | 0.7368 | 0.81 | 0.3053 | 1.1894 | 0.81 | 0.7922 | 0.2256 | 0.0774 | | No log | 16.96 | 212 | 0.7355 | 0.81 | 0.3037 | 1.2537 | 0.81 | 0.7947 | 0.2077 | 0.0788 | | No log | 18.0 | 225 | 0.7407 | 0.81 | 0.3065 | 1.1882 | 0.81 | 0.7871 | 0.2421 | 0.0767 | | No log | 18.96 | 237 | 0.7279 | 0.8 | 0.2999 | 1.2540 | 0.8000 | 0.7796 | 0.2159 | 0.0742 | | No log | 20.0 | 250 | 0.7324 | 0.805 | 0.3042 | 1.1811 | 0.805 | 0.7841 | 0.2269 | 0.0763 | | No log | 20.96 | 262 | 0.7421 | 0.805 | 0.3079 | 1.1827 | 0.805 | 0.7850 | 0.2339 | 0.0797 | | No log | 22.0 | 275 | 0.7343 | 0.81 | 0.3050 | 1.1689 | 0.81 | 0.7877 | 0.2223 | 0.0784 | | No log | 22.96 | 287 | 0.7308 | 0.81 | 0.3032 | 1.1901 | 0.81 | 0.7922 | 0.2190 | 0.0774 | | No log | 24.0 | 300 | 0.7381 | 0.805 | 0.3057 | 1.3200 | 0.805 | 0.7853 | 0.2500 | 0.0819 | | No log | 24.96 | 312 | 0.7336 | 0.81 | 0.3042 | 1.3123 | 0.81 | 0.7903 | 0.2082 | 0.0795 | | No log | 26.0 | 325 | 0.7282 | 0.805 | 0.3020 | 1.2465 | 0.805 | 0.7847 | 0.2248 | 0.0792 | | No log | 26.96 | 337 | 0.7346 | 0.81 | 
0.3050 | 1.2538 | 0.81 | 0.7956 | 0.2095 | 0.0818 | | No log | 28.0 | 350 | 0.7305 | 0.805 | 0.3031 | 1.2443 | 0.805 | 0.7850 | 0.2488 | 0.0823 | | No log | 28.96 | 362 | 0.7395 | 0.8 | 0.3071 | 1.3235 | 0.8000 | 0.7818 | 0.2223 | 0.0843 | | No log | 30.0 | 375 | 0.7349 | 0.8 | 0.3058 | 1.2511 | 0.8000 | 0.7733 | 0.2004 | 0.0817 | | No log | 30.96 | 387 | 0.7344 | 0.8 | 0.3048 | 1.2516 | 0.8000 | 0.7818 | 0.2183 | 0.0837 | | No log | 32.0 | 400 | 0.7332 | 0.795 | 0.3037 | 1.3836 | 0.795 | 0.7686 | 0.2185 | 0.0844 | | No log | 32.96 | 412 | 0.7306 | 0.81 | 0.3042 | 1.1767 | 0.81 | 0.7905 | 0.2117 | 0.0837 | | No log | 34.0 | 425 | 0.7326 | 0.8 | 0.3040 | 1.2058 | 0.8000 | 0.7783 | 0.2106 | 0.0857 | | No log | 34.96 | 437 | 0.7317 | 0.8 | 0.3045 | 1.3068 | 0.8000 | 0.7733 | 0.2337 | 0.0843 | | No log | 36.0 | 450 | 0.7345 | 0.805 | 0.3073 | 1.3065 | 0.805 | 0.7782 | 0.1928 | 0.0823 | | No log | 36.96 | 462 | 0.7367 | 0.8 | 0.3074 | 1.3259 | 0.8000 | 0.7733 | 0.1941 | 0.0860 | | No log | 38.0 | 475 | 0.7349 | 0.8 | 0.3073 | 1.3074 | 0.8000 | 0.7731 | 0.2138 | 0.0853 | | No log | 38.96 | 487 | 0.7331 | 0.81 | 0.3057 | 1.3149 | 0.81 | 0.7909 | 0.1981 | 0.0865 | | 0.1577 | 40.0 | 500 | 0.7269 | 0.8 | 0.3018 | 1.3700 | 0.8000 | 0.7746 | 0.2033 | 0.0865 | | 0.1577 | 40.96 | 512 | 0.7270 | 0.8 | 0.3020 | 1.3687 | 0.8000 | 0.7737 | 0.2108 | 0.0860 | | 0.1577 | 42.0 | 525 | 0.7356 | 0.805 | 0.3078 | 1.3105 | 0.805 | 0.7784 | 0.2053 | 0.0892 | | 0.1577 | 42.96 | 537 | 0.7291 | 0.8 | 0.3031 | 1.3687 | 0.8000 | 0.7746 | 0.2066 | 0.0876 | | 0.1577 | 44.0 | 550 | 0.7276 | 0.81 | 0.3034 | 1.3655 | 0.81 | 0.7844 | 0.2189 | 0.0872 | | 0.1577 | 44.96 | 562 | 0.7318 | 0.805 | 0.3050 | 1.3684 | 0.805 | 0.7793 | 0.2209 | 0.0893 | | 0.1577 | 46.0 | 575 | 0.7300 | 0.805 | 0.3041 | 1.3679 | 0.805 | 0.7793 | 0.2040 | 0.0885 | | 0.1577 | 46.96 | 587 | 0.7342 | 0.805 | 0.3060 | 1.3679 | 0.805 | 0.7797 | 0.2059 | 0.0893 | | 0.1577 | 48.0 | 600 | 0.7303 | 0.805 | 0.3045 | 1.3672 | 0.805 | 0.7797 
| 0.1862 | 0.0889 | | 0.1577 | 48.96 | 612 | 0.7401 | 0.8 | 0.3090 | 1.3710 | 0.8000 | 0.7746 | 0.1930 | 0.0915 | | 0.1577 | 50.0 | 625 | 0.7329 | 0.795 | 0.3054 | 1.3696 | 0.795 | 0.7654 | 0.1984 | 0.0891 | | 0.1577 | 50.96 | 637 | 0.7363 | 0.795 | 0.3072 | 1.3689 | 0.795 | 0.7654 | 0.2196 | 0.0907 | | 0.1577 | 52.0 | 650 | 0.7402 | 0.805 | 0.3101 | 1.3646 | 0.805 | 0.7784 | 0.2028 | 0.0911 | | 0.1577 | 52.96 | 662 | 0.7347 | 0.8 | 0.3065 | 1.3687 | 0.8000 | 0.7746 | 0.2062 | 0.0894 | | 0.1577 | 54.0 | 675 | 0.7388 | 0.805 | 0.3097 | 1.3649 | 0.805 | 0.7784 | 0.2027 | 0.0907 | | 0.1577 | 54.96 | 687 | 0.7381 | 0.8 | 0.3087 | 1.3681 | 0.8000 | 0.7704 | 0.2120 | 0.0908 | | 0.1577 | 56.0 | 700 | 0.7372 | 0.805 | 0.3088 | 1.3646 | 0.805 | 0.7749 | 0.1866 | 0.0903 | | 0.1577 | 56.96 | 712 | 0.7403 | 0.805 | 0.3102 | 1.3682 | 0.805 | 0.7749 | 0.2287 | 0.0922 | | 0.1577 | 58.0 | 725 | 0.7352 | 0.8 | 0.3069 | 1.3680 | 0.8000 | 0.7704 | 0.2117 | 0.0900 | | 0.1577 | 58.96 | 737 | 0.7373 | 0.8 | 0.3079 | 1.3699 | 0.8000 | 0.7704 | 0.1990 | 0.0923 | | 0.1577 | 60.0 | 750 | 0.7353 | 0.795 | 0.3065 | 1.3690 | 0.795 | 0.7656 | 0.2078 | 0.0900 | | 0.1577 | 60.96 | 762 | 0.7357 | 0.805 | 0.3071 | 1.3657 | 0.805 | 0.7732 | 0.2076 | 0.0899 | | 0.1577 | 62.0 | 775 | 0.7409 | 0.79 | 0.3103 | 1.3737 | 0.79 | 0.7623 | 0.2066 | 0.0920 | | 0.1577 | 62.96 | 787 | 0.7393 | 0.795 | 0.3082 | 1.4518 | 0.795 | 0.7670 | 0.2047 | 0.0912 | | 0.1577 | 64.0 | 800 | 0.7417 | 0.8 | 0.3093 | 1.3304 | 0.8000 | 0.7684 | 0.1955 | 0.0917 | | 0.1577 | 64.96 | 812 | 0.7438 | 0.8 | 0.3121 | 1.3714 | 0.8000 | 0.7707 | 0.1782 | 0.0920 | | 0.1577 | 66.0 | 825 | 0.7408 | 0.8 | 0.3100 | 1.3758 | 0.8000 | 0.7709 | 0.1965 | 0.0931 | | 0.1577 | 66.96 | 837 | 0.7434 | 0.8 | 0.3112 | 1.3767 | 0.8000 | 0.7707 | 0.2124 | 0.0935 | | 0.1577 | 68.0 | 850 | 0.7393 | 0.8 | 0.3107 | 1.3038 | 0.8000 | 0.7704 | 0.1786 | 0.0901 | | 0.1577 | 68.96 | 862 | 0.7383 | 0.8 | 0.3090 | 1.3689 | 0.8000 | 0.7704 | 0.2041 | 0.0913 | | 
0.1577 | 70.0 | 875 | 0.7436 | 0.8 | 0.3119 | 1.3658 | 0.8000 | 0.7704 | 0.1983 | 0.0932 | | 0.1577 | 70.96 | 887 | 0.7463 | 0.8 | 0.3130 | 1.3700 | 0.8000 | 0.7707 | 0.1932 | 0.0947 | | 0.1577 | 72.0 | 900 | 0.7464 | 0.795 | 0.3135 | 1.3720 | 0.795 | 0.7656 | 0.2089 | 0.0932 | | 0.1577 | 72.96 | 912 | 0.7469 | 0.8 | 0.3137 | 1.3703 | 0.8000 | 0.7707 | 0.2004 | 0.0943 | | 0.1577 | 74.0 | 925 | 0.7435 | 0.8 | 0.3124 | 1.3674 | 0.8000 | 0.7704 | 0.1958 | 0.0930 | | 0.1577 | 74.96 | 937 | 0.7427 | 0.8 | 0.3117 | 1.3708 | 0.8000 | 0.7707 | 0.2224 | 0.0921 | | 0.1577 | 76.0 | 950 | 0.7420 | 0.8 | 0.3111 | 1.3664 | 0.8000 | 0.7704 | 0.2145 | 0.0928 | | 0.1577 | 76.96 | 962 | 0.7457 | 0.8 | 0.3135 | 1.3690 | 0.8000 | 0.7707 | 0.2178 | 0.0934 | | 0.1577 | 78.0 | 975 | 0.7513 | 0.8 | 0.3163 | 1.3707 | 0.8000 | 0.7707 | 0.1964 | 0.0947 | | 0.1577 | 78.96 | 987 | 0.7466 | 0.8 | 0.3139 | 1.3722 | 0.8000 | 0.7704 | 0.2001 | 0.0936 | | 0.1081 | 80.0 | 1000 | 0.7491 | 0.8 | 0.3154 | 1.3712 | 0.8000 | 0.7707 | 0.2100 | 0.0943 | | 0.1081 | 80.96 | 1012 | 0.7483 | 0.8 | 0.3150 | 1.3675 | 0.8000 | 0.7704 | 0.2083 | 0.0939 | | 0.1081 | 82.0 | 1025 | 0.7523 | 0.8 | 0.3163 | 1.3742 | 0.8000 | 0.7707 | 0.2095 | 0.0958 | | 0.1081 | 82.96 | 1037 | 0.7511 | 0.8 | 0.3166 | 1.3703 | 0.8000 | 0.7707 | 0.2034 | 0.0944 | | 0.1081 | 84.0 | 1050 | 0.7481 | 0.8 | 0.3150 | 1.3687 | 0.8000 | 0.7704 | 0.2113 | 0.0941 | | 0.1081 | 84.96 | 1062 | 0.7501 | 0.8 | 0.3164 | 1.3668 | 0.8000 | 0.7693 | 0.2053 | 0.0932 | | 0.1081 | 86.0 | 1075 | 0.7539 | 0.8 | 0.3177 | 1.3725 | 0.8000 | 0.7707 | 0.2025 | 0.0951 | | 0.1081 | 86.96 | 1087 | 0.7550 | 0.8 | 0.3182 | 1.3731 | 0.8000 | 0.7707 | 0.1969 | 0.0953 | | 0.1081 | 88.0 | 1100 | 0.7553 | 0.8 | 0.3183 | 1.3697 | 0.8000 | 0.7707 | 0.1972 | 0.0952 | | 0.1081 | 88.96 | 1112 | 0.7535 | 0.8 | 0.3176 | 1.3719 | 0.8000 | 0.7707 | 0.2073 | 0.0945 | | 0.1081 | 90.0 | 1125 | 0.7558 | 0.795 | 0.3186 | 1.3742 | 0.795 | 0.7681 | 0.2018 | 0.0959 | | 0.1081 | 90.96 | 1137 | 
0.7573 | 0.8 | 0.3193 | 1.3739 | 0.8000 | 0.7704 | 0.1919 | 0.0965 | | 0.1081 | 92.0 | 1150 | 0.7565 | 0.8 | 0.3193 | 1.3743 | 0.8000 | 0.7698 | 0.1967 | 0.0959 | | 0.1081 | 92.96 | 1162 | 0.7619 | 0.795 | 0.3218 | 1.3758 | 0.795 | 0.7681 | 0.1989 | 0.0974 | | 0.1081 | 94.0 | 1175 | 0.7577 | 0.8 | 0.3198 | 1.3793 | 0.8000 | 0.7696 | 0.1996 | 0.0957 | | 0.1081 | 94.96 | 1187 | 0.7575 | 0.795 | 0.3201 | 1.3781 | 0.795 | 0.7666 | 0.1954 | 0.0964 | | 0.1081 | 96.0 | 1200 | 0.7573 | 0.8 | 0.3199 | 1.3752 | 0.8000 | 0.7693 | 0.1863 | 0.0955 | | 0.1081 | 96.96 | 1212 | 0.7615 | 0.795 | 0.3216 | 1.3753 | 0.795 | 0.7681 | 0.1997 | 0.0975 | | 0.1081 | 98.0 | 1225 | 0.7603 | 0.795 | 0.3215 | 1.3731 | 0.795 | 0.7681 | 0.2051 | 0.0963 | | 0.1081 | 98.96 | 1237 | 0.7596 | 0.795 | 0.3209 | 1.3744 | 0.795 | 0.7673 | 0.2081 | 0.0959 | | 0.1081 | 100.0 | 1250 | 0.7582 | 0.795 | 0.3203 | 1.3743 | 0.795 | 0.7673 | 0.2024 | 0.0955 | | 0.1081 | 100.96 | 1262 | 0.7609 | 0.795 | 0.3223 | 1.3761 | 0.795 | 0.7681 | 0.1823 | 0.0968 | | 0.1081 | 102.0 | 1275 | 0.7632 | 0.785 | 0.3233 | 1.3758 | 0.785 | 0.7528 | 0.1833 | 0.0970 | | 0.1081 | 102.96 | 1287 | 0.7618 | 0.785 | 0.3219 | 1.3785 | 0.785 | 0.7516 | 0.2141 | 0.0970 | | 0.1081 | 104.0 | 1300 | 0.7633 | 0.795 | 0.3230 | 1.4970 | 0.795 | 0.7664 | 0.1956 | 0.0952 | | 0.1081 | 104.96 | 1312 | 0.7657 | 0.79 | 0.3243 | 1.4406 | 0.79 | 0.7639 | 0.1960 | 0.0961 | | 0.1081 | 106.0 | 1325 | 0.7673 | 0.785 | 0.3251 | 1.4424 | 0.785 | 0.7516 | 0.2083 | 0.0978 | | 0.1081 | 106.96 | 1337 | 0.7667 | 0.79 | 0.3250 | 1.4392 | 0.79 | 0.7639 | 0.1875 | 0.0976 | | 0.1081 | 108.0 | 1350 | 0.7690 | 0.785 | 0.3250 | 1.3876 | 0.785 | 0.7526 | 0.2078 | 0.0990 | | 0.1081 | 108.96 | 1362 | 0.7676 | 0.785 | 0.3252 | 1.3872 | 0.785 | 0.7554 | 0.2073 | 0.0985 | | 0.1081 | 110.0 | 1375 | 0.7662 | 0.79 | 0.3249 | 1.4335 | 0.79 | 0.7639 | 0.1939 | 0.0980 | | 0.1081 | 110.96 | 1387 | 0.7723 | 0.785 | 0.3273 | 1.4567 | 0.785 | 0.7554 | 0.2066 | 0.0995 | | 0.1081 | 112.0 
| 1400 | 0.7665 | 0.78 | 0.3250 | 1.3960 | 0.78 | 0.7488 | 0.2066 | 0.0976 | | 0.1081 | 112.96 | 1412 | 0.7722 | 0.785 | 0.3275 | 1.4410 | 0.785 | 0.7573 | 0.2063 | 0.0991 | | 0.1081 | 114.0 | 1425 | 0.7722 | 0.79 | 0.3271 | 1.4039 | 0.79 | 0.7639 | 0.1902 | 0.0990 | | 0.1081 | 114.96 | 1437 | 0.7699 | 0.79 | 0.3264 | 1.3849 | 0.79 | 0.7644 | 0.1914 | 0.0982 | | 0.1081 | 116.0 | 1450 | 0.7749 | 0.785 | 0.3285 | 1.3854 | 0.785 | 0.7573 | 0.1942 | 0.0999 | | 0.1081 | 116.96 | 1462 | 0.7722 | 0.78 | 0.3279 | 1.4365 | 0.78 | 0.7488 | 0.1973 | 0.0991 | | 0.1081 | 118.0 | 1475 | 0.7763 | 0.78 | 0.3293 | 1.3823 | 0.78 | 0.7488 | 0.2050 | 0.1006 | | 0.1081 | 118.96 | 1487 | 0.7740 | 0.78 | 0.3287 | 1.3822 | 0.78 | 0.7488 | 0.2105 | 0.0991 | | 0.0821 | 120.0 | 1500 | 0.7761 | 0.785 | 0.3294 | 1.4414 | 0.785 | 0.7573 | 0.1996 | 0.0995 | | 0.0821 | 120.96 | 1512 | 0.7749 | 0.78 | 0.3289 | 1.4387 | 0.78 | 0.7488 | 0.1981 | 0.0991 | | 0.0821 | 122.0 | 1525 | 0.7763 | 0.78 | 0.3297 | 1.4395 | 0.78 | 0.7488 | 0.2175 | 0.0993 | | 0.0821 | 122.96 | 1537 | 0.7775 | 0.78 | 0.3305 | 1.4407 | 0.78 | 0.7488 | 0.2073 | 0.0993 | | 0.0821 | 124.0 | 1550 | 0.7770 | 0.78 | 0.3299 | 1.4411 | 0.78 | 0.7488 | 0.2096 | 0.0996 | | 0.0821 | 124.96 | 1562 | 0.7785 | 0.78 | 0.3309 | 1.4415 | 0.78 | 0.7488 | 0.2174 | 0.1004 | | 0.0821 | 126.0 | 1575 | 0.7808 | 0.78 | 0.3321 | 1.4431 | 0.78 | 0.7488 | 0.2082 | 0.1005 | | 0.0821 | 126.96 | 1587 | 0.7791 | 0.78 | 0.3312 | 1.4405 | 0.78 | 0.7488 | 0.2087 | 0.0998 | | 0.0821 | 128.0 | 1600 | 0.7789 | 0.78 | 0.3312 | 1.4386 | 0.78 | 0.7488 | 0.2047 | 0.0995 | | 0.0821 | 128.96 | 1612 | 0.7829 | 0.78 | 0.3330 | 1.4423 | 0.78 | 0.7488 | 0.1920 | 0.1005 | | 0.0821 | 130.0 | 1625 | 0.7797 | 0.78 | 0.3317 | 1.4400 | 0.78 | 0.7488 | 0.2013 | 0.1006 | | 0.0821 | 130.96 | 1637 | 0.7849 | 0.78 | 0.3336 | 1.4446 | 0.78 | 0.7491 | 0.2064 | 0.1006 | | 0.0821 | 132.0 | 1650 | 0.7817 | 0.78 | 0.3322 | 1.4396 | 0.78 | 0.7488 | 0.2060 | 0.1003 | | 0.0821 | 132.96 | 1662 | 
0.7823 | 0.78 | 0.3329 | 1.4407 | 0.78 | 0.7488 | 0.1990 | 0.0999 | | 0.0821 | 134.0 | 1675 | 0.7869 | 0.78 | 0.3354 | 1.4482 | 0.78 | 0.7488 | 0.1999 | 0.1009 | | 0.0821 | 134.96 | 1687 | 0.7859 | 0.78 | 0.3349 | 1.4429 | 0.78 | 0.7488 | 0.1934 | 0.1013 | | 0.0821 | 136.0 | 1700 | 0.7867 | 0.78 | 0.3352 | 1.4437 | 0.78 | 0.7488 | 0.2114 | 0.1006 | | 0.0821 | 136.96 | 1712 | 0.7867 | 0.78 | 0.3350 | 1.4403 | 0.78 | 0.7488 | 0.2070 | 0.1011 | | 0.0821 | 138.0 | 1725 | 0.7851 | 0.78 | 0.3341 | 1.4439 | 0.78 | 0.7488 | 0.1906 | 0.1009 | | 0.0821 | 138.96 | 1737 | 0.7892 | 0.78 | 0.3360 | 1.4495 | 0.78 | 0.7488 | 0.2009 | 0.1020 | | 0.0821 | 140.0 | 1750 | 0.7893 | 0.78 | 0.3366 | 1.4434 | 0.78 | 0.7488 | 0.1976 | 0.1013 | | 0.0821 | 140.96 | 1762 | 0.7848 | 0.78 | 0.3344 | 1.4383 | 0.78 | 0.7488 | 0.1995 | 0.1001 | | 0.0821 | 142.0 | 1775 | 0.7911 | 0.78 | 0.3372 | 1.4487 | 0.78 | 0.7488 | 0.1995 | 0.1020 | | 0.0821 | 142.96 | 1787 | 0.7890 | 0.78 | 0.3362 | 1.4416 | 0.78 | 0.7488 | 0.2075 | 0.1010 | | 0.0821 | 144.0 | 1800 | 0.7915 | 0.78 | 0.3372 | 1.4476 | 0.78 | 0.7488 | 0.1842 | 0.1019 | | 0.0821 | 144.96 | 1812 | 0.7876 | 0.78 | 0.3351 | 1.4999 | 0.78 | 0.7488 | 0.1904 | 0.0995 | | 0.0821 | 146.0 | 1825 | 0.7933 | 0.78 | 0.3378 | 1.4469 | 0.78 | 0.7488 | 0.1973 | 0.1023 | | 0.0821 | 146.96 | 1837 | 0.7932 | 0.78 | 0.3383 | 1.4441 | 0.78 | 0.7488 | 0.2070 | 0.1016 | | 0.0821 | 148.0 | 1850 | 0.7907 | 0.78 | 0.3369 | 1.4439 | 0.78 | 0.7488 | 0.1932 | 0.1014 | | 0.0821 | 148.96 | 1862 | 0.7939 | 0.78 | 0.3386 | 1.4462 | 0.78 | 0.7488 | 0.1906 | 0.1015 | | 0.0821 | 150.0 | 1875 | 0.7943 | 0.78 | 0.3386 | 1.4449 | 0.78 | 0.7488 | 0.1965 | 0.1016 | | 0.0821 | 150.96 | 1887 | 0.7955 | 0.78 | 0.3393 | 1.5025 | 0.78 | 0.7488 | 0.2112 | 0.1015 | | 0.0821 | 152.0 | 1900 | 0.7936 | 0.78 | 0.3386 | 1.4407 | 0.78 | 0.7488 | 0.2112 | 0.1012 | | 0.0821 | 152.96 | 1912 | 0.7966 | 0.78 | 0.3400 | 1.5033 | 0.78 | 0.7488 | 0.1963 | 0.1012 | | 0.0821 | 154.0 | 1925 | 0.7981 | 0.78 | 
0.3405 | 1.4495 | 0.78 | 0.7488 | 0.1895 | 0.1020 | | 0.0821 | 154.96 | 1937 | 0.7972 | 0.78 | 0.3401 | 1.4417 | 0.78 | 0.7488 | 0.1953 | 0.1018 | | 0.0821 | 156.0 | 1950 | 0.7922 | 0.78 | 0.3381 | 1.4395 | 0.78 | 0.7488 | 0.2056 | 0.0999 | | 0.0821 | 156.96 | 1962 | 0.8013 | 0.775 | 0.3425 | 1.4473 | 0.775 | 0.7451 | 0.1869 | 0.1028 | | 0.0821 | 158.0 | 1975 | 0.7977 | 0.78 | 0.3403 | 1.4446 | 0.78 | 0.7488 | 0.1872 | 0.1014 | | 0.0821 | 158.96 | 1987 | 0.7990 | 0.78 | 0.3412 | 1.4413 | 0.78 | 0.7488 | 0.1939 | 0.1017 | | 0.0668 | 160.0 | 2000 | 0.8048 | 0.775 | 0.3435 | 1.4532 | 0.775 | 0.7451 | 0.1966 | 0.1049 | | 0.0668 | 160.96 | 2012 | 0.8064 | 0.77 | 0.3448 | 1.4529 | 0.7700 | 0.7358 | 0.1953 | 0.1044 | | 0.0668 | 162.0 | 2025 | 0.7989 | 0.78 | 0.3412 | 1.4423 | 0.78 | 0.7488 | 0.2038 | 0.1022 | | 0.0668 | 162.96 | 2037 | 0.8001 | 0.78 | 0.3414 | 1.4440 | 0.78 | 0.7488 | 0.1972 | 0.1015 | | 0.0668 | 164.0 | 2050 | 0.8068 | 0.775 | 0.3448 | 1.4523 | 0.775 | 0.7396 | 0.2031 | 0.1036 | | 0.0668 | 164.96 | 2062 | 0.8046 | 0.785 | 0.3438 | 1.4475 | 0.785 | 0.7536 | 0.2070 | 0.1037 | | 0.0668 | 166.0 | 2075 | 0.8016 | 0.78 | 0.3426 | 1.4451 | 0.78 | 0.7488 | 0.1975 | 0.1012 | | 0.0668 | 166.96 | 2087 | 0.8053 | 0.78 | 0.3442 | 1.4485 | 0.78 | 0.7477 | 0.2112 | 0.1022 | | 0.0668 | 168.0 | 2100 | 0.8040 | 0.78 | 0.3433 | 1.4459 | 0.78 | 0.7422 | 0.2014 | 0.1031 | | 0.0668 | 168.96 | 2112 | 0.8048 | 0.785 | 0.3437 | 1.4479 | 0.785 | 0.7515 | 0.2046 | 0.1033 | | 0.0668 | 170.0 | 2125 | 0.8054 | 0.775 | 0.3447 | 1.5060 | 0.775 | 0.7450 | 0.1896 | 0.1017 | | 0.0668 | 170.96 | 2137 | 0.8067 | 0.775 | 0.3451 | 1.5079 | 0.775 | 0.7450 | 0.1898 | 0.1018 | | 0.0668 | 172.0 | 2150 | 0.8060 | 0.78 | 0.3447 | 1.4508 | 0.78 | 0.7488 | 0.1842 | 0.1022 | | 0.0668 | 172.96 | 2162 | 0.8127 | 0.77 | 0.3484 | 1.4513 | 0.7700 | 0.7358 | 0.2006 | 0.1042 | | 0.0668 | 174.0 | 2175 | 0.8080 | 0.77 | 0.3457 | 1.4453 | 0.7700 | 0.7349 | 0.2198 | 0.1034 | | 0.0668 | 174.96 | 2187 | 0.8095 | 
0.775 | 0.3460 | 1.4471 | 0.775 | 0.7384 | 0.2029 | 0.1027 | | 0.0668 | 176.0 | 2200 | 0.8112 | 0.775 | 0.3467 | 1.4559 | 0.775 | 0.7395 | 0.1995 | 0.1036 | | 0.0668 | 176.96 | 2212 | 0.8089 | 0.77 | 0.3460 | 1.4485 | 0.7700 | 0.7357 | 0.2050 | 0.1019 | | 0.0668 | 178.0 | 2225 | 0.8093 | 0.77 | 0.3461 | 1.4459 | 0.7700 | 0.7357 | 0.1989 | 0.1021 | | 0.0668 | 178.96 | 2237 | 0.8118 | 0.775 | 0.3473 | 1.4499 | 0.775 | 0.7384 | 0.2085 | 0.1029 | | 0.0668 | 180.0 | 2250 | 0.8112 | 0.775 | 0.3472 | 1.4471 | 0.775 | 0.7384 | 0.2070 | 0.1027 | | 0.0668 | 180.96 | 2262 | 0.8124 | 0.77 | 0.3478 | 1.4484 | 0.7700 | 0.7357 | 0.1983 | 0.1029 | | 0.0668 | 182.0 | 2275 | 0.8140 | 0.77 | 0.3484 | 1.4489 | 0.7700 | 0.7357 | 0.1987 | 0.1038 | | 0.0668 | 182.96 | 2287 | 0.8137 | 0.77 | 0.3483 | 1.4491 | 0.7700 | 0.7357 | 0.2036 | 0.1030 | | 0.0668 | 184.0 | 2300 | 0.8133 | 0.77 | 0.3481 | 1.4468 | 0.7700 | 0.7357 | 0.2012 | 0.1024 | | 0.0668 | 184.96 | 2312 | 0.8152 | 0.77 | 0.3489 | 1.4525 | 0.7700 | 0.7357 | 0.1996 | 0.1029 | | 0.0668 | 186.0 | 2325 | 0.8149 | 0.77 | 0.3490 | 1.4511 | 0.7700 | 0.7357 | 0.1917 | 0.1027 | | 0.0668 | 186.96 | 2337 | 0.8151 | 0.77 | 0.3490 | 1.4489 | 0.7700 | 0.7357 | 0.1956 | 0.1028 | | 0.0668 | 188.0 | 2350 | 0.8175 | 0.77 | 0.3500 | 1.5084 | 0.7700 | 0.7357 | 0.2011 | 0.1038 | | 0.0668 | 188.96 | 2362 | 0.8181 | 0.765 | 0.3499 | 1.4506 | 0.765 | 0.7323 | 0.1975 | 0.1056 | | 0.0668 | 190.0 | 2375 | 0.8180 | 0.765 | 0.3504 | 1.4499 | 0.765 | 0.7323 | 0.2162 | 0.1050 | | 0.0668 | 190.96 | 2387 | 0.8168 | 0.77 | 0.3498 | 1.4510 | 0.7700 | 0.7357 | 0.2014 | 0.1039 | | 0.0668 | 192.0 | 2400 | 0.8183 | 0.77 | 0.3505 | 1.4483 | 0.7700 | 0.7379 | 0.2114 | 0.1032 | | 0.0668 | 192.96 | 2412 | 0.8193 | 0.775 | 0.3507 | 1.4508 | 0.775 | 0.7384 | 0.2025 | 0.1042 | | 0.0668 | 194.0 | 2425 | 0.8181 | 0.77 | 0.3503 | 1.4565 | 0.7700 | 0.7357 | 0.2090 | 0.1027 | | 0.0668 | 194.96 | 2437 | 0.8192 | 0.77 | 0.3507 | 1.4513 | 0.7700 | 0.7357 | 0.1953 | 0.1032 | | 0.0668 
| 196.0 | 2450 | 0.8214 | 0.77 | 0.3520 | 1.4519 | 0.7700 | 0.7349 | 0.2112 | 0.1045 | | 0.0668 | 196.96 | 2462 | 0.8231 | 0.765 | 0.3531 | 1.4517 | 0.765 | 0.7323 | 0.2042 | 0.1049 | | 0.0668 | 198.0 | 2475 | 0.8219 | 0.77 | 0.3521 | 1.4512 | 0.7700 | 0.7349 | 0.2152 | 0.1044 | | 0.0668 | 198.96 | 2487 | 0.8223 | 0.77 | 0.3523 | 1.4507 | 0.7700 | 0.7349 | 0.1888 | 0.1050 | | 0.0571 | 200.0 | 2500 | 0.8235 | 0.77 | 0.3529 | 1.4533 | 0.7700 | 0.7349 | 0.2029 | 0.1050 | | 0.0571 | 200.96 | 2512 | 0.8227 | 0.77 | 0.3525 | 1.4718 | 0.7700 | 0.7357 | 0.2170 | 0.1033 | | 0.0571 | 202.0 | 2525 | 0.8226 | 0.77 | 0.3525 | 1.4505 | 0.7700 | 0.7349 | 0.1954 | 0.1041 | | 0.0571 | 202.96 | 2537 | 0.8231 | 0.765 | 0.3530 | 1.4506 | 0.765 | 0.7321 | 0.1962 | 0.1046 | | 0.0571 | 204.0 | 2550 | 0.8255 | 0.77 | 0.3535 | 1.4520 | 0.7700 | 0.7380 | 0.2078 | 0.1060 | | 0.0571 | 204.96 | 2562 | 0.8276 | 0.77 | 0.3550 | 1.4594 | 0.7700 | 0.7349 | 0.2013 | 0.1046 | | 0.0571 | 206.0 | 2575 | 0.8257 | 0.77 | 0.3542 | 1.4532 | 0.7700 | 0.7349 | 0.1987 | 0.1040 | | 0.0571 | 206.96 | 2587 | 0.8248 | 0.775 | 0.3536 | 1.4499 | 0.775 | 0.7406 | 0.1903 | 0.1043 | | 0.0571 | 208.0 | 2600 | 0.8250 | 0.77 | 0.3534 | 1.4537 | 0.7700 | 0.7349 | 0.2070 | 0.1040 | | 0.0571 | 208.96 | 2612 | 0.8277 | 0.77 | 0.3548 | 1.4521 | 0.7700 | 0.7380 | 0.1867 | 0.1058 | | 0.0571 | 210.0 | 2625 | 0.8271 | 0.77 | 0.3545 | 1.4543 | 0.7700 | 0.7349 | 0.2213 | 0.1036 | | 0.0571 | 210.96 | 2637 | 0.8284 | 0.775 | 0.3552 | 1.4516 | 0.775 | 0.7406 | 0.1992 | 0.1053 | | 0.0571 | 212.0 | 2650 | 0.8278 | 0.77 | 0.3545 | 1.4533 | 0.7700 | 0.7360 | 0.1938 | 0.1056 | | 0.0571 | 212.96 | 2662 | 0.8289 | 0.77 | 0.3552 | 1.4533 | 0.7700 | 0.7380 | 0.2017 | 0.1057 | | 0.0571 | 214.0 | 2675 | 0.8290 | 0.775 | 0.3556 | 1.4530 | 0.775 | 0.7406 | 0.2005 | 0.1052 | | 0.0571 | 214.96 | 2687 | 0.8282 | 0.77 | 0.3551 | 1.4517 | 0.7700 | 0.7379 | 0.1985 | 0.1037 | | 0.0571 | 216.0 | 2700 | 0.8294 | 0.77 | 0.3555 | 1.4588 | 0.7700 | 0.7349 | 
0.1941 | 0.1045 | | 0.0571 | 216.96 | 2712 | 0.8305 | 0.775 | 0.3562 | 1.4516 | 0.775 | 0.7406 | 0.1977 | 0.1057 | | 0.0571 | 218.0 | 2725 | 0.8310 | 0.77 | 0.3565 | 1.4539 | 0.7700 | 0.7380 | 0.1926 | 0.1054 | | 0.0571 | 218.96 | 2737 | 0.8304 | 0.775 | 0.3560 | 1.4516 | 0.775 | 0.7406 | 0.1986 | 0.1054 | | 0.0571 | 220.0 | 2750 | 0.8320 | 0.775 | 0.3568 | 1.4545 | 0.775 | 0.7406 | 0.1953 | 0.1054 | | 0.0571 | 220.96 | 2762 | 0.8316 | 0.775 | 0.3569 | 1.4523 | 0.775 | 0.7406 | 0.1945 | 0.1045 | | 0.0571 | 222.0 | 2775 | 0.8330 | 0.77 | 0.3573 | 1.4547 | 0.7700 | 0.7380 | 0.1892 | 0.1067 | | 0.0571 | 222.96 | 2787 | 0.8309 | 0.77 | 0.3563 | 1.4548 | 0.7700 | 0.7379 | 0.2060 | 0.1033 | | 0.0571 | 224.0 | 2800 | 0.8323 | 0.775 | 0.3572 | 1.4515 | 0.775 | 0.7406 | 0.1910 | 0.1050 | | 0.0571 | 224.96 | 2812 | 0.8329 | 0.775 | 0.3569 | 1.4530 | 0.775 | 0.7406 | 0.1931 | 0.1055 | | 0.0571 | 226.0 | 2825 | 0.8319 | 0.78 | 0.3567 | 1.4513 | 0.78 | 0.7444 | 0.2038 | 0.1043 | | 0.0571 | 226.96 | 2837 | 0.8354 | 0.77 | 0.3586 | 1.4556 | 0.7700 | 0.7380 | 0.1969 | 0.1068 | | 0.0571 | 228.0 | 2850 | 0.8340 | 0.78 | 0.3575 | 1.4550 | 0.78 | 0.7444 | 0.2043 | 0.1062 | | 0.0571 | 228.96 | 2862 | 0.8355 | 0.775 | 0.3584 | 1.4546 | 0.775 | 0.7406 | 0.2048 | 0.1055 | | 0.0571 | 230.0 | 2875 | 0.8350 | 0.78 | 0.3579 | 1.4538 | 0.78 | 0.7444 | 0.2069 | 0.1064 | | 0.0571 | 230.96 | 2887 | 0.8358 | 0.77 | 0.3584 | 1.4550 | 0.7700 | 0.7380 | 0.1899 | 0.1061 | | 0.0571 | 232.0 | 2900 | 0.8366 | 0.77 | 0.3587 | 1.4564 | 0.7700 | 0.7380 | 0.1921 | 0.1070 | | 0.0571 | 232.96 | 2912 | 0.8364 | 0.775 | 0.3587 | 1.4557 | 0.775 | 0.7418 | 0.1970 | 0.1065 | | 0.0571 | 234.0 | 2925 | 0.8359 | 0.775 | 0.3585 | 1.4543 | 0.775 | 0.7406 | 0.1912 | 0.1061 | | 0.0571 | 234.96 | 2937 | 0.8360 | 0.775 | 0.3587 | 1.4540 | 0.775 | 0.7406 | 0.2017 | 0.1049 | | 0.0571 | 236.0 | 2950 | 0.8362 | 0.78 | 0.3587 | 1.4527 | 0.78 | 0.7444 | 0.1985 | 0.1060 | | 0.0571 | 236.96 | 2962 | 0.8375 | 0.78 | 0.3593 | 1.4554 
| 0.78 | 0.7444 | 0.2035 | 0.1061 | | 0.0571 | 238.0 | 2975 | 0.8378 | 0.775 | 0.3593 | 1.4544 | 0.775 | 0.7418 | 0.1971 | 0.1068 | | 0.0571 | 238.96 | 2987 | 0.8369 | 0.78 | 0.3588 | 1.4557 | 0.78 | 0.7444 | 0.2178 | 0.1057 | | 0.0512 | 240.0 | 3000 | 0.8388 | 0.77 | 0.3600 | 1.4558 | 0.7700 | 0.7380 | 0.1939 | 0.1067 | | 0.0512 | 240.96 | 3012 | 0.8375 | 0.78 | 0.3593 | 1.4540 | 0.78 | 0.7444 | 0.2071 | 0.1058 | | 0.0512 | 242.0 | 3025 | 0.8393 | 0.775 | 0.3602 | 1.4546 | 0.775 | 0.7406 | 0.1990 | 0.1066 | | 0.0512 | 242.96 | 3037 | 0.8391 | 0.775 | 0.3601 | 1.4551 | 0.775 | 0.7406 | 0.2025 | 0.1063 | | 0.0512 | 244.0 | 3050 | 0.8414 | 0.77 | 0.3610 | 1.4575 | 0.7700 | 0.7380 | 0.1924 | 0.1072 | | 0.0512 | 244.96 | 3062 | 0.8385 | 0.78 | 0.3597 | 1.4531 | 0.78 | 0.7444 | 0.2062 | 0.1059 | | 0.0512 | 246.0 | 3075 | 0.8394 | 0.78 | 0.3603 | 1.4583 | 0.78 | 0.7444 | 0.1962 | 0.1057 | | 0.0512 | 246.96 | 3087 | 0.8401 | 0.775 | 0.3604 | 1.4535 | 0.775 | 0.7406 | 0.1880 | 0.1060 | | 0.0512 | 248.0 | 3100 | 0.8400 | 0.78 | 0.3605 | 1.4550 | 0.78 | 0.7444 | 0.2156 | 0.1058 | | 0.0512 | 248.96 | 3112 | 0.8404 | 0.78 | 0.3606 | 1.4554 | 0.78 | 0.7444 | 0.1977 | 0.1061 | | 0.0512 | 250.0 | 3125 | 0.8406 | 0.78 | 0.3607 | 1.4542 | 0.78 | 0.7444 | 0.2055 | 0.1062 | | 0.0512 | 250.96 | 3137 | 0.8408 | 0.78 | 0.3608 | 1.4545 | 0.78 | 0.7444 | 0.2036 | 0.1062 | | 0.0512 | 252.0 | 3150 | 0.8414 | 0.78 | 0.3611 | 1.4560 | 0.78 | 0.7444 | 0.2054 | 0.1063 | | 0.0512 | 252.96 | 3162 | 0.8424 | 0.775 | 0.3614 | 1.4580 | 0.775 | 0.7418 | 0.2037 | 0.1072 | | 0.0512 | 254.0 | 3175 | 0.8423 | 0.775 | 0.3616 | 1.4558 | 0.775 | 0.7406 | 0.2057 | 0.1064 | | 0.0512 | 254.96 | 3187 | 0.8422 | 0.775 | 0.3613 | 1.4562 | 0.775 | 0.7418 | 0.2070 | 0.1066 | | 0.0512 | 256.0 | 3200 | 0.8419 | 0.78 | 0.3612 | 1.4562 | 0.78 | 0.7444 | 0.2196 | 0.1063 | | 0.0512 | 256.96 | 3212 | 0.8434 | 0.775 | 0.3620 | 1.4565 | 0.775 | 0.7406 | 0.2033 | 0.1065 | | 0.0512 | 258.0 | 3225 | 0.8431 | 0.775 | 0.3619 | 
1.4557 | 0.775 | 0.7418 | 0.2072 | 0.1064 | | 0.0512 | 258.96 | 3237 | 0.8435 | 0.77 | 0.3620 | 1.4567 | 0.7700 | 0.7380 | 0.1985 | 0.1066 | | 0.0512 | 260.0 | 3250 | 0.8433 | 0.78 | 0.3619 | 1.4567 | 0.78 | 0.7444 | 0.2179 | 0.1065 | | 0.0512 | 260.96 | 3262 | 0.8430 | 0.78 | 0.3619 | 1.4558 | 0.78 | 0.7444 | 0.2120 | 0.1060 | | 0.0512 | 262.0 | 3275 | 0.8432 | 0.78 | 0.3619 | 1.4552 | 0.78 | 0.7444 | 0.2058 | 0.1060 | | 0.0512 | 262.96 | 3287 | 0.8444 | 0.775 | 0.3623 | 1.4572 | 0.775 | 0.7418 | 0.2035 | 0.1068 | | 0.0512 | 264.0 | 3300 | 0.8442 | 0.775 | 0.3622 | 1.4574 | 0.775 | 0.7418 | 0.2054 | 0.1067 | | 0.0512 | 264.96 | 3312 | 0.8441 | 0.78 | 0.3623 | 1.4554 | 0.78 | 0.7444 | 0.2051 | 0.1062 | | 0.0512 | 266.0 | 3325 | 0.8446 | 0.775 | 0.3624 | 1.4561 | 0.775 | 0.7418 | 0.1975 | 0.1066 | | 0.0512 | 266.96 | 3337 | 0.8447 | 0.775 | 0.3624 | 1.4570 | 0.775 | 0.7418 | 0.2053 | 0.1065 | | 0.0512 | 268.0 | 3350 | 0.8448 | 0.78 | 0.3624 | 1.4573 | 0.78 | 0.7444 | 0.2085 | 0.1065 | | 0.0512 | 268.96 | 3362 | 0.8443 | 0.78 | 0.3624 | 1.4558 | 0.78 | 0.7444 | 0.2119 | 0.1065 | | 0.0512 | 270.0 | 3375 | 0.8453 | 0.775 | 0.3628 | 1.4571 | 0.775 | 0.7418 | 0.2035 | 0.1067 | | 0.0512 | 270.96 | 3387 | 0.8444 | 0.78 | 0.3623 | 1.4561 | 0.78 | 0.7444 | 0.2076 | 0.1063 | | 0.0512 | 272.0 | 3400 | 0.8455 | 0.775 | 0.3629 | 1.4569 | 0.775 | 0.7418 | 0.2034 | 0.1066 | | 0.0512 | 272.96 | 3412 | 0.8453 | 0.78 | 0.3628 | 1.4574 | 0.78 | 0.7444 | 0.2021 | 0.1065 | | 0.0512 | 274.0 | 3425 | 0.8450 | 0.78 | 0.3626 | 1.4560 | 0.78 | 0.7444 | 0.2058 | 0.1064 | | 0.0512 | 274.96 | 3437 | 0.8456 | 0.775 | 0.3629 | 1.4569 | 0.775 | 0.7418 | 0.2035 | 0.1066 | | 0.0512 | 276.0 | 3450 | 0.8454 | 0.775 | 0.3628 | 1.4565 | 0.775 | 0.7418 | 0.2033 | 0.1065 | | 0.0512 | 276.96 | 3462 | 0.8454 | 0.78 | 0.3628 | 1.4575 | 0.78 | 0.7444 | 0.2137 | 0.1063 | | 0.0512 | 278.0 | 3475 | 0.8457 | 0.78 | 0.3630 | 1.4567 | 0.78 | 0.7444 | 0.2092 | 0.1065 | | 0.0512 | 278.96 | 3487 | 0.8462 | 0.775 | 
0.3632 | 1.4567 | 0.775 | 0.7418 | 0.1994 | 0.1067 | | 0.0481 | 280.0 | 3500 | 0.8456 | 0.78 | 0.3630 | 1.4572 | 0.78 | 0.7444 | 0.2192 | 0.1064 | | 0.0481 | 280.96 | 3512 | 0.8462 | 0.775 | 0.3632 | 1.4571 | 0.775 | 0.7418 | 0.2034 | 0.1066 | | 0.0481 | 282.0 | 3525 | 0.8457 | 0.775 | 0.3630 | 1.4563 | 0.775 | 0.7418 | 0.2042 | 0.1065 | | 0.0481 | 282.96 | 3537 | 0.8460 | 0.775 | 0.3631 | 1.4570 | 0.775 | 0.7418 | 0.2106 | 0.1066 | | 0.0481 | 284.0 | 3550 | 0.8462 | 0.775 | 0.3632 | 1.4570 | 0.775 | 0.7418 | 0.2106 | 0.1067 | | 0.0481 | 284.96 | 3562 | 0.8460 | 0.775 | 0.3631 | 1.4567 | 0.775 | 0.7418 | 0.2042 | 0.1065 | | 0.0481 | 286.0 | 3575 | 0.8461 | 0.775 | 0.3632 | 1.4568 | 0.775 | 0.7418 | 0.2043 | 0.1066 | | 0.0481 | 286.96 | 3587 | 0.8461 | 0.775 | 0.3632 | 1.4570 | 0.775 | 0.7418 | 0.2043 | 0.1066 | | 0.0481 | 288.0 | 3600 | 0.8461 | 0.775 | 0.3632 | 1.4570 | 0.775 | 0.7418 | 0.2043 | 0.1066 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1 - Datasets 2.13.1 - Tokenizers 0.13.3
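The tables above report a multi-class Brier loss alongside accuracy. As a reference point, that metric can be computed from softmax probabilities and one-hot labels (a minimal NumPy sketch under the common definition — squared error between the probability vector and the one-hot target, summed over classes and averaged over examples; normalization conventions vary, so this is an illustration, not the trainer's exact code):

```python
import numpy as np

def brier_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """Squared error between predicted probability vectors and
    one-hot true labels, averaged over examples (multi-class Brier score)."""
    n, k = probs.shape
    one_hot = np.zeros((n, k))
    one_hot[np.arange(n), labels] = 1.0
    return float(np.mean(np.sum((probs - one_hot) ** 2, axis=1)))

# A confident, correct prediction scores near 0; a uniform guess scores worse.
confident = np.array([[0.9, 0.05, 0.05]])
uniform = np.array([[1 / 3, 1 / 3, 1 / 3]])
labels = np.array([0])
print(brier_loss(confident, labels))  # 0.015
print(brier_loss(uniform, labels))    # ~0.667
```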
plncmm/roberta-clinical-wl-es
plncmm
2023-07-13T13:16:20Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-06-07T22:52:54Z
--- license: apache-2.0 language: - es widget: - text: "Periodontitis <mask> generalizada severa." - text: "Caries dentinaria <mask>." - text: "Movilidad aumentada en pza <mask>." - text: "Pcte con dm en tto con <mask>." - text: "Pcte con erc en tto con <mask>." tags: - generated_from_trainer model-index: - name: roberta-clinical-wl-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # plncmm/roberta-clinical-wl-es This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the Chilean waiting list dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
aga3134/my_awesome_eli5_clm-model
aga3134
2023-07-13T13:13:34Z
202
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T04:54:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7230 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8799 | 1.0 | 1132 | 3.7450 | | 3.7747 | 2.0 | 2264 | 3.7267 | | 3.7347 | 3.0 | 3396 | 3.7230 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
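Since this card reports only a cross-entropy loss, a quick derived quantity worth noting is perplexity, which for a causal language model is simply `exp(loss)` (a standard relation, not something the card itself states):

```python
import math

eval_loss = 3.7230  # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.2f}")  # ≈ 41.39
```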
T-Systems-onsite/cross-en-pt-roberta-sentence-transformer
T-Systems-onsite
2023-07-13T13:09:58Z
20
0
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "pt", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - en - pt license: mit tags: - sentence_embedding ---
peft-internal-testing/tiny_OPTForQuestionAnswering-lora
peft-internal-testing
2023-07-13T13:09:34Z
25,146
0
peft
[ "peft", "region:us" ]
null
2023-07-13T13:09:33Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
zohaib99k/QnA_model_training
zohaib99k
2023-07-13T13:04:41Z
121
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-11T04:12:35Z
--- license: other --- LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details. # LLaMA Model Card ## Model details **Organization developing the model** The FAIR team of Meta AI. **Model date** LLaMA was trained between December 2022 and February 2023. **Model version** This is version 1 of the model. **Model type** LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. **Paper or resources for more information** More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/. **Citations details** https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ **License** Non-commercial bespoke license **Where to send questions or comments about the model** Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue. ## Intended use **Primary intended uses** The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those, evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations. **Primary intended users** The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence. **Out-of-scope use cases** LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. 
In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers. ## Factors **Relevant factors** One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model. **Evaluation factors** As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model. ## Metrics **Model performance measures** We use the following measure to evaluate the model: - Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs, - Exact match for question answering, - The toxicity score from Perspective API on RealToxicityPrompts. **Decision thresholds** Not applicable. **Approaches to uncertainty and variability** Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training. ## Evaluation datasets The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. 
## Training dataset The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing. ## Quantitative analysis Hyperparameters for the model architecture <table> <thead> <tr> <th>LLaMA</th> <th colspan=6>Model hyper parameters </th> </tr> <tr> <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th> </tr> </thead> <tbody> <tr> <th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T</th> </tr> <tr> <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th> </tr> <tr> <th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th> </tr> <tr> <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th> </tr> </tbody> </table> *Table 1 - Summary of LLaMA Model Hyperparameters* We present our results on eight standard common sense reasoning benchmarks in the table below. 
<table> <thead> <tr> <th>LLaMA</th> <th colspan=9>Reasoning tasks </th> </tr> <tr> <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th> </tr> </thead> <tbody> <tr> <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th> </tr> <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th> </tr> <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th> </tr> <tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr> </tbody> </table> *Table 2 - Summary of LLaMA Model Performance on Reasoning tasks* We present our results on bias in the table below. Note that a lower value is better, indicating lower bias. | No | Category | FAIR LLM | | --- | -------------------- | -------- | | 1 | Gender | 70.6 | | 2 | Religion | 79 | | 3 | Race/Color | 57 | | 4 | Sexual orientation | 81 | | 5 | Age | 70.1 | | 6 | Nationality | 64.2 | | 7 | Disability | 66.7 | | 8 | Physical appearance | 77.8 | | 9 | Socioeconomic status | 71.5 | | | LLaMA Average | 66.6 | *Table 3 - Summary bias of our model output* ## Ethical considerations **Data** The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data. **Human life** The model is not intended to inform decisions about matters central to human life, and should not be used in such a way. **Mitigations** We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier. 
**Risks and harms** Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard. **Use cases** LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
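The architecture hyperparameters in Table 1 can be sanity-checked against the headline parameter counts with the standard dense-transformer approximation (12 · n_layers · d_model² plus the embedding matrix). A rough sketch follows; the 32,000-token vocabulary is an assumption (LLaMA's tokenizer size), not stated in the card, and the formula ignores biases, layer norms, and LLaMA's SwiGLU MLP sizing, so expect agreement only to within a few percent.

```python
# Rough parameter-count check for the hyperparameters in Table 1.
# Approximation: params ≈ n_layers * 12 * d_model**2 + vocab_size * d_model.
# vocab_size=32000 is an assumption, not taken from the card.

def approx_params(n_layers: int, d_model: int, vocab_size: int = 32000) -> int:
    attention = 4 * d_model ** 2   # q, k, v, o projections per layer
    mlp = 8 * d_model ** 2         # two d_model x 4*d_model matrices
    embeddings = vocab_size * d_model
    return n_layers * (attention + mlp) + embeddings

for label, layers, d in [("7B", 32, 4096), ("13B", 40, 5120),
                         ("33B", 60, 6656), ("65B", 80, 8192)]:
    print(f"{label}: ~{approx_params(layers, d) / 1e9:.2f}B parameters")
```

The estimates land within a few percent of the advertised sizes (e.g. ~6.6B for the 7B row), which is the expected accuracy of this back-of-the-envelope formula.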
mistdmar/sd-class-butterflies-32
mistdmar
2023-07-13T12:51:55Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-07-13T12:51:34Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('mistdmar/sd-class-butterflies-32') image = pipeline().images[0] image ```
youlun77/finetuning-sentiment-model-25000-samples
youlun77
2023-07-13T12:42:09Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T09:25:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: finetuning-sentiment-model-25000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-25000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2321 - eval_accuracy: 0.932 - eval_f1: 0.9327 - eval_runtime: 421.5501 - eval_samples_per_second: 59.305 - eval_steps_per_second: 3.708 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
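The card above lists `lr_scheduler_type: linear` with a 2e-05 peak rate and no warmup. In `transformers`, the linear schedule ramps the learning rate from 0 to the base rate over the warmup steps, then decays it linearly to 0 at the final step. A minimal sketch of that schedule in pure Python; the step counts below are hypothetical (derived from 25,000 samples at batch size 16 for 2 epochs), since the card does not state them.

```python
# Sketch of the linear LR schedule used by the Trainer
# (transformers' get_linear_schedule_with_warmup), assuming:
#   warmup phase: lr = base_lr * step / warmup_steps
#   decay phase:  lr = base_lr * (total_steps - step) / (total_steps - warmup_steps)

def linear_lr(step: int, base_lr: float, warmup_steps: int, total_steps: int) -> float:
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

base_lr, warmup, total = 2e-05, 0, 3126   # hypothetical: 25000 / 16 * 2 epochs
print(linear_lr(0, base_lr, warmup, total))           # full LR (no warmup here)
print(linear_lr(total // 2, base_lr, warmup, total))  # roughly half the LR mid-run
print(linear_lr(total, base_lr, warmup, total))       # decayed to 0 at the end
```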
dsfsi/nr-en-m2m100-gov
dsfsi
2023-07-13T12:40:43Z
104
0
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "m2m100", "translation", "africanlp", "african", "ndebele", "nr", "en", "arxiv:2303.03750", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-22T10:22:19Z
--- license: cc-by-4.0 language: - nr - en pipeline_tag: text2text-generation tags: - m2m100 - translation - africanlp - african - ndebele --- # [nr-en] South Ndebele to English Translation Model based on M2M100 and The South African Gov-ZA multilingual corpus Model created from South Ndebele to English aligned sentences from [The South African Gov-ZA multilingual corpus](https://github.com/dsfsi/gov-za-multilingual) The data set contains cabinet statements from the South African government, maintained by the Government Communication and Information System (GCIS). Data was scraped from the governments website: https://www.gov.za/cabinet-statements ## Authors - Vukosi Marivate - [@vukosi](https://twitter.com/vukosi) - Matimba Shingange - Richard Lastrucci - Isheanesu Joseph Dzingirai - Jenalea Rajab ## BibTeX entry and citation info ``` @inproceedings{lastrucci-etal-2023-preparing, title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora", author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate", booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)", month = may, year = "2023", address = "Dubrovnik, Croatia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.rail-1.3", pages = "18--25" } ``` [Paper - Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/abs/2303.03750)
Ne01ynx/GXA-temp
Ne01ynx
2023-07-13T12:31:53Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T12:25:06Z
<p><strong><font size="5">Information</font></strong></p> GPT4-X-Alpaca 30B 4-bit working with GPTQ versions used in Oobabooga's Text Generation Webui and KoboldAI. <p>There are 2 quantized versions, one is using <i>--true-sequential</i> and <i>--act-order</i> optimizations, and the other is using <i>--true-sequential</i> and <i>--groupsize 128</i> optimizations.</p> This was made using Chansung's GPT4-Alpaca Lora: https://huggingface.co/chansung/gpt4-alpaca-lora-30b <p><strong>Training Parameters</strong></p> <ul><li>num_epochs=10</li><li>cutoff_len=512</li><li>group_by_length</li><li>lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'</li><li>lora_r=16</li><li>micro_batch_size=8</li></ul> <p><strong><font size="5">Benchmarks</font></strong></p> <p><strong><font size="4">--true-sequential --act-order</font></strong></p> <strong>Wikitext2</strong>: 4.481280326843262 <strong>Ptb-New</strong>: 8.539161682128906 <strong>C4-New</strong>: 6.451964855194092 <strong>Note</strong>: This version does not use <i>--groupsize 128</i>, therefore evaluations are minimally higher. However, this version allows fitting the whole model at full context using only 24GB VRAM. <p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p> <strong>Wikitext2</strong>: 4.285132884979248 <strong>Ptb-New</strong>: 8.34856128692627 <strong>C4-New</strong>: 6.292652130126953 <strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better evaluations. However, it consumes more VRAM.
phatjk/bloomz-lora-vi-QA-NLLB-viquad_ver2
phatjk
2023-07-13T12:24:58Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-13T12:24:55Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
vilsonrodrigues/falcon-7b-instruct-sharded
vilsonrodrigues
2023-07-13T12:22:04Z
6,899
26
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-09T01:06:04Z
--- datasets: - tiiuae/falcon-refinedweb language: - en inference: true widget: - text: "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?" example_title: "Abu Dhabi Trip" - text: "What's the Everett interpretation of quantum mechanics?" example_title: "Q/A: Quantum & Answers" - text: "Give me a list of the top 10 dive sites you would recommend around the world." example_title: "Diving Top 10" - text: "Can you tell me more about deep-water soloing?" example_title: "Extreme sports" - text: "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?" example_title: "Twitter Helper" - text: "What are the responsibilities of a Chief Llama Officer?" example_title: "Trendy Jobs" license: apache-2.0 --- # Resharded Resharded version of https://huggingface.co/tiiuae/falcon-7b-instruct for low RAM environments (e.g. Colab, Kaggle) in safetensors Tutorial: https://medium.com/@vilsonrodrigues/run-your-private-llm-falcon-7b-instruct-with-less-than-6gb-of-gpu-using-4-bit-quantization-ff1d4ffbabcc --- # ✨ Falcon-7B-Instruct **Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.** *Paper coming soon 😊.* 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)! ## Why use Falcon-7B-Instruct? 
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).** * **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)). ⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`. 💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). 🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother! ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. 
Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!** For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon). You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct. # Model Card for Falcon-7B-Instruct ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English and French; - **License:** Apache 2.0; - **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). ### Model Source - **Paper:** *coming soon*. ## Uses ### Direct Use Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets. ### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. ## Bias, Risks, and Limitations Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use. 
## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Training Details ### Training Data Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets. | **Data source** | **Fraction** | **Tokens** | **Description** | |--------------------|--------------|------------|-----------------------------------| | [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat | | [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct | | [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct | | [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl | The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer. ## Evaluation *Paper coming soon.* See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results. Note that this model variant is not optimized for NLP benchmarks. ## Technical Specifications For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b). 
### Model Architecture and Objective Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences: * **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864)); * **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)); * **Decoder-block:** parallel attention/MLP with a single layer norm. | **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 32 | | | `d_model` | 4544 | Increased to compensate for multiquery | | `head_dim` | 64 | Reduced to optimise for FlashAttention | | Vocabulary | 65024 | | | Sequence length | 2048 | | ### Compute Infrastructure #### Hardware Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances. #### Software Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.). ## Citation *Paper coming soon* 😊. In the meantime, you can use the following information to cite: ``` @article{falcon40b, title={{Falcon-40B}: an open large language model with state-of-the-art performance}, author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme}, year={2023} } ``` To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116). 
``` @article{refinedweb, title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only}, author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay}, journal={arXiv preprint arXiv:2306.01116}, eprint={2306.01116}, eprinttype = {arXiv}, url={https://arxiv.org/abs/2306.01116}, year={2023} } ``` ## License Falcon-7B-Instruct is made available under the Apache 2.0 license. ## Contact [email protected]
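The architecture section of the card above lists rotary positional embeddings (RoPE) with `head_dim` 64. As an illustration only (not Falcon's actual implementation), here is a minimal numpy sketch of the split-half RoPE variant, showing the two properties that make it attractive: rotation preserves vector norms, and query-key dot products depend only on relative position.

```python
# Minimal RoPE sketch: each channel pair is rotated by a position-dependent
# angle. head_dim=64 matches the card; everything else is illustrative.
import numpy as np

def rope(x: np.ndarray, position: int, base: float = 10000.0) -> np.ndarray:
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) * 2.0 / d)  # per-pair rotation frequencies
    angles = position * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    # 2D rotation applied to each (x1_i, x2_i) channel pair
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.random.default_rng(0).standard_normal(64)
q_rot = rope(q, position=5)
# Rotations are unitary, so the norm is unchanged (prints True):
print(np.allclose(np.linalg.norm(q), np.linalg.norm(q_rot)))
```

Because each pair is rotated by position·frequency, the dot product of a rotated query at position m with a rotated key at position n depends only on n − m, which is what lets attention scores encode relative position.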
Bimantara/lcb3
Bimantara
2023-07-13T12:21:21Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T12:20:28Z
--- license: creativeml-openrail-m ---
Sasagi/Remusuzumori
Sasagi
2023-07-13T12:20:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T12:12:51Z
--- license: creativeml-openrail-m ---
jordyvl/vit-tiny_tobacco3482_dualsimkd_
jordyvl
2023-07-13T12:19:30Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-13T10:55:18Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-tiny_tobacco3482_dualsimkd_ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-tiny_tobacco3482_dualsimkd_ This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1401 - Accuracy: 0.385 - Brier Loss: 0.8709 - Nll: 8.8462 - F1 Micro: 0.3850 - F1 Macro: 0.1979 - Ece: 0.3606 - Aurc: 0.3874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 100 | 0.5117 | 0.04 | 0.9009 | 19.1664 | 0.04 | 0.0077 | 0.1344 | 0.9445 | | No log | 2.0 | 200 | 0.3168 | 0.05 | 0.8997 | 15.0313 | 0.0500 | 0.0095 | 0.1344 | 0.8364 | | No log | 3.0 | 300 | 0.2703 | 0.18 | 0.8978 | 9.6860 | 0.18 | 0.0305 | 0.2180 | 0.7731 | | No log | 4.0 | 400 | 0.2266 | 0.18 | 0.8952 | 12.0957 | 0.18 | 0.0305 | 0.2223 | 0.7993 | | 1.1219 | 5.0 | 500 | 0.1687 | 0.18 | 0.8951 | 12.7136 | 0.18 | 0.0305 | 0.2215 | 0.7713 | | 1.1219 | 6.0 | 600 | 0.1331 | 0.165 | 0.8956 | 12.6737 | 0.165 | 0.0284 | 0.2044 | 0.7829 
| | 1.1219 | 7.0 | 700 | 0.1139 | 0.18 | 0.8960 | 12.6380 | 0.18 | 0.0305 | 0.2283 | 0.7875 | | 1.1219 | 8.0 | 800 | 0.1143 | 0.18 | 0.8963 | 12.6385 | 0.18 | 0.0306 | 0.2183 | 0.7703 | | 1.1219 | 9.0 | 900 | 0.1246 | 0.18 | 0.8966 | 12.5389 | 0.18 | 0.0305 | 0.2223 | 0.7726 | | 0.0694 | 10.0 | 1000 | 0.1262 | 0.18 | 0.8961 | 12.6316 | 0.18 | 0.0305 | 0.2271 | 0.7894 | | 0.0694 | 11.0 | 1100 | 0.1186 | 0.155 | 0.8961 | 12.6309 | 0.155 | 0.0268 | 0.2169 | 0.6418 | | 0.0694 | 12.0 | 1200 | 0.1290 | 0.18 | 0.8960 | 12.6360 | 0.18 | 0.0305 | 0.2272 | 0.8014 | | 0.0694 | 13.0 | 1300 | 0.1202 | 0.18 | 0.8959 | 12.6644 | 0.18 | 0.0305 | 0.2274 | 0.7910 | | 0.0694 | 14.0 | 1400 | 0.1341 | 0.18 | 0.8960 | 12.6667 | 0.18 | 0.0305 | 0.2273 | 0.7916 | | 0.0505 | 15.0 | 1500 | 0.1234 | 0.18 | 0.8961 | 12.6653 | 0.18 | 0.0305 | 0.2261 | 0.7819 | | 0.0505 | 16.0 | 1600 | 0.1375 | 0.18 | 0.8960 | 12.6951 | 0.18 | 0.0305 | 0.2283 | 0.7929 | | 0.0505 | 17.0 | 1700 | 0.1249 | 0.18 | 0.8959 | 12.7041 | 0.18 | 0.0305 | 0.2262 | 0.7820 | | 0.0505 | 18.0 | 1800 | 0.1263 | 0.18 | 0.8964 | 12.6096 | 0.18 | 0.0305 | 0.2228 | 0.7900 | | 0.0505 | 19.0 | 1900 | 0.1243 | 0.18 | 0.8961 | 12.6667 | 0.18 | 0.0305 | 0.2229 | 0.7896 | | 0.0483 | 20.0 | 2000 | 0.1246 | 0.18 | 0.8960 | 12.6285 | 0.18 | 0.0305 | 0.2172 | 0.7913 | | 0.0483 | 21.0 | 2100 | 0.1218 | 0.18 | 0.8961 | 12.6375 | 0.18 | 0.0305 | 0.2250 | 0.8003 | | 0.0483 | 22.0 | 2200 | 0.1228 | 0.18 | 0.8964 | 12.5765 | 0.18 | 0.0305 | 0.2258 | 0.7938 | | 0.0483 | 23.0 | 2300 | 0.1270 | 0.18 | 0.8963 | 12.6332 | 0.18 | 0.0305 | 0.2239 | 0.8055 | | 0.0483 | 24.0 | 2400 | 0.1303 | 0.18 | 0.8963 | 12.5914 | 0.18 | 0.0305 | 0.2270 | 0.8006 | | 0.0484 | 25.0 | 2500 | 0.1234 | 0.18 | 0.8960 | 12.6429 | 0.18 | 0.0305 | 0.2208 | 0.7990 | | 0.0484 | 26.0 | 2600 | 0.1313 | 0.18 | 0.8965 | 12.5721 | 0.18 | 0.0305 | 0.2205 | 0.8069 | | 0.0484 | 27.0 | 2700 | 0.1314 | 0.18 | 0.8963 | 12.5982 | 0.18 | 0.0305 | 0.2247 | 0.8110 | | 0.0484 | 28.0 | 2800 | 
0.1326 | 0.18 | 0.8962 | 12.6539 | 0.18 | 0.0305 | 0.2143 | 0.8083 | | 0.0484 | 29.0 | 2900 | 0.1337 | 0.18 | 0.8964 | 12.5814 | 0.18 | 0.0305 | 0.2225 | 0.8106 | | 0.0473 | 30.0 | 3000 | 0.1369 | 0.18 | 0.8962 | 12.6021 | 0.18 | 0.0305 | 0.2258 | 0.8095 | | 0.0473 | 31.0 | 3100 | 0.1295 | 0.18 | 0.8958 | 12.6587 | 0.18 | 0.0305 | 0.2273 | 0.8104 | | 0.0473 | 32.0 | 3200 | 0.1343 | 0.18 | 0.8959 | 12.6740 | 0.18 | 0.0305 | 0.2220 | 0.8119 | | 0.0473 | 33.0 | 3300 | 0.1359 | 0.18 | 0.8960 | 12.6790 | 0.18 | 0.0305 | 0.2273 | 0.8134 | | 0.0473 | 34.0 | 3400 | 0.1367 | 0.18 | 0.8961 | 12.6336 | 0.18 | 0.0305 | 0.2228 | 0.8159 | | 0.0476 | 35.0 | 3500 | 0.1378 | 0.18 | 0.8963 | 12.6119 | 0.18 | 0.0305 | 0.2270 | 0.8172 | | 0.0476 | 36.0 | 3600 | 0.1286 | 0.18 | 0.8961 | 12.6340 | 0.18 | 0.0305 | 0.2218 | 0.8148 | | 0.0476 | 37.0 | 3700 | 0.1333 | 0.18 | 0.8960 | 12.6328 | 0.18 | 0.0305 | 0.2207 | 0.8164 | | 0.0476 | 38.0 | 3800 | 0.1328 | 0.18 | 0.8963 | 12.6294 | 0.18 | 0.0305 | 0.2196 | 0.8180 | | 0.0476 | 39.0 | 3900 | 0.1344 | 0.18 | 0.8961 | 12.6417 | 0.18 | 0.0305 | 0.2207 | 0.8209 | | 0.0474 | 40.0 | 4000 | 0.1362 | 0.18 | 0.8959 | 12.6775 | 0.18 | 0.0305 | 0.2187 | 0.8198 | | 0.0474 | 41.0 | 4100 | 0.1340 | 0.18 | 0.8961 | 12.6746 | 0.18 | 0.0305 | 0.2249 | 0.8215 | | 0.0474 | 42.0 | 4200 | 0.1308 | 0.18 | 0.8958 | 12.6621 | 0.18 | 0.0305 | 0.2208 | 0.8215 | | 0.0474 | 43.0 | 4300 | 0.1372 | 0.18 | 0.8960 | 12.6133 | 0.18 | 0.0305 | 0.2249 | 0.8204 | | 0.0474 | 44.0 | 4400 | 0.1436 | 0.18 | 0.8963 | 12.6014 | 0.18 | 0.0305 | 0.2280 | 0.8201 | | 0.0472 | 45.0 | 4500 | 0.1374 | 0.18 | 0.8960 | 12.6316 | 0.18 | 0.0305 | 0.2228 | 0.8193 | | 0.0472 | 46.0 | 4600 | 0.1261 | 0.18 | 0.8957 | 12.6840 | 0.18 | 0.0305 | 0.2251 | 0.8220 | | 0.0472 | 47.0 | 4700 | 0.1340 | 0.18 | 0.8956 | 12.6704 | 0.18 | 0.0305 | 0.2251 | 0.8221 | | 0.0472 | 48.0 | 4800 | 0.1320 | 0.18 | 0.8959 | 12.6111 | 0.18 | 0.0305 | 0.2227 | 0.8203 | | 0.0472 | 49.0 | 4900 | 0.1336 | 0.18 | 0.8956 | 
12.6838 | 0.18 | 0.0305 | 0.2294 | 0.8209 | | 0.0474 | 50.0 | 5000 | 0.1342 | 0.18 | 0.8959 | 12.3426 | 0.18 | 0.0305 | 0.2292 | 0.8218 | | 0.0474 | 51.0 | 5100 | 0.1362 | 0.18 | 0.8957 | 12.3611 | 0.18 | 0.0305 | 0.2261 | 0.8224 | | 0.0474 | 52.0 | 5200 | 0.1368 | 0.18 | 0.8958 | 11.5617 | 0.18 | 0.0305 | 0.2205 | 0.8222 | | 0.0474 | 53.0 | 5300 | 0.1391 | 0.18 | 0.8955 | 11.5519 | 0.18 | 0.0305 | 0.2312 | 0.8225 | | 0.0474 | 54.0 | 5400 | 0.1366 | 0.18 | 0.8947 | 12.2068 | 0.18 | 0.0305 | 0.2231 | 0.8231 | | 0.047 | 55.0 | 5500 | 0.1355 | 0.19 | 0.8943 | 11.5922 | 0.19 | 0.0641 | 0.2299 | 0.8248 | | 0.047 | 56.0 | 5600 | 0.1386 | 0.17 | 0.8930 | 11.8204 | 0.17 | 0.0705 | 0.2240 | 0.5968 | | 0.047 | 57.0 | 5700 | 0.1364 | 0.33 | 0.8936 | 11.0092 | 0.33 | 0.1878 | 0.3195 | 0.4381 | | 0.047 | 58.0 | 5800 | 0.1368 | 0.27 | 0.8923 | 11.0463 | 0.27 | 0.1541 | 0.2874 | 0.5187 | | 0.047 | 59.0 | 5900 | 0.1328 | 0.325 | 0.8915 | 10.5269 | 0.325 | 0.1702 | 0.3247 | 0.4469 | | 0.0469 | 60.0 | 6000 | 0.1402 | 0.235 | 0.8945 | 9.2940 | 0.235 | 0.1141 | 0.2558 | 0.6612 | | 0.0469 | 61.0 | 6100 | 0.1387 | 0.345 | 0.8913 | 9.2678 | 0.345 | 0.1657 | 0.3422 | 0.4100 | | 0.0469 | 62.0 | 6200 | 0.1386 | 0.31 | 0.8891 | 10.1100 | 0.31 | 0.1637 | 0.3134 | 0.4609 | | 0.0469 | 63.0 | 6300 | 0.1379 | 0.34 | 0.8892 | 9.1965 | 0.34 | 0.1582 | 0.3388 | 0.4344 | | 0.0469 | 64.0 | 6400 | 0.1375 | 0.335 | 0.8876 | 9.2252 | 0.335 | 0.1624 | 0.3356 | 0.4239 | | 0.0469 | 65.0 | 6500 | 0.1357 | 0.345 | 0.8868 | 9.1887 | 0.345 | 0.1659 | 0.3361 | 0.4061 | | 0.0469 | 66.0 | 6600 | 0.1394 | 0.345 | 0.8850 | 9.1819 | 0.345 | 0.1641 | 0.3398 | 0.4265 | | 0.0469 | 67.0 | 6700 | 0.1410 | 0.34 | 0.8850 | 9.1158 | 0.34 | 0.1590 | 0.3328 | 0.4302 | | 0.0469 | 68.0 | 6800 | 0.1387 | 0.295 | 0.8814 | 9.2693 | 0.295 | 0.1374 | 0.3039 | 0.4572 | | 0.0469 | 69.0 | 6900 | 0.1385 | 0.335 | 0.8814 | 9.1526 | 0.335 | 0.1668 | 0.3324 | 0.4205 | | 0.0463 | 70.0 | 7000 | 0.1392 | 0.34 | 0.8814 | 9.1159 | 0.34 | 0.1546 
| 0.3405 | 0.4263 | | 0.0463 | 71.0 | 7100 | 0.1418 | 0.35 | 0.8820 | 9.1363 | 0.35 | 0.1692 | 0.3436 | 0.4019 | | 0.0463 | 72.0 | 7200 | 0.1379 | 0.35 | 0.8791 | 9.0483 | 0.35 | 0.1726 | 0.3402 | 0.4226 | | 0.0463 | 73.0 | 7300 | 0.1405 | 0.33 | 0.8760 | 9.3563 | 0.33 | 0.1731 | 0.3207 | 0.4307 | | 0.0463 | 74.0 | 7400 | 0.1401 | 0.31 | 0.8769 | 9.4413 | 0.31 | 0.1676 | 0.3099 | 0.4383 | | 0.0458 | 75.0 | 7500 | 0.1393 | 0.38 | 0.8778 | 9.0788 | 0.38 | 0.1985 | 0.3518 | 0.3976 | | 0.0458 | 76.0 | 7600 | 0.1384 | 0.39 | 0.8779 | 9.0233 | 0.39 | 0.2027 | 0.3673 | 0.4144 | | 0.0458 | 77.0 | 7700 | 0.1403 | 0.365 | 0.8818 | 9.1567 | 0.3650 | 0.1953 | 0.3518 | 0.4181 | | 0.0458 | 78.0 | 7800 | 0.1400 | 0.27 | 0.8725 | 11.0592 | 0.27 | 0.1627 | 0.2896 | 0.4809 | | 0.0458 | 79.0 | 7900 | 0.1402 | 0.375 | 0.8739 | 9.1158 | 0.375 | 0.1961 | 0.3540 | 0.3929 | | 0.0455 | 80.0 | 8000 | 0.1401 | 0.315 | 0.8722 | 9.9114 | 0.315 | 0.1771 | 0.3220 | 0.4443 | | 0.0455 | 81.0 | 8100 | 0.1378 | 0.39 | 0.8761 | 9.0128 | 0.39 | 0.2048 | 0.3642 | 0.4020 | | 0.0455 | 82.0 | 8200 | 0.1401 | 0.38 | 0.8729 | 9.1624 | 0.38 | 0.2006 | 0.3612 | 0.3924 | | 0.0455 | 83.0 | 8300 | 0.1391 | 0.38 | 0.8742 | 8.8982 | 0.38 | 0.2048 | 0.3561 | 0.3991 | | 0.0455 | 84.0 | 8400 | 0.1381 | 0.375 | 0.8734 | 9.0598 | 0.375 | 0.1901 | 0.3567 | 0.4010 | | 0.0453 | 85.0 | 8500 | 0.1398 | 0.39 | 0.8718 | 9.1407 | 0.39 | 0.2057 | 0.3693 | 0.3892 | | 0.0453 | 86.0 | 8600 | 0.1389 | 0.37 | 0.8721 | 9.3494 | 0.37 | 0.2006 | 0.3505 | 0.3914 | | 0.0453 | 87.0 | 8700 | 0.1390 | 0.395 | 0.8743 | 8.7444 | 0.395 | 0.2113 | 0.3724 | 0.3854 | | 0.0453 | 88.0 | 8800 | 0.1404 | 0.395 | 0.8739 | 8.7654 | 0.395 | 0.2134 | 0.3657 | 0.3925 | | 0.0453 | 89.0 | 8900 | 0.1409 | 0.385 | 0.8726 | 8.7763 | 0.3850 | 0.2032 | 0.3643 | 0.3963 | | 0.0451 | 90.0 | 9000 | 0.1403 | 0.39 | 0.8717 | 8.8363 | 0.39 | 0.2055 | 0.3668 | 0.3926 | | 0.0451 | 91.0 | 9100 | 0.1388 | 0.39 | 0.8719 | 9.2985 | 0.39 | 0.2099 | 0.3662 | 0.3847 | | 0.0451 
| 92.0 | 9200 | 0.1397 | 0.385 | 0.8702 | 9.4449 | 0.3850 | 0.2050 | 0.3535 | 0.3877 | | 0.0451 | 93.0 | 9300 | 0.1403 | 0.385 | 0.8709 | 8.9790 | 0.3850 | 0.1989 | 0.3473 | 0.3887 | | 0.0451 | 94.0 | 9400 | 0.1400 | 0.39 | 0.8705 | 9.1647 | 0.39 | 0.2053 | 0.3569 | 0.3865 | | 0.045 | 95.0 | 9500 | 0.1404 | 0.395 | 0.8712 | 9.1707 | 0.395 | 0.2087 | 0.3688 | 0.3815 | | 0.045 | 96.0 | 9600 | 0.1404 | 0.385 | 0.8711 | 8.6711 | 0.3850 | 0.1980 | 0.3566 | 0.3867 | | 0.045 | 97.0 | 9700 | 0.1399 | 0.39 | 0.8706 | 9.1288 | 0.39 | 0.2035 | 0.3610 | 0.3845 | | 0.045 | 98.0 | 9800 | 0.1400 | 0.385 | 0.8708 | 9.1302 | 0.3850 | 0.1982 | 0.3538 | 0.3870 | | 0.045 | 99.0 | 9900 | 0.1398 | 0.39 | 0.8712 | 8.8257 | 0.39 | 0.2002 | 0.3660 | 0.3825 | | 0.0449 | 100.0 | 10000 | 0.1401 | 0.385 | 0.8709 | 8.8462 | 0.3850 | 0.1979 | 0.3606 | 0.3874 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.12.0 - Tokenizers 0.12.1
grace-pro/xlmr-base-finetuned-hausa-2e-4
grace-pro
2023-07-13T12:14:31Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T10:57:32Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: xlmr-base-finetuned-hausa-2e-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-base-finetuned-hausa-2e-4 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2708 - Precision: 0.1719 - Recall: 0.0235 - F1: 0.0414 - Accuracy: 0.9247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2716 | 1.0 | 1312 | 0.2690 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2744 | 2.0 | 2624 | 0.2697 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2735 | 3.0 | 3936 | 0.2693 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2739 | 4.0 | 5248 | 0.2697 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2709 | 5.0 | 6560 | 0.2708 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
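As a quick sanity check on the numbers above, the reported F1 is the harmonic mean of the reported precision and recall (recomputed here from the rounded values, so it matches 0.0414 only up to rounding):

```python
# Recompute F1 from the precision and recall reported above:
# F1 = 2 * P * R / (P + R).
precision = 0.1719
recall = 0.0235

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # close to the reported 0.0414 (inputs are rounded)
```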
hoanghoavienvo/roberta-large-stage-2-v1
hoanghoavienvo
2023-07-13T12:04:50Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T11:20:44Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: roberta-large-stage-2-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-stage-2-v1 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4892 - Accuracy: 0.83 - F1: 0.8982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 469 | 0.3906 | 0.8617 | 0.9225 | | 0.4236 | 2.0 | 938 | 0.3811 | 0.865 | 0.9232 | | 0.3352 | 3.0 | 1407 | 0.4892 | 0.83 | 0.8982 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
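Note that the reported epoch-3 numbers come from the last checkpoint, while the table shows validation loss bottoming out at epoch 2 (0.3811, accuracy 0.865). A minimal sketch of picking the best epoch from the history (values copied from the table above):

```python
# epoch -> validation loss, copied from the results table above
val_loss = {1: 0.3906, 2: 0.3811, 3: 0.4892}

best_epoch = min(val_loss, key=val_loss.get)
print(best_epoch, val_loss[best_epoch])  # -> 2 0.3811
```

With `transformers`, the same effect is usually achieved by setting `load_best_model_at_end=True` in `TrainingArguments`.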
AllenQ/model_archive
AllenQ
2023-07-13T11:53:36Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-13T11:30:15Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-AllenQ/model_archive These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below. prompt: car ![images_0](./images_0.png)
pigliketoeat/distilroberta-base-finetuned-wikitext2
pigliketoeat
2023-07-13T11:41:14Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-13T11:09:53Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8349 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0852 | 1.0 | 2406 | 1.9234 | | 1.992 | 2.0 | 4812 | 1.8828 | | 1.9603 | 3.0 | 7218 | 1.8223 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
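For masked-language-model fine-tuning like this, the evaluation loss is conventionally summarized as a perplexity, `exp(loss)`. A quick back-of-the-envelope conversion of the final loss above (perplexity is not a number the card itself reports):

```python
import math

eval_loss = 1.8349  # final evaluation loss from the table above
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # -> 6.26
```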
IbrahemVX2000/kandiskyai2-1
IbrahemVX2000
2023-07-13T11:29:14Z
0
0
null
[ "text-to-image", "kandinsky", "license:apache-2.0", "region:us" ]
text-to-image
2023-07-13T11:27:16Z
--- license: apache-2.0 prior: kandinsky-community/kandinsky-2-1-prior tags: - text-to-image - kandinsky --- # Kandinsky 2.1 Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion while introducing some new ideas. It uses the CLIP model as a text and image encoder, and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The Kandinsky model is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey) and [Denis Dimitrov](https://github.com/denndimitrov) ## Usage Kandinsky 2.1 is available in diffusers! ```bash pip install diffusers transformers accelerate ``` ### Text to image ```python from diffusers import DiffusionPipeline import torch pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16) pipe_prior.to("cuda") t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) t2i_pipe.to("cuda") prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" negative_prompt = "low quality, bad quality" image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt, guidance_scale=1.0).to_tuple() image = t2i_pipe(prompt, negative_prompt=negative_prompt, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] image.save("cheeseburger_monster.png") ``` ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/cheeseburger.png) ### Text Guided Image-to-Image Generation ```python from diffusers import KandinskyImg2ImgPipeline, 
KandinskyPriorPipeline import torch from PIL import Image import requests from io import BytesIO url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" response = requests.get(url) original_image = Image.open(BytesIO(response.content)).convert("RGB") original_image = original_image.resize((768, 512)) # create prior pipe_prior = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 ) pipe_prior.to("cuda") # create img2img pipeline pipe = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) pipe.to("cuda") prompt = "A fantasy landscape, Cinematic lighting" negative_prompt = "low quality, bad quality" image_embeds, negative_image_embeds = pipe_prior(prompt, negative_prompt).to_tuple() out = pipe( prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3, ) out.images[0].save("fantasy_land.png") ``` ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/img2img_fantasyland.png) ### Interpolate ```python from diffusers import KandinskyPriorPipeline, KandinskyPipeline from diffusers.utils import load_image import PIL import torch pipe_prior = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 ) pipe_prior.to("cuda") img1 = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" ) img2 = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/starry_night.jpeg" ) # add all the conditions we want to interpolate, can be either text or image images_texts = ["a cat", img1, img2] # specify the weights for each condition in images_texts weights = [0.3, 0.3, 0.4] # We can leave the prompt empty prompt 
= "" prior_out = pipe_prior.interpolate(images_texts, weights) pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) pipe.to("cuda") image = pipe(prompt, **prior_out, height=768, width=768).images[0] image.save("starry_cat.png") ``` ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/starry_cat.png) ## Model Architecture ### Overview Kandinsky 2.1 is a text-conditional diffusion model based on unCLIP and latent diffusion, composed of a transformer-based image prior model, a unet diffusion model, and a decoder. The model architectures are illustrated in the figure below - the chart on the left describes the process to train the image prior model, the figure in the center is the text-to-image generation process, and the figure on the right is image interpolation. <p float="left"> <img src="https://raw.githubusercontent.com/ai-forever/Kandinsky-2/main/content/kandinsky21.png"/> </p> Specifically, the image prior model was trained on CLIP text and image embeddings generated with a pre-trained [mCLIP model](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-L-14). The trained image prior model is then used to generate mCLIP image embeddings for input text prompts. Both the input text prompts and its mCLIP image embeddings are used in the diffusion process. A [MoVQGAN](https://openreview.net/forum?id=Qb-AoSw4Jnm) model acts as the final block of the model, which decodes the latent representation into an actual image. ### Details The image prior training of the model was performed on the [LAION Improved Aesthetics dataset](https://huggingface.co/datasets/bhargavsdesai/laion_improved_aesthetics_6.5plus_with_images), and then fine-tuning was performed on the [LAION HighRes data](https://huggingface.co/datasets/laion/laion-high-resolution). 
The main Text2Image diffusion model was trained on the basis of 170M text-image pairs from the [LAION HighRes dataset](https://huggingface.co/datasets/laion/laion-high-resolution) (an important condition was the presence of images with a resolution of at least 768x768). The use of 170M pairs is due to the fact that we kept the UNet diffusion block from Kandinsky 2.0, which allowed us not to train it from scratch. Further, at the stage of fine-tuning, a dataset of 2M very high-quality high-resolution images with descriptions (COYO, anime, landmarks_russia, and a number of others), separately collected from open sources, was used. ### Evaluation We quantitatively measure the performance of Kandinsky 2.1 on the COCO_30k dataset, in zero-shot mode. The table below presents FID. FID metric values for generative models on COCO_30k | | FID (30k)| |:------|----:| | eDiff-I (2022) | 6.95 | | Imagen (2022) | 7.27 | | Kandinsky 2.1 (2023) | 8.21 | | Stable Diffusion 2.1 (2022) | 8.59 | | GigaGAN, 512x512 (2023) | 9.09 | | DALL-E 2 (2022) | 10.39 | | GLIDE (2022) | 12.24 | | Kandinsky 1.0 (2022) | 15.40 | | DALL-E (2021) | 17.89 | | Kandinsky 2.0 (2022) | 20.00 | | GLIGEN (2022) | 21.04 | For more information, please refer to the upcoming technical report. ## BibTeX If you find this repository useful in your research, please cite: ``` @misc{kandinsky2.1, title = {kandinsky 2.1}, author = {Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, Denis Dimitrov}, year = {2023}, howpublished = {}, } ```
offlinehq/autotrain-slovenian-swear-words-74310139575
offlinehq
2023-07-13T11:28:35Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "autotrain", "unk", "dataset:offlinehq/autotrain-data-slovenian-swear-words", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T11:22:57Z
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain" datasets: - offlinehq/autotrain-data-slovenian-swear-words co2_eq_emissions: emissions: 3.733207533466129 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 74310139575 - CO2 Emissions (in grams): 3.7332 ## Validation Metrics - Loss: 0.575 - Accuracy: 0.702 - Precision: 0.682 - Recall: 0.708 - AUC: 0.764 - F1: 0.695 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/offlinehq/autotrain-slovenian-swear-words-74310139575 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("offlinehq/autotrain-slovenian-swear-words-74310139575", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
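One note on the Python snippet above: `outputs.logits` are raw scores, so turning them into the two class probabilities requires a softmax. A minimal, dependency-free sketch (the logit values are made up for illustration — actual outputs depend on the input text):

```python
import math

def softmax(logits):
    # Shift by the max logit for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.2, -0.7]  # hypothetical outputs.logits for one input
probs = softmax(logits)
print([round(p, 3) for p in probs])  # the two values sum to 1
```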
EAV123/ppo-LunarLander-v2
EAV123
2023-07-13T11:25:12Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T11:24:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.70 +/- 15.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
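For context on the score above: LunarLander-v2 is commonly treated as solved at an average reward of 200, and the Hub leaderboard ranks agents by `mean_reward - std_reward`. A quick check with the numbers copied from the model-index above:

```python
mean_reward = 260.70
std_reward = 15.93
solved_bar = 200.0  # commonly used "solved" threshold for LunarLander-v2

score = mean_reward - std_reward  # conservative leaderboard-style score
print(round(score, 2), score >= solved_bar)  # -> 244.77 True
```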
CanSukru/YORUvoicemodel
CanSukru
2023-07-13T11:23:45Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T11:12:34Z
--- license: creativeml-openrail-m ---
Graphcore/mt5-small-ipu
Graphcore
2023-07-13T11:20:36Z
4
0
null
[ "optimum_graphcore", "arxiv:1910.10683", "arxiv:2010.11934", "license:apache-2.0", "region:us" ]
null
2023-05-19T15:01:28Z
--- license: apache-2.0 --- # Graphcore/mt5-small-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description Multilingual Text-to-Text Transfer Transformer (mT5) is the multilingual variant of [T5](https://arxiv.org/abs/1910.10683). T5 is a Transformer based model that uses a text-to-text approach for translation, question answering, and classification. It introduces a unified framework that converts all text-based language problems into a text-to-text format for transfer learning for NLP. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks. 
mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Paper link :[mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the mT5 Small model (e.g. [HuggingFace/google/mt5-small](https://huggingface.co/google/mt5-small)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/mt5-small-ipu") ```
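The text-to-text framing described above can be illustrated without any model at all — every task becomes a prefixed input string and a target string. The prefixes below follow the T5 convention and are purely illustrative; as noted, mT5 must be fine-tuned before such prompts do anything useful:

```python
# Two different NLP tasks cast into the same string-in/string-out
# interface (prefix convention borrowed from T5; illustrative only).
def to_text_to_text(task_prefix: str, text: str) -> str:
    return f"{task_prefix}: {text}"

translation = to_text_to_text("translate English to German", "The house is wonderful.")
summary = to_text_to_text("summarize", "state authorities dispatched emergency crews ...")
print(translation)
print(summary)
```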
Fixedbot/q-FrozenLake-v1-4x4-noSlippery
Fixedbot
2023-07-13T11:13:27Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T11:08:04Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Fixedbot/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
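Once the pickle is loaded as above, acting with a trained Q-table is just a per-state argmax. A minimal, self-contained sketch with a toy table (the real table lives in the downloaded model dict; the exact key names there are an assumption of the course template):

```python
# Toy Q-table: one row per state, one value per action.
qtable = [
    [0.1, 0.9],  # state 0: action 1 has the higher value
    [0.5, 0.2],  # state 1: action 0 has the higher value
]

def greedy_policy(qtable, state):
    # Pure exploitation: take the argmax over the state's action values.
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)

print(greedy_policy(qtable, 0), greedy_policy(qtable, 1))  # -> 1 0
```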
PraveenJesu/openai-whisper-medium-murf
PraveenJesu
2023-07-13T11:13:14Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-13T11:13:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
mkobos/joules-lorretta-jersey-blouse
mkobos
2023-07-13T11:06:11Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:emilianJR/HRA_hyperrealism_art", "base_model:adapter:emilianJR/HRA_hyperrealism_art", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-13T11:06:11Z
--- license: creativeml-openrail-m base_model: emilianJR/HRA_hyperrealism_art instance_prompt: Joules Lorretta Jersey Blouse tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - joules-lorretta-jersey-blouse These are LoRA adaptation weights for [emilianJR/HRA_hyperrealism_art](https://huggingface.co/emilianJR/HRA_hyperrealism_art). The weights were trained on the instance prompt "Joules Lorretta Jersey Blouse" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
jpandeinge/DialoGPT-medium-Oshiwambo-Bot
jpandeinge
2023-07-13T10:48:52Z
154
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-11T06:12:35Z
--- pipeline_tag: conversational ---
Shishir1807/Indication_Training-1
Shishir1807
2023-07-13T10:42:46Z
164
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-13T10:40:21Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.28.1 pip install accelerate==0.18.0 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="Shishir1807/Indication_Training-1", torch_dtype=torch.float16, trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, if you prefer not to use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Shishir1807/Indication_Training-1", use_fast=True, padding_side="left" ) model = AutoModelForCausalLM.from_pretrained( 
"Shishir1807/Indication_Training-1", torch_dtype=torch.float16, device_map={"": "cuda:0"} ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Shishir1807/Indication_Training-1" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) model = AutoModelForCausalLM.from_pretrained(model_name) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.0), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` GPTNeoXForCausalLM( (gpt_neox): GPTNeoXModel( (embed_in): Embedding(50304, 2560) (layers): ModuleList( (0-31): 32 x GPTNeoXLayer( (input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (attention): GPTNeoXAttention( (rotary_emb): RotaryEmbedding() (query_key_value): Linear(in_features=2560, out_features=7680, bias=True) (dense): Linear(in_features=2560, out_features=2560, 
bias=True) ) (mlp): GPTNeoXMLP( (dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True) (dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True) (act): GELUActivation() ) ) ) (final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) ) (embed_out): Linear(in_features=2560, out_features=50304, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=Shishir1807/Indication_Training-1 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. 
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
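As a sanity check that the architecture dump earlier in this card really corresponds to a ~2.8B-parameter model, the large weight matrices can be totted up from the printed shapes (biases and LayerNorm parameters are omitted — they contribute well under 0.1%):

```python
# Rough parameter count from the shapes in the architecture dump above.
vocab, hidden, ffn, layers = 50304, 2560, 10240, 32

per_layer = (
    hidden * 3 * hidden   # query_key_value: 2560 -> 7680
    + hidden * hidden     # attention output dense
    + hidden * ffn        # dense_h_to_4h
    + ffn * hidden        # dense_4h_to_h
)
total = layers * per_layer + 2 * vocab * hidden  # + embed_in and embed_out
print(f"{total / 1e9:.2f}B")  # ~2.77B, consistent with the pythia-2.8b base
```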
zlwang19/autotrain-randengq-74291139565
zlwang19
2023-07-13T10:38:00Z
112
0
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "autotrain", "summarization", "zh", "dataset:zlwang19/autotrain-data-randengq", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-07-13T10:32:56Z
--- tags: - autotrain - summarization language: - zh widget: - text: "I love AutoTrain" datasets: - zlwang19/autotrain-data-randengq co2_eq_emissions: emissions: 2.4988443809859002 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 74291139565 - CO2 Emissions (in grams): 2.4988 ## Validation Metrics - Loss: 4.728 - Rouge1: 8.502 - Rouge2: 2.226 - RougeL: 8.053 - RougeLsum: 7.996 - Gen Len: 17.022 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zlwang19/autotrain-randengq-74291139565 ```
ivivnov/ppo-Huggy
ivivnov
2023-07-13T10:36:38Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-13T10:36:25Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: ivivnov/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
madroid/autotrain-text-chat-74266139562
madroid
2023-07-13T10:25:06Z
108
1
transformers
[ "transformers", "pytorch", "safetensors", "deberta", "text-classification", "autotrain", "en", "dataset:madroid/autotrain-data-text-chat", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T10:24:08Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain" datasets: - madroid/autotrain-data-text-chat co2_eq_emissions: emissions: 0.3508472536259808 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 74266139562 - CO2 Emissions (in grams): 0.3508 ## Validation Metrics - Loss: 0.005 - Accuracy: 1.000 - Macro F1: 1.000 - Micro F1: 1.000 - Weighted F1: 1.000 - Macro Precision: 1.000 - Micro Precision: 1.000 - Weighted Precision: 1.000 - Macro Recall: 1.000 - Micro Recall: 1.000 - Weighted Recall: 1.000 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madroid/autotrain-text-chat-74266139562 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("madroid/autotrain-text-chat-74266139562", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("madroid/autotrain-text-chat-74266139562", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
sarakolding/daT5-summariser
sarakolding
2023-07-13T10:12:54Z
120
9
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "summarization", "da", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-05-30T19:12:46Z
--- language: - da tags: - summarization widget: - text: "Det nye studie Cognitive Science på Aarhus Universitet, som i år havde Østjyllands højeste adgangskrav på 11,7 i karaktergennemsnit, udklækker det første hold bachelorer til sommer. Men når de skal læse videre på kandidaten må de til udlandet, hvis ikke de vil skifte til et andet fag. Aarhus Universitet kan nemlig ikke nå at oprette en kandidat i Cognitive Science til næste sommer, hvor det første hold bachelorer er færdige. Det rammer blandt andre Julie Sohn, der startede på uddannelsen i sommeren 2015, og derfor kun mangler et år, før hun er bachelor. - Jeg synes, at det er ærgerligt, at vi som nye studerende på et populært studie ikke kan tage en kandidat i Danmark, siger hun. Bacheloruddannelsen i Cognitive Science blev oprettet af Aarhus Universitet i 2015, og uddannelsen kombinerer viden om menneskelig adfærd med avanceret statistik. Da der endnu ikke er oprettet en kandidatuddannelse indenfor dette område, har Julie Sohn i stedet mulighed for at læse en kandidatgrad i for eksempel informationsvidenskab. Hun vil dog hellere fortsætte på Cognitive Science, og derfor overvejer hun nu at læse videre i udlandet. - Det ser ud til, at det er den eneste mulighed, hvis man gerne vil læse videre på noget, der faktisk passer ind til vores studie, siger hun. Nye regler giver forsinkelse På Aarhus Universitet havde man håbet på at have kandidatuddannelsen klar, når det første hold bachelorer bliver færdige til sommer. Arbejdet er dog blevet forsinket, fordi der er kommet nye regler for, hvornår man må oprette en uddannelse, fortæller Niels Lehmann, prodekan på fakultetet Arts, som Cognitive Science hører under. Det er nogle meget dygtige studerende, der kommer ind på uddannelsen, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast. 
NIELS LEHMANN, PRODEKAN, AARHUS UNIVERSITET Tidligere skulle Danmarks Akkrediteringsinstitution se alle nye uddannelser efter i sømmene for at sikre, at kvaliteten var i orden. Nu skal uddannelsesinstitutionerne selv stå for det kvalitetstjek. Men det tjek har Aarhus Universitet endnu ikke fået grønt lys til selv at udføre, fortæller prodekanen. - Vi ville meget gerne have kunnet nå at få et udbud på kandidaten i gang i 2018, men så længe man er under institutionsakkreditering, så kan man ikke ansøge om nye uddannelser, siger han. Det er endnu usikkert, hvornår Aarhus Universitet kan oprette kandidaten i Cognitive Science. Hvis de får alle de nødvendige godkendelser, kan den tidligst være klar i 2019. Prodekan Niels Lehmann frygter, at Danmark kommer til at miste nogle af landets skarpeste studerende, hvis de er nødt til at rejse til udlandet for at gøre deres uddannelse færdig. - Det er nogle meget, meget dygtige studerende, der kommer ind på denne uddannelse, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast, siger han. Hos Danmarks Akkrediteringsinstitution forstår man godt, at universitets ansatte og studenrede ærgrer sig. - Jeg kan godt forstå, at Aarhus Universitet ærgrer sig over, at det trækker ud, og at der går noget tid, før man får mulighed for at oprette nye uddannelser, og at man ikke har fået den genvej til at oprette nye uddannelser, som ville være fuldt med, hvis man havde opnået en positiv institutionsakkreditering, siger kommunikationsansvarlig Daniel Sebastian Larsen. I år var Cognitive Science i Aarhus den uddannelse i Danmark, der havde det fjerde højeste karakterkrav - det højeste var 'AP Graduate in Marketing Management' på Erhvervsakademi Sjælland med et krav på 12,3." example_title: "Summarization" --- This repository contains a model for Danish abstractive summarisation of news articles. 
The summariser is based on a language-specific mT5-base, where the vocabulary is condensed to include tokens used in Danish and English. The model is fine-tuned using an abstractive subset of the DaNewsroom dataset (Varab & Schluter, 2020), according to the binned density categories employed in Newsroom (Grusky et al., 2019).
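As a toy illustration of what such vocabulary condensation involves (a hypothetical sketch, not the exact procedure used for this model), one keeps only the embedding rows whose token ids actually occur in the target-language corpora and remaps the surviving ids:

```python
def condense_vocab(embeddings, kept_token_ids):
    """Keep only the embedding rows for the surviving token ids and
    return the condensed table plus an old-id -> new-id mapping."""
    kept = sorted(kept_token_ids)
    id_map = {old: new for new, old in enumerate(kept)}
    condensed = [embeddings[i] for i in kept]
    return condensed, id_map

# Toy example: a 10-token vocabulary of which only 4 ids are observed
# in the Danish/English corpora (ids chosen arbitrarily here).
emb = [[float(i), float(i) + 0.5] for i in range(10)]  # (vocab_size, dim)
condensed, id_map = condense_vocab(emb, {0, 3, 7, 9})
print(len(condensed))  # 4 rows survive
print(id_map[7])       # old id 7 -> new id 2
```

On the real model the same idea is applied to the mT5 embedding and output-projection matrices, which is where most of the size reduction comes from.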
saeedehj/t5-small-finetune-cnn
saeedehj
2023-07-13T10:09:57Z
44
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-13T08:10:17Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetune-cnn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetune-cnn This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9579 - Rouge1: 24.7426 - Rouge2: 10.4667 - Rougel: 20.2334 - Rougelsum: 23.0122 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9721 | 1.0 | 2000 | 1.9608 | 25.1804 | 10.8327 | 20.5778 | 23.3974 | 19.0 | | 1.9466 | 2.0 | 4000 | 1.9549 | 25.0152 | 10.6784 | 20.4465 | 23.2601 | 19.0 | | 1.8932 | 3.0 | 6000 | 1.9515 | 25.0464 | 10.7024 | 20.3992 | 23.2249 | 19.0 | | 1.8564 | 4.0 | 8000 | 1.9489 | 25.0313 | 10.642 | 20.3601 | 23.2032 | 19.0 | | 1.862 | 5.0 | 10000 | 1.9510 | 24.9582 | 10.614 | 20.3625 | 23.1762 | 19.0 | | 1.8478 | 6.0 | 12000 | 1.9502 | 25.032 | 10.7084 | 20.4506 | 23.2435 | 19.0 | | 1.819 | 7.0 | 14000 | 1.9495 | 24.7874 | 10.4848 | 20.2893 | 23.0832 | 19.0 | | 1.7869 | 8.0 | 16000 | 1.9470 | 24.7095 | 10.4465 | 20.1705 | 22.9248 | 19.0 | | 1.8068 | 9.0 | 18000 | 1.9510 | 24.705 | 10.4407 | 20.1684 | 22.9817 | 19.0 | | 
1.768 | 10.0 | 20000 | 1.9517 | 24.6067 | 10.4281 | 20.0765 | 22.9034 | 19.0 | | 1.7713 | 11.0 | 22000 | 1.9524 | 24.6871 | 10.4126 | 20.1802 | 22.962 | 19.0 | | 1.7635 | 12.0 | 24000 | 1.9548 | 24.5998 | 10.3969 | 20.1427 | 22.9191 | 19.0 | | 1.7625 | 13.0 | 26000 | 1.9561 | 24.66 | 10.4032 | 20.1732 | 22.9256 | 19.0 | | 1.7461 | 14.0 | 28000 | 1.9551 | 24.7071 | 10.4209 | 20.1833 | 22.9803 | 19.0 | | 1.7271 | 15.0 | 30000 | 1.9558 | 24.6682 | 10.4162 | 20.198 | 22.9445 | 19.0 | | 1.7452 | 16.0 | 32000 | 1.9563 | 24.8148 | 10.4558 | 20.2123 | 23.0374 | 19.0 | | 1.7489 | 17.0 | 34000 | 1.9576 | 24.6459 | 10.3782 | 20.1213 | 22.8918 | 19.0 | | 1.724 | 18.0 | 36000 | 1.9581 | 24.7384 | 10.427 | 20.2088 | 22.9971 | 19.0 | | 1.7236 | 19.0 | 38000 | 1.9581 | 24.7366 | 10.4394 | 20.2028 | 23.0286 | 19.0 | | 1.7331 | 20.0 | 40000 | 1.9579 | 24.7426 | 10.4667 | 20.2334 | 23.0122 | 19.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
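ROUGE-1, reported in the tables above, measures unigram overlap between a generated summary and its reference; a simplified sketch (whitespace tokenization only, without the stemming and bootstrap aggregation that the evaluation libraries apply):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap F1 between two whitespace-tokenized strings."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 4))  # 0.8333
```

ROUGE-2 and ROUGE-L follow the same precision/recall/F1 pattern over bigrams and longest common subsequences, respectively.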
preetham/rpanda_lora
preetham
2023-07-13T10:08:51Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-13T09:52:28Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks panda tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - preetham/rpanda_lora These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks panda" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
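At inference, LoRA weights such as these are folded into the frozen base weights as a scaled low-rank update, `W_eff = W + (alpha / r) * (B @ A)`; a pure-Python toy sketch (illustrative shapes only, not the diffusers implementation):

```python
def matmul(A, B):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, A, B, alpha, r):
    """Effective weight: W + (alpha / r) * (B @ A)."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy shapes: W is 2x2, rank-1 update with A (r x in) and B (out x r).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]
B = [[0.5], [0.25]]
W_eff = apply_lora(W, A, B, alpha=1.0, r=1)
print(W_eff)  # [[1.5, 1.0], [0.25, 1.5]]
```

Because only the small `A` and `B` matrices are trained, an adapter like this one is far lighter to store and share than a full fine-tuned UNet.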
ZoeVN/segformer-scene-parse-150-lora-50-epoch
ZoeVN
2023-07-13T10:02:46Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-13T10:02:45Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
fnlp/moss-rlhf-reward-model-7B-en
fnlp
2023-07-13T09:54:07Z
0
9
null
[ "llm", "reward model", "moss", "rlhf", "zh", "arxiv:2307.04964", "license:agpl-3.0", "region:us" ]
null
2023-07-13T03:12:42Z
--- license: agpl-3.0 language: - zh tags: - llm - reward model - moss - rlhf --- # MOSS-RLHF ### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]* ## 🌟 News ### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B! [moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main) <br> ### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B! [moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en) [moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en) <br> ## 🧾 Open-source List - [x] Open-source code for RL training in large language models. - [x] A 7B Chinese reward model based on OpenChineseLlama. - [x] A 7B English reward model based on Llama-7B. - [x] SFT model for English. - [ ] Policy model for English after RLHF. - ... ## 🌠 Introduction Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to pursue the technical alignment and safe deployment of LLMs. Stable RLHF training remains a puzzle. In this technical report, we aim to help researchers train their models stably with human feedback.
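For context, the clipped surrogate objective at the heart of PPO (the starting point for the PPO-max variant discussed in the report) can be sketched as follows; this is a didactic illustration, not code from this repository:

```python
def ppo_clip_loss(ratios, advantages, eps=0.2):
    """Negative mean of min(r * A, clip(r, 1-eps, 1+eps) * A),
    where r is the new/old policy probability ratio per sample."""
    total = 0.0
    for r, a in zip(ratios, advantages):
        clipped = max(1.0 - eps, min(r, 1.0 + eps))
        total += min(r * a, clipped * a)
    return -total / len(ratios)

# A ratio far above 1 + eps is clipped when the advantage is positive,
# which keeps each policy update bounded.
loss = ppo_clip_loss([1.5, 0.9], [1.0, -1.0])
print(round(loss, 6))  # -0.15
```

PPO-max layers additional stabilization tricks on top of this objective, as detailed in the technical report.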
Contributions are summarized as follows: 1) We release competitive Chinese and English reward models with good cross-model generalization ability, alleviating the cost of relabeling human preference data; 2) We conduct an in-depth analysis of the inner workings of the PPO algorithm and propose the PPO-max algorithm to ensure stable model training; 3) We release the complete PPO-max code so that LLMs at the current SFT stage can be better aligned with humans. ## 🔩 Requirements & Setup This repository works with Python 3.8 and PyTorch 1.13.1. We recommend using a **conda** virtual environment to run the code. #### Step 1: Create a new Python virtual environment ```bash conda update conda -n base -c defaults conda create -n rlhf python=3.8 conda activate rlhf ``` #### Step 2: Install PyTorch and TensorBoard ```bash conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia ``` #### Step 3: Install the remaining dependencies ```bash conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels apt install libaio-dev DS_BUILD_OPS=1 pip install deepspeed ``` ## ✨ Start training your own model! Run the code in a few steps. ### Step 1: Recover the reward-model weights We cannot directly release the full weights of the reward model because of protocol restrictions. You can merge the released diff weights with the original Llama-7B to recover the reward model we used. We upload the diff models; thanks to tatsu-lab, you can recover the reward model by following these steps: ```bash 1) Download the weight diff into your local machine.
The weight diff is located at: # For English: TODO # For Chinese: https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main 2) Merge the weight diff with the original Llama-7B: # For English: # Reward model python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward # SFT model python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft # Policy model TODO # For Chinese: python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover ``` ### Step 2: Select your own SFT model. Because of some limitations, we cannot release the **Chinese** SFT model (currently). You can use your own SFT model, or a strong base model, instead of our SFT model. ### Step 3: Start training Run the command below. ``` # For Chinese: # You need to use your own sft model currently. bash run_zh.sh # For English: # We have loaded the sft model and reward model to huggingface. bash run_en.sh ``` ## Citation ```bibtex @article{zheng2023secrets, title={Secrets of RLHF in Large Language Models Part I: PPO}, author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang}, year={2023}, eprint={2307.04964}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
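Under the hood, the diff-weight recovery in Step 1 amounts to adding the released diff to the base Llama-7B parameters tensor by tensor; a minimal sketch of that idea (assuming a simple elementwise offset, which is how tatsu-lab-style diff weights work — the actual `merge_weight_*.py` scripts additionally handle model loading and saving):

```python
def recover_weights(raw_state, diff_state):
    """Recover tuned parameters: tuned[name] = raw[name] + diff[name]."""
    assert raw_state.keys() == diff_state.keys()
    return {
        name: [r + d for r, d in zip(raw_state[name], diff_state[name])]
        for name in raw_state
    }

# Toy "state dicts" with parameters stored as flat lists of floats.
raw = {"w": [1.0, 2.0], "b": [0.5]}
diff = {"w": [0.25, -0.5], "b": [0.0]}
tuned = recover_weights(raw, diff)
print(tuned)  # {'w': [1.25, 1.5], 'b': [0.5]}
```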
vincenzodeleo/distilbert-base-uncased-finetuned-squad
vincenzodeleo
2023-07-13T09:53:55Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-12T16:59:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
pigliketoeat/distilgpt2-finetuned-wikitext2
pigliketoeat
2023-07-13T09:45:58Z
200
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T08:51:35Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
dada325/Taxi-v3-qLearning-test
dada325
2023-07-13T09:34:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T09:34:46Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-qLearning-test results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="dada325/Taxi-v3-qLearning-test", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
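For reference, the tabular Q-learning rule behind an agent like this one follows the Bellman update `Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))`; a minimal sketch (hyperparameters here are illustrative placeholders, not the ones used for this checkpoint):

```python
def q_update(q_table, state, action, reward, next_state,
             alpha=0.1, gamma=0.99):
    """Apply one Q-learning update in place and return the new value."""
    best_next = max(q_table[next_state])
    td_target = reward + gamma * best_next
    q_table[state][action] += alpha * (td_target - q_table[state][action])
    return q_table[state][action]

# Toy 2-state, 2-action table.
q = [[0.0, 0.0], [1.0, 0.0]]
new_q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(round(new_q, 3))  # 0.199
```

For Taxi-v3 the table has 500 states and 6 actions, and this update is applied once per environment step during training.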
ashokurlana/mBART-TeSum
ashokurlana
2023-07-13T09:33:38Z
105
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "generated_from_trainer", "multilingual", "te", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-12T18:19:29Z
--- language: - multilingual - te license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: mBART-TeSum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART-TeSum This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the [TeSum](https://ltrc.iiit.ac.in/showfile.php?filename=downloads/teSum/) dataset. More details about the training and analysis are given in the [paper](https://aclanthology.org/2022.lrec-1.614.pdf). ## Model description mBART-50 is a multilingual sequence-to-sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning. Instead of fine-tuning in one direction, a pre-trained model is fine-tuned in many directions simultaneously. mBART-50 was created by taking the original mBART model and extending it with an extra 25 languages, to support multilingual machine translation across 50 languages. The pre-training objective is explained below. **Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data: `D = {D1, ..., DN }` where each Di is a collection of monolingual documents in language `i`. The source documents are noised using two schemes: first, randomly shuffling the order of the original sentences, and second, a novel in-filling scheme in which spans of text are replaced with a single mask token. The model is then tasked with reconstructing the original text. 35% of each instance's words are masked by randomly sampling span lengths according to a Poisson distribution `(λ = 3.5)`. The decoder input is the original text with a one-position offset. A language id symbol `LID` is used as the initial token to predict the sentence.
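The in-filling noising scheme described above can be approximated in plain Python (an illustrative sketch, not the actual pretraining code, which operates on subword tokens and also applies sentence permutation):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method for sampling a Poisson variate (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def span_mask(tokens, mask_ratio=0.35, lam=3.5, seed=0):
    """Mask ~mask_ratio of positions with Poisson-length spans, then
    collapse each masked run into a single <mask> token."""
    rng = random.Random(seed)
    n = len(tokens)
    budget = int(n * mask_ratio)
    masked, count = [False] * n, 0
    while count < budget:
        span = max(1, min(sample_poisson(lam, rng), budget - count))
        start = rng.randrange(0, n - span + 1)
        for i in range(start, start + span):
            if not masked[i]:
                masked[i] = True
                count += 1
    out, i = [], 0
    while i < n:
        if masked[i]:
            out.append("<mask>")
            while i < n and masked[i]:
                i += 1
        else:
            out.append(tokens[i])
            i += 1
    return out, count

tokens = ["w%d" % i for i in range(100)]
noised, n_masked = span_mask(tokens)
print(n_masked)  # 35 of 100 positions masked
```

The decoder then learns to reconstruct the original `tokens` from `noised`.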
## Intended uses & limitations mbart-large-50 is a pre-trained model primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks. See the [model hub](https://huggingface.co/models?other=mbart-50) to look for fine-tuned versions. ## Training As the model is multilingual, it expects the sequences in a particular format: a special language id token is used as a prefix in both the source and target text. The text format is `[lang_code] X [eos]`, with `X` being the source or target text respectively; `lang_code` is `source_lang_code` for source text and `tgt_lang_code` for target text. `bos` is never used. Once the examples are prepared in this format, the model can be trained like any other sequence-to-sequence model. ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained("ashokurlana/mBART-TeSum") tokenizer = MBart50TokenizerFast.from_pretrained("ashokurlana/mBART-TeSum", src_lang="te_IN", tgt_lang="te_IN") src_text = "తెలంగాణలో సచలనం సృష్టించిన టీఎస్‌పీఎస్సీ పేపర్ లీకేజీ వ్యవహారంపై ప్రభుత్వం తరపున మంత్రి కేటీఆర్ తొలిసారి స్పందించారు. ఇది వ్యవస్థ వైఫల్యం కాదని.., ఇద్దరు వ్యక్తులు చేసిన తప్పు అని కేటీఆర్ వెల్లడించారు. ఈ వ్యవహారం వెనుక ఏ పార్టీకి చెందిన వారున్నా.., ఎంతటి వారైనా కఠినంగా శిక్షిస్తామని చెప్పారు. నిరుద్యోగుల్లో ఆందోళనలు రేకెత్తించేలా ప్రతిపక్షాలు మాట్లాడటం సరికాదని హితవు పలికారు." tgt_text = "తెలంగాణలో సచలనం సృష్టించిన టీఎస్ పీఎస్సీ పేపర్ లీకేజీ వ్యవహారంపై ప్రభుత్వం తరపున మంత్రి కేటీఆర్ స్పందించారు. ఇది వ్యవస్థ వైఫల్యం కాదని, ఇద్దరు వ్యక్తులు చేసిన తప్పు అని, ఈ వ్యవహారం వెనుక ఏ పార్టీకి చెందిన వారున్నా కఠినంగా శిక్షిస్తామని చెప్పారు."
model_inputs = tokenizer(src_text, return_tensors="pt") with tokenizer.as_target_tokenizer(): labels = tokenizer(tgt_text, return_tensors="pt").input_ids model(**model_inputs, labels=labels) # forward pass ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Evaluation results It achieves the following results on the evaluation set: - Loss: 1.4009 - Rouge1: 32.8603 - Rouge2: 12.2822 - Rougel: 31.7473 - Rougelsum: 32.505 - Gen Len: 117.6326 ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1 ### BibTeX entry and citation info ``` @inproceedings{urlana-etal-2022-tesum, title = "{T}e{S}um: Human-Generated Abstractive Summarization Corpus for {T}elugu", author = "Urlana, Ashok and Surange, Nirmal and Baswani, Pavan and Ravva, Priyanka and Shrivastava, Manish", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.614", pages = "5712--5722", abstract = "Expert human annotation for summarization is definitely an expensive task, and can not be done on huge scales. But with this work, we show that even with a crowd sourced summary generation approach, quality can be controlled by aggressive expert informed filtering and sampling-based human evaluation. We propose a pipeline that crowd-sources summarization data and then aggressively filters the content via: automatic and partial expert evaluation. Using this pipeline we create a high-quality Telugu Abstractive Summarization dataset (TeSum) which we validate with sampling-based human evaluation. 
We also provide baseline numbers for various models commonly used for summarization. A number of recently released datasets for summarization, scraped the web-content relying on the assumption that summary is made available with the article by the publishers. While this assumption holds for multiple resources (or news-sites) in English, it should not be generalised across languages without thorough analysis and verification. Our analysis clearly shows that this assumption does not hold true for most Indian language news resources. We show that our proposed filtration pipeline can even be applied to these large-scale scraped datasets to extract better quality article-summary pairs.", } ```
Fixedbot/ppo-Huggy
Fixedbot
2023-07-13T09:33:07Z
23
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-13T09:32:52Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Fixedbot/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
predictia/cerra_tas_vqvae
predictia
2023-07-13T09:31:45Z
3
0
diffusers
[ "diffusers", "tensorboard", "climate", "transformers", "image-to-image", "es", "en", "license:apache-2.0", "region:us" ]
image-to-image
2023-06-28T11:28:11Z
--- license: apache-2.0 language: - es - en metrics: - mse pipeline_tag: image-to-image tags: - climate - transformers --- # Europe Reanalysis Super Resolution The aim of the project is to create a Machine Learning (ML) model that can generate high-resolution regional reanalysis data (similar to that produced by CERRA) by downscaling global reanalysis data from ERA5. This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally, an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained, a detailed validation framework is applied. It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics, disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes. This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice. Moreover, interpretability tools for DL models can be used to understand the inner workings and decision-making processes of these complex architectures by analyzing the activations of different neurons and the importance of different features in the input data. This work is funded by the [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative.
KyriaAnnwyn/vit-large-artifacts
KyriaAnnwyn
2023-07-13T09:26:30Z
55
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-07T12:11:49Z
--- tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: vit-large-artifacts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-artifacts This model is a fine-tuned version of [kakaobrain/vit-large-patch16-512](https://huggingface.co/kakaobrain/vit-large-patch16-512) on the KyriaAnnwyn/artifacts_ds dataset. It achieves the following results on the evaluation set: - Loss: 0.5995 - Accuracy: 0.6705 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7001 | 0.01 | 100 | 0.6414 | 0.6559 | | 0.6288 | 0.01 | 200 | 0.6666 | 0.6559 | | 0.7237 | 0.02 | 300 | 0.7087 | 0.6559 | | 0.8741 | 0.03 | 400 | 0.6739 | 0.6257 | | 0.6093 | 0.04 | 500 | 0.6462 | 0.6559 | | 0.5801 | 0.04 | 600 | 0.6822 | 0.6559 | | 0.594 | 0.05 | 700 | 1.9948 | 0.6395 | | 0.7724 | 0.06 | 800 | 0.6566 | 0.6553 | | 0.6976 | 0.07 | 900 | 0.6774 | 0.6325 | | 0.6583 | 0.07 | 1000 | 0.7175 | 0.3517 | | 0.6779 | 0.08 | 1100 | 0.7012 | 0.6559 | | 0.6478 | 0.09 | 1200 | 0.6336 | 0.6559 | | 0.7405 | 0.1 | 1300 | 0.6577 | 0.6559 | | 0.7362 | 0.1 | 1400 | 0.6630 | 0.6142 | | 0.535 | 0.11 | 1500 | 0.7445 | 0.6559 | | 0.7338 | 0.12 | 1600 | 0.7046 | 0.4718 | | 0.6519 | 0.13 | 1700 | 0.6601 | 0.6426 | | 0.5969 | 0.13 | 1800 | 0.6518 | 0.6559 | | 0.5992 | 0.14 | 1900 | 0.6544 | 0.6559 
| | 0.5762 | 0.15 | 2000 | 0.6608 | 0.6559 | | 0.6483 | 0.16 | 2100 | 0.6436 | 0.6331 | | 0.7594 | 0.16 | 2200 | 0.7562 | 0.5213 | | 0.6423 | 0.17 | 2300 | 0.6326 | 0.6433 | | 0.7006 | 0.18 | 2400 | 0.6669 | 0.6108 | | 0.833 | 0.19 | 2500 | 0.7043 | 0.6559 | | 0.6133 | 0.19 | 2600 | 0.6356 | 0.6532 | | 0.5285 | 0.2 | 2700 | 0.6619 | 0.6606 | | 0.7209 | 0.21 | 2800 | 0.7306 | 0.4196 | | 0.682 | 0.22 | 2900 | 0.6400 | 0.6539 | | 0.7148 | 0.22 | 3000 | 0.6421 | 0.6559 | | 0.6288 | 0.23 | 3100 | 0.7416 | 0.6559 | | 0.666 | 0.24 | 3200 | 0.6368 | 0.6293 | | 0.772 | 0.25 | 3300 | 0.6973 | 0.4985 | | 0.6778 | 0.25 | 3400 | 0.6288 | 0.6604 | | 0.5939 | 0.26 | 3500 | 0.6566 | 0.6559 | | 0.6246 | 0.27 | 3600 | 0.6347 | 0.6618 | | 0.649 | 0.28 | 3700 | 0.6353 | 0.6277 | | 0.7122 | 0.28 | 3800 | 0.6407 | 0.6559 | | 0.6292 | 0.29 | 3900 | 0.6776 | 0.6560 | | 0.6079 | 0.3 | 4000 | 0.6220 | 0.6609 | | 0.6971 | 0.31 | 4100 | 0.6258 | 0.6394 | | 0.7131 | 0.31 | 4200 | 0.7202 | 0.6556 | | 0.5346 | 0.32 | 4300 | 0.6394 | 0.6571 | | 0.5801 | 0.33 | 4400 | 0.6960 | 0.6664 | | 0.6806 | 0.34 | 4500 | 0.6339 | 0.6348 | | 0.6245 | 0.34 | 4600 | 0.6226 | 0.6477 | | 0.6905 | 0.35 | 4700 | 0.6203 | 0.6533 | | 0.741 | 0.36 | 4800 | 0.6464 | 0.6680 | | 0.5712 | 0.37 | 4900 | 0.6162 | 0.6640 | | 0.5566 | 0.37 | 5000 | 0.6182 | 0.6507 | | 0.6443 | 0.38 | 5100 | 0.6457 | 0.6664 | | 0.6107 | 0.39 | 5200 | 0.6092 | 0.6617 | | 0.5824 | 0.4 | 5300 | 0.6383 | 0.6571 | | 0.4775 | 0.4 | 5400 | 0.6606 | 0.6621 | | 0.7114 | 0.41 | 5500 | 0.6179 | 0.6619 | | 0.7701 | 0.42 | 5600 | 0.7982 | 0.4217 | | 0.6974 | 0.42 | 5700 | 0.6223 | 0.6540 | | 0.6669 | 0.43 | 5800 | 0.6249 | 0.6559 | | 0.6982 | 0.44 | 5900 | 0.6287 | 0.6564 | | 0.5811 | 0.45 | 6000 | 0.6104 | 0.6506 | | 0.4347 | 0.45 | 6100 | 1.0475 | 0.6559 | | 0.5885 | 0.46 | 6200 | 0.6125 | 0.6552 | | 0.6867 | 0.47 | 6300 | 0.6435 | 0.6468 | | 0.6088 | 0.48 | 6400 | 0.6047 | 0.6623 | | 0.8194 | 0.48 | 6500 | 0.6972 | 0.6589 | | 0.8182 | 0.49 | 6600 | 
0.6053 | 0.6644 | | 0.6104 | 0.5 | 6700 | 0.7375 | 0.6571 | | 0.5552 | 0.51 | 6800 | 0.6231 | 0.6402 | | 0.6451 | 0.51 | 6900 | 0.6452 | 0.6561 | | 0.7849 | 0.52 | 7000 | 0.6177 | 0.6612 | | 0.64 | 0.53 | 7100 | 0.6307 | 0.6234 | | 0.6393 | 0.54 | 7200 | 0.6130 | 0.6554 | | 0.8326 | 0.54 | 7300 | 0.7210 | 0.6421 | | 0.6579 | 0.55 | 7400 | 0.6227 | 0.6544 | | 0.5195 | 0.56 | 7500 | 0.6619 | 0.6557 | | 0.6197 | 0.57 | 7600 | 0.6354 | 0.6498 | | 0.8507 | 0.57 | 7700 | 0.6820 | 0.6550 | | 0.7163 | 0.58 | 7800 | 0.6720 | 0.5328 | | 0.6896 | 0.59 | 7900 | 0.6530 | 0.6386 | | 0.62 | 0.6 | 8000 | 0.6296 | 0.6559 | | 0.8254 | 0.6 | 8100 | 0.6752 | 0.6200 | | 0.7653 | 0.61 | 8200 | 0.7118 | 0.6558 | | 0.7742 | 0.62 | 8300 | 0.6262 | 0.6497 | | 0.6861 | 0.63 | 8400 | 0.6799 | 0.5566 | | 0.5652 | 0.63 | 8500 | 0.6708 | 0.6559 | | 0.7486 | 0.64 | 8600 | 0.6319 | 0.6559 | | 0.6204 | 0.65 | 8700 | 0.6407 | 0.6530 | | 0.673 | 0.66 | 8800 | 0.7154 | 0.4672 | | 0.7272 | 0.66 | 8900 | 0.6323 | 0.6528 | | 0.7364 | 0.67 | 9000 | 0.6436 | 0.6188 | | 0.71 | 0.68 | 9100 | 0.6507 | 0.5924 | | 0.6767 | 0.69 | 9200 | 0.6347 | 0.6575 | | 0.7046 | 0.69 | 9300 | 0.6723 | 0.6127 | | 0.7486 | 0.7 | 9400 | 0.6328 | 0.6485 | | 0.7646 | 0.71 | 9500 | 0.6244 | 0.6550 | | 0.5971 | 0.72 | 9600 | 0.6610 | 0.6558 | | 0.6195 | 0.72 | 9700 | 0.6219 | 0.6515 | | 0.6891 | 0.73 | 9800 | 0.6300 | 0.6619 | | 0.6829 | 0.74 | 9900 | 0.6312 | 0.6568 | | 0.4786 | 0.75 | 10000 | 0.7160 | 0.6573 | | 0.6093 | 0.75 | 10100 | 0.6245 | 0.6503 | | 0.672 | 0.76 | 10200 | 0.6248 | 0.6577 | | 0.6734 | 0.77 | 10300 | 0.6541 | 0.6600 | | 0.7826 | 0.78 | 10400 | 0.6413 | 0.6559 | | 0.6851 | 0.78 | 10500 | 0.6478 | 0.6006 | | 0.6776 | 0.79 | 10600 | 0.6453 | 0.6175 | | 0.7322 | 0.8 | 10700 | 0.6188 | 0.6353 | | 0.5144 | 0.81 | 10800 | 0.6762 | 0.6571 | | 0.6977 | 0.81 | 10900 | 0.6559 | 0.6544 | | 0.5681 | 0.82 | 11000 | 0.7225 | 0.6559 | | 0.6449 | 0.83 | 11100 | 0.6372 | 0.6576 | | 0.6067 | 0.83 | 11200 | 0.6207 | 0.6391 | | 
0.5921 | 0.84 | 11300 | 0.6178 | 0.6538 | | 0.5373 | 0.85 | 11400 | 0.7370 | 0.6559 | | 0.6926 | 0.86 | 11500 | 0.6346 | 0.6372 | | 0.6634 | 0.86 | 11600 | 0.6274 | 0.6489 | | 0.61 | 0.87 | 11700 | 0.6309 | 0.6427 | | 0.6214 | 0.88 | 11800 | 0.6273 | 0.6480 | | 0.6202 | 0.89 | 11900 | 0.6255 | 0.6559 | | 0.6153 | 0.89 | 12000 | 0.6348 | 0.6459 | | 0.7062 | 0.9 | 12100 | 0.6283 | 0.6512 | | 0.6977 | 0.91 | 12200 | 0.6159 | 0.6515 | | 0.6041 | 0.92 | 12300 | 0.6251 | 0.6504 | | 0.6609 | 0.92 | 12400 | 0.6633 | 0.5870 | | 0.7565 | 0.93 | 12500 | 0.6200 | 0.6562 | | 0.6133 | 0.94 | 12600 | 0.6193 | 0.6527 | | 0.7066 | 0.95 | 12700 | 0.6279 | 0.6180 | | 0.5706 | 0.95 | 12800 | 0.6128 | 0.6575 | | 0.6992 | 0.96 | 12900 | 0.6334 | 0.6449 | | 0.6834 | 0.97 | 13000 | 0.6258 | 0.6591 | | 0.6069 | 0.98 | 13100 | 0.6290 | 0.6620 | | 0.743 | 0.98 | 13200 | 0.6110 | 0.6562 | | 0.5226 | 0.99 | 13300 | 0.6165 | 0.6557 | | 0.7359 | 1.0 | 13400 | 0.6207 | 0.6376 | | 0.5812 | 1.01 | 13500 | 0.6192 | 0.6559 | | 0.666 | 1.01 | 13600 | 0.6347 | 0.6602 | | 0.5489 | 1.02 | 13700 | 0.6107 | 0.6459 | | 0.701 | 1.03 | 13800 | 0.6172 | 0.6518 | | 0.4873 | 1.04 | 13900 | 0.6786 | 0.6559 | | 0.5807 | 1.04 | 14000 | 0.6636 | 0.6433 | | 0.6824 | 1.05 | 14100 | 0.6176 | 0.6315 | | 0.6012 | 1.06 | 14200 | 0.6097 | 0.6617 | | 0.4865 | 1.07 | 14300 | 0.6103 | 0.6623 | | 0.5612 | 1.07 | 14400 | 0.6947 | 0.6559 | | 0.5968 | 1.08 | 14500 | 0.6559 | 0.5981 | | 0.5657 | 1.09 | 14600 | 0.6076 | 0.6509 | | 0.4778 | 1.1 | 14700 | 0.6808 | 0.6535 | | 0.6047 | 1.1 | 14800 | 0.6131 | 0.6480 | | 0.5999 | 1.11 | 14900 | 0.6120 | 0.6559 | | 0.5852 | 1.12 | 15000 | 0.6356 | 0.6553 | | 0.7033 | 1.13 | 15100 | 0.6578 | 0.6647 | | 0.5925 | 1.13 | 15200 | 0.6153 | 0.6633 | | 0.5959 | 1.14 | 15300 | 0.6306 | 0.6211 | | 0.5929 | 1.15 | 15400 | 0.6246 | 0.6655 | | 0.5621 | 1.16 | 15500 | 0.6126 | 0.6424 | | 0.5508 | 1.16 | 15600 | 0.6844 | 0.6559 | | 0.6276 | 1.17 | 15700 | 0.6066 | 0.6531 | | 1.0359 | 1.18 | 15800 | 
0.6271 | 0.6617 | | 0.6191 | 1.19 | 15900 | 0.6166 | 0.6480 | | 0.7095 | 1.19 | 16000 | 0.6228 | 0.6462 | | 0.6567 | 1.2 | 16100 | 0.6066 | 0.6653 | | 0.5653 | 1.21 | 16200 | 0.6022 | 0.6605 | | 0.6894 | 1.21 | 16300 | 0.6216 | 0.6568 | | 0.608 | 1.22 | 16400 | 0.6041 | 0.6559 | | 0.665 | 1.23 | 16500 | 0.6111 | 0.6564 | | 0.6753 | 1.24 | 16600 | 0.6138 | 0.6581 | | 0.6213 | 1.24 | 16700 | 0.6121 | 0.6380 | | 0.6983 | 1.25 | 16800 | 0.6166 | 0.6661 | | 0.8521 | 1.26 | 16900 | 0.6202 | 0.6461 | | 0.4927 | 1.27 | 17000 | 0.6313 | 0.6547 | | 0.6414 | 1.27 | 17100 | 0.6011 | 0.6667 | | 0.539 | 1.28 | 17200 | 0.6451 | 0.6664 | | 0.5118 | 1.29 | 17300 | 0.6243 | 0.6641 | | 0.7512 | 1.3 | 17400 | 0.6257 | 0.6586 | | 0.5943 | 1.3 | 17500 | 0.6186 | 0.6423 | | 0.5861 | 1.31 | 17600 | 0.6435 | 0.6638 | | 0.7065 | 1.32 | 17700 | 0.6197 | 0.6279 | | 0.5973 | 1.33 | 17800 | 0.6081 | 0.6535 | | 0.5997 | 1.33 | 17900 | 0.6053 | 0.6608 | | 0.7091 | 1.34 | 18000 | 0.6013 | 0.6644 | | 0.691 | 1.35 | 18100 | 0.6103 | 0.6654 | | 0.5559 | 1.36 | 18200 | 0.6110 | 0.6658 | | 0.6309 | 1.36 | 18300 | 0.6067 | 0.6664 | | 0.6262 | 1.37 | 18400 | 0.6027 | 0.6616 | | 0.5551 | 1.38 | 18500 | 0.6106 | 0.6671 | | 0.6703 | 1.39 | 18600 | 0.6043 | 0.6576 | | 0.6849 | 1.39 | 18700 | 0.6018 | 0.6616 | | 0.6136 | 1.4 | 18800 | 0.6324 | 0.6629 | | 0.7075 | 1.41 | 18900 | 0.6057 | 0.6561 | | 0.6036 | 1.42 | 19000 | 0.6081 | 0.6559 | | 0.6549 | 1.42 | 19100 | 0.6352 | 0.6655 | | 0.5168 | 1.43 | 19200 | 0.6042 | 0.6632 | | 0.5864 | 1.44 | 19300 | 0.6111 | 0.6639 | | 0.5961 | 1.45 | 19400 | 0.6003 | 0.6644 | | 0.6077 | 1.45 | 19500 | 0.6125 | 0.6566 | | 0.6215 | 1.46 | 19600 | 0.6128 | 0.6582 | | 0.4005 | 1.47 | 19700 | 0.6348 | 0.6642 | | 0.5689 | 1.48 | 19800 | 0.6355 | 0.6647 | | 0.6026 | 1.48 | 19900 | 0.6127 | 0.6444 | | 0.4982 | 1.49 | 20000 | 0.6034 | 0.6654 | | 0.6189 | 1.5 | 20100 | 0.6202 | 0.6609 | | 0.5502 | 1.51 | 20200 | 0.6044 | 0.6621 | | 0.5924 | 1.51 | 20300 | 0.6107 | 0.6445 | | 0.744 | 
1.52 | 20400 | 0.6164 | 0.6559 | | 0.5582 | 1.53 | 20500 | 0.6166 | 0.6559 | | 0.6994 | 1.54 | 20600 | 0.6109 | 0.6664 | | 0.5396 | 1.54 | 20700 | 0.6189 | 0.6670 | | 0.7232 | 1.55 | 20800 | 0.6104 | 0.6610 | | 0.9802 | 1.56 | 20900 | 0.6232 | 0.6642 | | 0.6487 | 1.57 | 21000 | 0.6056 | 0.6505 | | 0.5932 | 1.57 | 21100 | 0.5980 | 0.6702 | | 0.7897 | 1.58 | 21200 | 0.6012 | 0.6638 | | 0.6006 | 1.59 | 21300 | 0.6232 | 0.6672 | | 0.4481 | 1.6 | 21400 | 0.6124 | 0.6676 | | 0.6078 | 1.6 | 21500 | 0.6495 | 0.6664 | | 0.595 | 1.61 | 21600 | 0.7122 | 0.6675 | | 0.6388 | 1.62 | 21700 | 0.6227 | 0.6671 | | 0.5731 | 1.62 | 21800 | 0.6252 | 0.6682 | | 0.8603 | 1.63 | 21900 | 0.6026 | 0.6653 | | 0.6316 | 1.64 | 22000 | 0.6494 | 0.6669 | | 0.6712 | 1.65 | 22100 | 0.6097 | 0.6676 | | 0.6102 | 1.65 | 22200 | 0.6221 | 0.6585 | | 0.7099 | 1.66 | 22300 | 0.6006 | 0.6658 | | 0.621 | 1.67 | 22400 | 0.6026 | 0.6626 | | 0.478 | 1.68 | 22500 | 0.6062 | 0.6624 | | 0.6106 | 1.68 | 22600 | 0.5990 | 0.6669 | | 0.5793 | 1.69 | 22700 | 0.5980 | 0.6681 | | 0.5804 | 1.7 | 22800 | 0.6014 | 0.6626 | | 0.6304 | 1.71 | 22900 | 0.6107 | 0.6380 | | 0.7427 | 1.71 | 23000 | 0.6051 | 0.6682 | | 0.5794 | 1.72 | 23100 | 0.6105 | 0.6611 | | 0.5084 | 1.73 | 23200 | 0.6643 | 0.6673 | | 0.6518 | 1.74 | 23300 | 0.6366 | 0.6687 | | 0.5129 | 1.74 | 23400 | 0.6053 | 0.6682 | | 0.7593 | 1.75 | 23500 | 0.5977 | 0.6662 | | 0.6645 | 1.76 | 23600 | 0.5988 | 0.6683 | | 0.6144 | 1.77 | 23700 | 0.6130 | 0.6673 | | 0.6855 | 1.77 | 23800 | 0.6192 | 0.6596 | | 0.559 | 1.78 | 23900 | 0.6208 | 0.6574 | | 0.4202 | 1.79 | 24000 | 0.6125 | 0.6690 | | 0.6604 | 1.8 | 24100 | 0.6052 | 0.6685 | | 0.5487 | 1.8 | 24200 | 0.6086 | 0.6685 | | 0.6816 | 1.81 | 24300 | 0.5997 | 0.6620 | | 0.6057 | 1.82 | 24400 | 0.6128 | 0.6530 | | 0.4335 | 1.83 | 24500 | 0.6121 | 0.6676 | | 0.6147 | 1.83 | 24600 | 0.6225 | 0.6670 | | 0.7414 | 1.84 | 24700 | 0.6248 | 0.6718 | | 0.622 | 1.85 | 24800 | 0.6084 | 0.6722 | | 0.5356 | 1.86 | 24900 | 0.6003 | 
0.6611 | | 0.7994 | 1.86 | 25000 | 0.6098 | 0.6657 | | 0.5389 | 1.87 | 25100 | 0.6052 | 0.6633 | | 0.6985 | 1.88 | 25200 | 0.6073 | 0.6694 | | 0.652 | 1.89 | 25300 | 0.6040 | 0.6709 | | 0.5409 | 1.89 | 25400 | 0.6065 | 0.6709 | | 0.6356 | 1.9 | 25500 | 0.6062 | 0.6699 | | 0.7588 | 1.91 | 25600 | 0.6025 | 0.6711 | | 0.5109 | 1.92 | 25700 | 0.5992 | 0.6693 | | 0.6766 | 1.92 | 25800 | 0.6004 | 0.6693 | | 0.6517 | 1.93 | 25900 | 0.6020 | 0.6701 | | 0.6561 | 1.94 | 26000 | 0.5995 | 0.6705 | | 0.6224 | 1.95 | 26100 | 0.6008 | 0.6717 | | 0.6054 | 1.95 | 26200 | 0.6005 | 0.6714 | | 0.5152 | 1.96 | 26300 | 0.6023 | 0.6709 | | 0.5503 | 1.97 | 26400 | 0.6032 | 0.6706 | | 0.5101 | 1.98 | 26500 | 0.6067 | 0.6709 | | 0.5229 | 1.98 | 26600 | 0.6079 | 0.6702 | | 0.8387 | 1.99 | 26700 | 0.6079 | 0.6700 | | 0.608 | 2.0 | 26800 | 0.6069 | 0.6699 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1+cu116 - Datasets 2.13.1 - Tokenizers 0.13.3
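The accuracy column in the results above is plain top-1 agreement between the model's argmax prediction and the label. A framework-free sketch of that computation (the logits and labels below are made up for illustration, not this model's outputs):

```python
def top1_accuracy(logits, labels):
    """Fraction of examples whose highest-scoring class matches the label."""
    correct = 0
    for row, label in zip(logits, labels):
        pred = max(range(len(row)), key=row.__getitem__)  # argmax over classes
        correct += int(pred == label)
    return correct / len(labels)

# Four synthetic 2-class predictions, three of them correct
logits = [[0.2, 0.8], [0.9, 0.1], [0.4, 0.6], [0.7, 0.3]]
labels = [1, 0, 0, 0]
print(top1_accuracy(logits, labels))  # 0.75
```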
Devops-hestabit/Othehalf-350m-onnx
Devops-hestabit
2023-07-13T09:23:52Z
3
0
transformers
[ "transformers", "onnx", "opt", "text-generation", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T09:19:29Z
--- license: creativeml-openrail-m ---
Daemon101/whisper-small-hi
Daemon101
2023-07-13T09:13:50Z
77
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-13T08:29:41Z
--- language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small Hi - Sanchit Gandhi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hi - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
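The scheduler listed above is transformers' "linear" schedule with 500 warmup steps over 4000 training steps. A simplified re-implementation of the learning-rate shape these settings imply (this is a sketch of the schedule, not the library code):

```python
PEAK_LR = 1e-5        # learning_rate from the card
WARMUP_STEPS = 500    # lr_scheduler_warmup_steps
TOTAL_STEPS = 4000    # training_steps

def linear_schedule_lr(step: int) -> float:
    """LR at a given optimizer step: linear warmup, then linear decay to zero."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

print(linear_schedule_lr(250))   # halfway through warmup: 5e-06
print(linear_schedule_lr(500))   # peak learning rate: 1e-05
print(linear_schedule_lr(4000))  # end of training: 0.0
```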
youlun77/finetuning-sentiment-model-25000-samples-BERT
youlun77
2023-07-13T09:10:41Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T07:30:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: finetuning-sentiment-model-25000-samples-BERT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-25000-samples-BERT This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2154 - eval_accuracy: 0.9422 - eval_f1: 0.9427 - eval_runtime: 823.1435 - eval_samples_per_second: 30.371 - eval_steps_per_second: 1.899 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
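The eval throughput figures above follow directly from the runtime. Assuming the evaluation ran over the full 25,000-example imdb test split (as the model name suggests) and that a final partial batch counts as a full step:

```python
eval_runtime = 823.1435   # seconds, from the card
n_samples = 25_000        # imdb test split size (assumed from the model name)
eval_batch_size = 16

n_steps = -(-n_samples // eval_batch_size)  # ceiling division -> 1563 steps

print(round(n_samples / eval_runtime, 3))  # 30.371 samples/s, matching the card
print(round(n_steps / eval_runtime, 3))    # 1.899 steps/s, matching the card
```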
YanJiangJerry/sentiment-bloom-large-e6
YanJiangJerry
2023-07-13T08:58:38Z
4
0
transformers
[ "transformers", "pytorch", "bloom", "text-classification", "generated_from_trainer", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T07:52:49Z
--- license: bigscience-bloom-rail-1.0 tags: - generated_from_trainer model-index: - name: sentiment-bloom-large-e6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-bloom-large-e6 This model is a fine-tuned version of [LYTinn/bloom-finetuning-sentiment-model-3000-samples](https://huggingface.co/LYTinn/bloom-finetuning-sentiment-model-3000-samples) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
soonmo/distilbert-base-uncased-finetuned-clinc
soonmo
2023-07-13T08:58:26Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-12T01:45:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9161290322580645 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7754 - Accuracy: 0.9161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2893 | 1.0 | 318 | 3.2831 | 0.7397 | | 2.6289 | 2.0 | 636 | 1.8731 | 0.8345 | | 1.5481 | 3.0 | 954 | 1.1580 | 0.89 | | 1.0137 | 4.0 | 1272 | 0.8584 | 0.9077 | | 0.7969 | 5.0 | 1590 | 0.7754 | 0.9161 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
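The clinc_oos dataset includes an explicit out-of-scope intent; a common complementary trick for intent classifiers is to route low-confidence predictions to out-of-scope by thresholding the maximum softmax probability. This is a generic illustration only (the labels, scores, and threshold are made up, not taken from this model):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def route_intent(logits, labels, threshold=0.5):
    """Return the predicted label, or 'oos' when the top probability is low."""
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    return labels[top] if probs[top] >= threshold else "oos"

labels = ["transfer", "balance", "freeze_account"]
print(route_intent([4.0, 0.1, 0.2], labels))   # confident -> 'transfer'
print(route_intent([0.4, 0.3, 0.35], labels))  # uncertain -> 'oos'
```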
aditii09/whisper_eng_asr
aditii09
2023-07-13T08:58:20Z
76
1
transformers
[ "transformers", "pytorch", "tf", "jax", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "arxiv:2212.04356", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-13T08:45:39Z
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - no - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su tags: - audio - automatic-speech-recognition - hf-asr-leaderboard widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: whisper-base results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 5.008769117619326 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 12.84936273212057 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: hi split: test args: language: hi metrics: - name: Test WER type: wer value: 131 pipeline_tag: automatic-speech-recognition license: apache-2.0 --- # Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning. 
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper). **Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card. ## Model details Whisper is a Transformer-based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision. The models were trained on either English-only data or multilingual data. The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech translation, the model predicts transcriptions in a *different* language from the audio. Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). 
The checkpoints are summarised in the following table with links to the models on the Hub: | Size | Parameters | English-only | Multilingual | |----------|------------|------------------------------------------------------|-----------------------------------------------------| | tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | | base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | | small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | | medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | | large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | | large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | # Usage To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). The `WhisperProcessor` is used to: 1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model) 2. Post-process the model outputs (converting them from tokens to text) The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order: 1. The transcription always starts with the `<|startoftranscript|>` token 2. The second token is the language token (e.g. `<|en|>` for English) 3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation 4. 
In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction. Thus, a typical sequence of context tokens might look as follows: ``` <|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|> ``` Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps. These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself. The context tokens can be set accordingly: ```python model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe") ``` Which forces the model to predict in English under the task of speech recognition. ## Transcription ### English to English In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe). 
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-base") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base") >>> model.config.forced_decoder_ids = None >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. ### French to French The following example demonstrates French to French transcription by setting the decoder ids appropriately. 
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-base") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ``` ## Translation Setting the task to "translate" forces the Whisper model to perform speech translation. 
### French to English ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-base") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' A very interesting work, we will finally be given on this subject.'] ``` ## Evaluation This code snippet shows how to evaluate Whisper Base on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr): ```python >>> from datasets import load_dataset >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from evaluate import load >>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") >>> processor = WhisperProcessor.from_pretrained("openai/whisper-base") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base").to("cuda") >>> def map_to_pred(batch): >>> audio = batch["audio"] >>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features >>> batch["reference"] = processor.tokenizer._normalize(batch['text']) >>> >>> with torch.no_grad(): >>> predicted_ids = 
model.generate(input_features.to("cuda"))[0]
>>>     transcription = processor.decode(predicted_ids)
>>>     batch["prediction"] = processor.tokenizer._normalize(transcription)
>>>     return batch

>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
5.082316555716899
```

## Long-Form Transcription

The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through the Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline can be run with batched inference. It can also be extended to predict sequence-level timestamps by passing `return_timestamps=True`:

```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset

>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"

>>> pipe = pipeline(
>>>     "automatic-speech-recognition",
>>>     model="openai/whisper-base",
>>>     chunk_length_s=30,
>>>     device=device,
>>> )

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]

>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."

>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr.
Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm. ## Fine-Tuning The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data. ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. 
The models are intended to transcribe and translate speech; use of the model for classification has not been evaluated and is not appropriate, particularly for inferring human attributes.

## Training Data

The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.

As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.

## Performance and Limitations

Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.

However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.

Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data.
The models also exhibit disparate performance on different accents and dialects of particular languages, which may include a higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).

In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse in lower-resource and/or lower-discoverability languages.

## Broader Implications

We anticipate that Whisper models' transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.

There are also potential dual-use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance.
In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects. ### BibTeX entry and citation info ```bibtex @misc{radford2022whisper, doi = {10.48550/ARXIV.2212.04356}, url = {https://arxiv.org/abs/2212.04356}, author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya}, title = {Robust Speech Recognition via Large-Scale Weak Supervision}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
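The `wer` metric used in the evaluation section above is, at its core, a word-level edit distance divided by the reference length. A minimal pure-Python sketch of that computation (illustrative only; the `evaluate` library's metric also relies on the text normalization shown in the snippet):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i ref words into the first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deleted word out of six
```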
Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
Jorgeutd
2023-07-13T08:54:20Z
113
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "sagemaker", "bert-base-uncased", "text classification", "en", "dataset:adecorpusv2", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
language: en
widget:
- text: "I got a rash from taking acetaminophen"
tags:
- sagemaker
- bert-base-uncased
- text classification
license: apache-2.0
datasets:
- adecorpusv2
model-index:
- name: BERT-ade_corpus
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: "ade_corpus_v2Ade_corpus_v2_classification"
      type: ade_corpus
    metrics:
    - name: Validation Accuracy
      type: accuracy
      value: 92.98
    - name: Validation F1
      type: f1
      value: 82.73
---

## bert-base-uncased

This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.

- Problem type: Text Classification (adverse drug effect detection).

## Hyperparameters
```json
{
    "do_eval": true,
    "do_train": true,
    "fp16": true,
    "load_best_model_at_end": true,
    "model_name": "bert-base-uncased",
    "num_train_epochs": 10,
    "per_device_eval_batch_size": 16,
    "per_device_train_batch_size": 16,
    "learning_rate": 5e-5
}
```

## Validation Metrics

| key | value |
| --- | ----- |
| eval_accuracy | 0.9298021697511167 |
| eval_auc | 0.8902672664394546 |
| eval_f1 | 0.827315541601256 |
| eval_loss | 0.17835010588169098 |
| eval_recall | 0.8234375 |
| eval_precision | 0.831230283911672 |

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I got a rash from taking acetaminophen"}' https://api-inference.huggingface.co/models/Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
```
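As a quick sanity check, the reported `eval_f1` above is consistent with the listed precision and recall, since F1 is their harmonic mean:

```python
precision = 0.831230283911672  # eval_precision from the table above
recall = 0.8234375             # eval_recall from the table above

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ≈ 0.82731554..., matching the reported eval_f1
```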
digiplay/hellopure_v2.24Beta
digiplay
2023-07-13T08:49:07Z
70
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T04:21:25Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- 👍👍👍👍👍 https://civitai.com/models/88202/hellopure Other models from Author: https://civitai.com/user/aji1/models ![Screenshot_20230713_153638_Vivaldi Browser Snapshot.jpg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/IvjW37lEUwWczHHKqwKke.jpeg) Sample image I made with AUTOMATIC1111 : ![tmp09akmbgp.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/i0K89WXyyvX5nc6wcGyaC.png) parameters very close-up ,(best beautiful:1.2), (masterpiece:1.2), (best quality:1.2),masterpiece, best quality, The image features a beautiful young woman with long light golden hair, beach near the ocean, white dress ,The beach is lined with palm trees, Negative prompt: worst quality ,normal quality , Steps: 17, Sampler: Euler, CFG scale: 5, Seed: 1097775045, Size: 480x680, Model hash: 8d4fa7988b, Clip skip: 2, Version: v1.4.1
daxiboy/vit-base-patch16-224-finetuned-flower
daxiboy
2023-07-13T08:47:12Z
165
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-13T08:35:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: vit-base-patch16-224-finetuned-flower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.0.1+cu118 - Datasets 2.7.1 - Tokenizers 0.13.3
gabrielgme/falcon-7b-spider-with-schema
gabrielgme
2023-07-13T08:44:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-12T13:21:52Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0.dev0
jordiclive/falcon-40b-lora-sft-stage2-1.1k
jordiclive
2023-07-13T08:35:07Z
16
0
transformers
[ "transformers", "pytorch", "RefinedWeb", "text-generation", "sft", "custom_code", "en", "dataset:OpenAssistant/oasst1", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T17:24:51Z
---
license: mit
datasets:
- OpenAssistant/oasst1
language:
- en
tags:
- sft
pipeline_tag: text-generation
widget:
- text: >-
    <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
---

# Load Merged Model (Recommended, identical configuration to a fine-tuned model)

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

repo_id = "jordiclive/falcon-40b-lora-sft-stage2-1.1k"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=dtype,
    trust_remote_code=True,
)
```

## Model Details

- **Developed** as part of the OpenAssistant Project
- **Adapter type:** LoRA (PEFT)
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Weights & Biases:** [Training log1](https://wandb.ai/open-assistant/public-sft/runs/q0q9lce4) [Training log2](https://wandb.ai/open-assistant/public-sft/runs/qqok9ru2?workspace=user-jordanclive)

# LoRA Adapter for Falcon 40B trained on oasst-top1

This repo contains a **Falcon 40B** LoRA fine-tuned model and the low-rank adapter fitted on datasets from the OpenAssistant project.
This version of the weights was trained with the following hyperparameters: SFT 1 - Epochs: 2 - Batch size: 128 - Max Length: 2048 - Learning rate: 1e-4 - Lora _r_: 64 - Lora Alpha: 16 - Lora target modules: ["dense_4h_to_h", "dense", "query_key_value", "dense_h_to_4h"] SFT2 - Epochs: 10 - Batch size: 128 The model was trained with flash attention and gradient checkpointing and deepspeed stage 3 on 8 x A100 80gb Dataset: SFT1: ``` - oa_leet10k: val_split: 0.05 max_val_set: 250 - cmu_wiki_qa: val_split: 0.05 - joke: val_split: 0.05 - webgpt: val_split: 0.05 max_val_set: 250 - alpaca_gpt4: val_split: 0.025 max_val_set: 250 - gpteacher_roleplay: val_split: 0.05 - wizardlm_70k: val_split: 0.05 max_val_set: 500 - poem_instructions: val_split: 0.025 - tell_a_joke: val_split: 0.05 max_val_set: 250 - gpt4all: val_split: 0.01 max_val_set: 1000 - minimath: val_split: 0.05 - humaneval_mbpp_codegen_qa: val_split: 0.05 - humaneval_mbpp_testgen_qa: val_split: 0.05 - dolly15k: val_split: 0.05 max_val_set: 300 - recipes: val_split: 0.05 - code_alpaca: val_split: 0.05 max_val_set: 250 - vicuna: fraction: 0.5 val_split: 0.025 max_val_set: 250 - oa_wiki_qa_bart_10000row: val_split: 0.05 max_val_set: 250 - grade_school_math_instructions: val_split: 0.05 ``` SFT2 ``` - oasst_export: lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0 input_file_path: 2023-05-06_OASST_labels.jsonl.gz val_split: 0.05 top_k: 1 - lima: val_split: 0.05 max_val_set: 50 ``` ## Prompting Two special tokens are used to mark the beginning of user and assistant turns: `<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token. Input prompt example: ``` <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|> ``` The input ends with the `<|assistant|>` token to signal that the model should start generating the assistant reply. 
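For multi-turn conversations, the same pattern simply repeats per turn. A small helper sketch (illustrative only; not part of the released training code):

```python
EOS = "<|endoftext|>"

def build_prompt(history, user_message):
    """Assemble an OpenAssistant-style prompt.

    `history` is a list of (prompter_text, assistant_text) pairs; the result
    ends with <|assistant|> so the model continues as the assistant.
    """
    parts = []
    for prompter_text, assistant_text in history:
        parts.append(f"<|prompter|>{prompter_text}{EOS}")
        parts.append(f"<|assistant|>{assistant_text}{EOS}")
    parts.append(f"<|prompter|>{user_message}{EOS}<|assistant|>")
    return "".join(parts)

print(build_prompt([], "What is a meme, and what's the history behind this word?"))
# <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```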
# Example Inference code (Prompt Template) ``` model = model.to(device) if dtype == torch.float16: model = model.half() # Choose Generation parameters generation_config = GenerationConfig( temperature=0.1, top_p=0.75, top_k=40, num_beams=4, ) def format_system_prompt(prompt, eos_token=tokenizer.eos_token): return "{}{}{}{}".format("<|prompter|>", prompt, eos_token, "<|assistant|>") def generate(prompt, generation_config=generation_config, max_new_tokens=2048, device=device): prompt = format_system_prompt(prompt,eos_token=tokenizer.eos_token) # OpenAssistant Prompt Format expected input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens, eos_token_id=tokenizer.eos_token_id, ) s = generation_output.sequences[0] output = tokenizer.decode(s) print("Text generated:") print(output) return output ``` ## LoRA weights If you want to use the LoRA weights separately, several special token embeddings also need to be added. 
``` base_model_id = "tiiuae/falcon-40b" import torch import transformers from huggingface_hub import hf_hub_download from peft import PeftModel def add_embeddings(model, embed_path, tokenizer): old_embeddings = model.get_input_embeddings() old_num_tokens, old_embedding_dim = old_embeddings.weight.size() new_embeddings = torch.nn.Embedding(old_num_tokens, old_embedding_dim) new_embeddings.to(old_embeddings.weight.device, dtype=old_embeddings.weight.dtype) model._init_weights(new_embeddings) embed_weights = torch.load(embed_path, map_location=old_embeddings.weight.device) vocab_size = tokenizer.vocab_size new_embeddings.weight.data[:vocab_size, :] = old_embeddings.weight.data[:vocab_size, :] new_embeddings.weight.data[vocab_size : vocab_size + embed_weights.shape[0], :] = embed_weights.to( new_embeddings.weight.dtype ).to(new_embeddings.weight.device) model.set_input_embeddings(new_embeddings) model.tie_weights() def load_peft_model(model, peft_model_path, tokenizer): embed_weights = hf_hub_download(peft_model_path, "extra_embeddings.pt") model.resize_token_embeddings(tokenizer.vocab_size + torch.load(embed_weights).shape[0]) model.config.eos_token_id = tokenizer.eos_token_id model.config.bos_token_id = tokenizer.bos_token_id model.config.pad_token_id = tokenizer.pad_token_id model = PeftModel.from_pretrained( model, model_id=peft_model_path, torch_dtype=model.dtype, ) model.eos_token_id = tokenizer.eos_token_id add_embeddings(model, embed_weights, tokenizer) return model def load_lora_model(base_model_id, tokenizer, device, dtype): model = transformers.AutoModelForCausalLM.from_pretrained( base_model_id, torch_dtype=dtype, trust_remote_code=True, ) model = load_peft_model(model, repo_id, tokenizer) model = model.to(device) return model model = load_lora_model(base_model_id=base_model_id, tokenizer=tokenizer, device=device, dtype=dtype) ```
soonmo/distilbert-base-uncased-distilled-clinc
soonmo
2023-07-13T08:19:01Z
110
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-12T06:31:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9493548387096774 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2316 - Accuracy: 0.9494 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1042 | 1.0 | 318 | 1.5124 | 0.7487 | | 1.1742 | 2.0 | 636 | 0.7825 | 0.8735 | | 0.6319 | 3.0 | 954 | 0.4544 | 0.9203 | | 0.3826 | 4.0 | 1272 | 0.3230 | 0.9345 | | 0.2712 | 5.0 | 1590 | 0.2731 | 0.9448 | | 0.2233 | 6.0 | 1908 | 0.2517 | 0.9484 | | 0.1992 | 7.0 | 2226 | 0.2402 | 0.95 | | 0.1863 | 8.0 | 2544 | 0.2354 | 0.9490 | | 0.1792 | 9.0 | 2862 | 0.2331 | 0.9497 | | 0.1766 | 10.0 | 3180 | 0.2316 | 0.9494 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
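The "distilled" in the model name refers to knowledge distillation: the student is trained to match a teacher's temperature-softened output distribution in addition to the usual hard-label loss. A toy sketch of the soft-target term (illustrative; the exact recipe used for this checkpoint is not stated in the card):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.0, 0.5]
student_logits = [3.0, 1.5, 0.2]
T = 2.0  # temperature softens both distributions

# The T**2 factor keeps gradient magnitudes comparable across temperatures.
soft_loss = (T ** 2) * kl_divergence(
    softmax(teacher_logits, T), softmax(student_logits, T)
)
print(soft_loss)  # a small positive penalty; 0 only if student matches teacher
```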
HoaAn2003/ppo-Huggy
HoaAn2003
2023-07-13T08:13:54Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-13T08:13:06Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to teach you how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: HoaAn2003/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Ablustrund/moss-rlhf-reward-model-7B-zh
Ablustrund
2023-07-13T08:10:42Z
3
23
null
[ "llm", "reward model", "moss", "rlhf", "zh", "arxiv:2307.04964", "license:agpl-3.0", "region:us" ]
null
2023-07-12T02:27:02Z
---
license: agpl-3.0
language:
- zh
tags:
- llm
- reward model
- moss
- rlhf
---

# MOSS-RLHF

### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO"
<br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]*

## 🌟 News
### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>

### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)

[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>

## 🧾 Open-source List
- [x] Open-source code for RL training in large language models.
- [x] A 7B Chinese reward model based on OpenChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...

## 🌠 Introduction

Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier preventing AI researchers from advancing technical alignment and the safe deployment of LLMs. Stable RLHF training is still a puzzle. In this technical report, we intend to help researchers train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis of the inner workings of the PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max code so that LLMs at the current SFT stage can be better aligned with humans.

## 🔩 Requirements & Setup

This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.

#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```

#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```

#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels

apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```

## ✨ Start training your own model!

Run the code in a few steps.

### Step 1: Recover Reward model weights
We cannot directly release the full weights of the reward model because of protocol restrictions. You can merge the diff weights with the original Llama-7B to recover the reward model we used. We upload the diff models; thanks to tatsu-lab, you can recover the reward model by following these steps:
```bash
1) Download the weight diff into your local machine.
The weight diff is located at:
# For English:
TODO
# For Chinese:
https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main

2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```

### Step 2: Select your own SFT model
Because of some limitations, we cannot release the **Chinese** SFT model currently. You can use your own SFT model, or a strong base model, instead of our SFT model.

### Step 3: Start training
Run the command below.
```
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh

# For English:
# We have loaded the sft model and reward model to huggingface.
bash run_en.sh
```

## Citation

```bibtex
@article{zheng2023secrets,
  title={Secrets of RLHF in Large Language Models Part I: PPO},
  author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
  year={2023},
  eprint={2307.04964},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
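Conceptually, the weight-diff recovery in Step 1 is just elementwise addition: the released diff is `tuned - base`, so adding it back onto the base weights restores the tuned model. A toy sketch with plain lists (the real merge scripts operate on full Llama state dicts):

```python
def merge_weight_diff(base_state, diff_state):
    """Recover tuned weights: tuned = base + diff, matched by parameter name."""
    assert base_state.keys() == diff_state.keys(), "parameter names must match"
    return {
        name: [b + d for b, d in zip(base_state[name], diff_state[name])]
        for name in base_state
    }

base = {"layer.weight": [0.10, -0.20, 0.30]}
diff = {"layer.weight": [0.05, 0.02, -0.10]}
merged = merge_weight_diff(base, diff)
print(merged["layer.weight"])  # ≈ [0.15, -0.18, 0.20]
```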
haxett333/RL-Reinforce-100TrainEpisodesInsteadof1000
haxett333
2023-07-13T08:00:13Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T08:00:09Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: RL-Reinforce-100TrainEpisodesInsteadof1000 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 98.70 +/- 36.77 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
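Reinforce weights each log-probability gradient by the discounted return from that timestep onwards; the return computation is a single backwards pass over the episode's rewards. A minimal sketch (illustrative, not the course's exact implementation):

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed backwards over one episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# CartPole gives +1 per step survived; an episode of three steps:
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.99))
# first entry ≈ 2.9701 = 1 + 0.99*1 + 0.99**2 * 1
```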
jslin09/LegalChatbot-bloom-3b
jslin09
2023-07-13T07:45:16Z
19
0
peft
[ "peft", "region:us" ]
null
2023-07-06T02:44:57Z
---
library_name: peft
---
## Training procedure

### Framework versions

- PEFT 0.4.0.dev0
hoanghoavienvo/bert-large-uncased-stage-2-v1
hoanghoavienvo
2023-07-13T07:35:37Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T01:34:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-large-uncased-stage-2-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-stage-2-v1 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4491 - Accuracy: 0.8317 - F1: 0.8995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 469 | 0.3824 | 0.83 | 0.8998 | | 0.4209 | 2.0 | 938 | 0.3631 | 0.8533 | 0.9159 | | 0.3378 | 3.0 | 1407 | 0.4491 | 0.8317 | 0.8995 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
aiacademy131/opt-6.7b-lora
aiacademy131
2023-07-13T07:34:15Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-13T06:21:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
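The `llm_int8_threshold: 6.0` entry above controls the LLM.int8() outlier decomposition: values whose magnitude exceeds the threshold bypass int8 quantization and stay in higher precision, while the rest are quantized with an absmax scale. A toy pure-Python sketch of that idea (illustrative helper names only, not the bitsandbytes API):

```python
def quantize_with_outliers(values, threshold=6.0):
    """Split values into int8-quantized inliers and full-precision outliers.

    Loosely mimics the LLM.int8() idea: entries whose magnitude exceeds
    `threshold` skip quantization entirely; the rest use absmax scaling.
    """
    inliers = [v for v in values if abs(v) <= threshold]
    max_inlier = max((abs(v) for v in inliers), default=0.0)
    scale = max_inlier / 127.0 if max_inlier else 1.0
    quantized = []
    for v in values:
        if abs(v) > threshold:
            quantized.append(("fp", v))                 # kept in full precision
        else:
            quantized.append(("q", round(v / scale)))   # int8 code in [-127, 127]
    return quantized, scale

def dequantize(quantized, scale):
    """Invert the sketch above: rescale int8 codes, pass outliers through."""
    return [val if kind == "fp" else val * scale for kind, val in quantized]
```

Outliers (here, 8.0 with the default threshold of 6.0) come back exactly, while inliers incur only small rounding error.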
K024/chatglm2-6b-int4g32
K024
2023-07-13T07:25:25Z
53
3
transformers
[ "transformers", "ChatGLM2Model", "glm", "chatglm", "thudm", "zh", "en", "endpoints_compatible", "region:us" ]
null
2023-07-13T07:09:00Z
--- language: - zh - en tags: - glm - chatglm - thudm --- # ChatGLM2 6b int4 g32 量化模型 详情参考 [K024/chatglm-q](https://github.com/K024/chatglm-q)。 See [K024/chatglm-q](https://github.com/K024/chatglm-q) for more details.

```python
import torch
from chatglm_q.decoder import ChatGLMDecoder, chat_template

device = torch.device("cuda")
decoder = ChatGLMDecoder.from_pretrained("K024/chatglm2-6b-int4g32", device=device)
prompt = chat_template([], "我是谁?")
for text in decoder.generate(prompt):
    print(text)
```

模型权重按 ChatGLM2-6b 许可发布,见 [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE)。 Model weights are released under the same license as ChatGLM2-6b, see [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE).
preetham/rpanda1
preetham
2023-07-13T07:10:56Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T06:22:15Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks panda tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - preetham/rpanda1 This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks panda" using [DreamBooth](https://dreambooth.github.io/). Example images are shown below. Training of the text encoder was not enabled.
kaelee/llava-lightning-mpt-7b-chat-pretrain
kaelee
2023-07-13T07:08:09Z
14
0
transformers
[ "transformers", "pytorch", "llava_mpt", "text-generation", "custom_code", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T00:20:35Z
--- license: cc-by-nc-sa-4.0 ---
ajaydvrj/dataset2
ajaydvrj
2023-07-13T06:48:15Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-12T12:07:08Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: dataset2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dataset2 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.7431 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 5.9615 | | No log | 2.0 | 2 | 5.8187 | | No log | 3.0 | 3 | 5.7431 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cpu - Datasets 2.13.1 - Tokenizers 0.13.3
xian79/a2c-AntBulletEnv-v0
xian79
2023-07-13T06:43:39Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T06:28:44Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1080.97 +/- 252.97 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is assumed to follow the usual `<algo>-<env>.zip` convention):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="xian79/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
anindya64/alpaca-bank-issue-summarization
anindya64
2023-07-13T06:41:24Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-13T06:41:22Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
markcberman/distilbert-base-uncased-finetuned-emotion
markcberman
2023-07-13T06:39:20Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T06:04:45Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: split metrics: - name: Accuracy type: accuracy value: 0.9275 - name: F1 type: f1 value: 0.9275012469136824 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2201 - Accuracy: 0.9275 - F1: 0.9275 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8326 | 1.0 | 250 | 0.3185 | 0.902 | 0.8983 | | 0.2499 | 2.0 | 500 | 0.2201 | 0.9275 | 0.9275 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1+cu118 - Datasets 1.16.1 - Tokenizers 0.13.3
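The card reports both accuracy and a (support-weighted) F1 that happen to coincide here. As an illustration of how such an F1 is computed — this is a sketch, not the exact metric implementation the Trainer used — each class's F1 is weighted by its support:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 averaged by class frequency."""
    classes = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += support[c] * f1
    return total / len(y_true)
```

With balanced classes and near-symmetric errors, the weighted F1 tracks accuracy closely, which is why the two numbers above agree to four decimal places.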
aiacademy131/opt-2.7b-lora
aiacademy131
2023-07-13T06:34:01Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-13T05:36:48Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
smithlai/q-FrozenLake-v1-4x4-noSlippery
smithlai
2023-07-13T06:33:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T06:33:57Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
# load_from_hub here is the course's helper that unpickles the saved Q-table.
model = load_from_hub(repo_id="smithlai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
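The agent above is tabular Q-learning; its core update rule can be sketched in plain Python on a toy deterministic chain (hyperparameters and environment are illustrative, not the FrozenLake training setup):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.7, gamma=0.95):
    """One Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

# Toy deterministic 3-state chain: action 1 moves right; reaching state 2 pays 1.
n_states, n_actions = 3, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
for _ in range(50):                      # 50 greedy episodes from state 0
    s = 0
    while s < 2:
        a, s_next = 1, s + 1             # always "move right" for illustration
        r = 1.0 if s_next == 2 else 0.0
        q_learning_update(Q, s, a, r, s_next)
        s = s_next
# Q[1][1] approaches 1.0 and Q[0][1] approaches gamma * 1.0 = 0.95
```

The same update, with epsilon-greedy exploration and the real environment's transitions, produces the Q-table pickled in `q-learning.pkl`.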
sazzad-sit/whisper-small-bn-cv13-gf
sazzad-sit
2023-07-13T06:32:57Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-10T10:24:00Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: whisper-small-bn-cv13-gf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-bn-cv13-gf This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 650 - training_steps: 1800 ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.10.2.dev0 - Tokenizers 0.13.2
aniketr/mrl-resnet50
aniketr
2023-07-13T06:24:25Z
0
3
null
[ "code", "image-classification", "en", "dataset:imagenet-1k", "arxiv:2205.13147", "license:mit", "region:us" ]
image-classification
2023-07-13T03:45:55Z
--- license: mit datasets: - imagenet-1k language: - en metrics: - accuracy pipeline_tag: image-classification tags: - code --- # Matryoshka Representation Learning🪆 _Aditya Kusupati*, Gantavya Bhatt*, Aniket Rege*, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain, Ali Farhadi_ GitHub: https://github.com/RAIVNLab/MRL Arxiv: https://arxiv.org/abs/2205.13147 We provide pretrained models trained with [FFCV](https://github.com/libffcv/ffcv) on ImageNet-1K: 1. `mrl` : ResNet50 __mrl__ models trained with Matryoshka loss (vanilla and efficient) with nesting starting from _d=8_ (default) and _d=16_ 2. `fixed-feature` : independently trained ResNet50 baselines at _log(d)_ granularities 3. `resnet-family` : __mrl__ and __ff__ models trained on ResNet18/34/101 ## Citation If you find this project useful in your research, please consider citing: ``` @inproceedings{kusupati2022matryoshka, title = {Matryoshka Representation Learning}, author = {Kusupati, Aditya and Bhatt, Gantavya and Rege, Aniket and Wallingford, Matthew and Sinha, Aditya and Ramanujan, Vivek and Howard-Snyder, William and Chen, Kaifeng and Kakade, Sham and Jain, Prateek and Farhadi, Ali}, booktitle = {Advances in Neural Information Processing Systems}, month = {December}, year = {2022}, } ```
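The nesting idea behind the Matryoshka loss is simple to sketch: one d-dimensional embedding is reused at every prefix granularity (d=8, 16, 32, ...), and training sums a per-granularity loss over those prefixes. A toy pure-Python illustration — squared error stands in for the paper's classification loss, and the granularity list is illustrative:

```python
def matryoshka_loss(embedding, target, granularities=(8, 16, 32, 64)):
    """Sum a per-granularity loss over nested prefixes of one embedding.

    Each granularity d reuses the FIRST d coordinates, so coarse
    representations are literally nested inside finer ones.
    """
    total = 0.0
    for d in granularities:
        prefix = embedding[:d]
        # toy loss: mean squared error against the target's matching prefix
        total += sum((p - t) ** 2 for p, t in zip(prefix, target[:d])) / d
    return total
```

Because every granularity shares the same leading coordinates, one forward pass yields usable representations at all sizes — truncating a trained embedding to its first d dimensions is "free" at inference time.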