| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Manish1903/finetunedllma
|
Manish1903
| 2023-09-07T09:52:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T07:27:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
rohitdavas/Taxi-V3-with-Q-Learning
|
rohitdavas
| 2023-09-07T09:50:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T09:50:39Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3-with-Q-Learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym` in older setups

# `load_from_hub` is the helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="rohitdavas/Taxi-V3-with-Q-Learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
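The card does not define `load_from_hub`; a minimal sketch, assuming the pickle holds the course-style model dictionary (keys such as `env_id` and `qtable`), could be:

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```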
|
GregaVrbancic/OTS_2023
|
GregaVrbancic
| 2023-09-07T09:43:46Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2023-09-06T15:02:32Z |
# OTS 2023
## When machine-learning predictive models meet the real environment and end users
### Predictive models
- [minilm-uncased-squad2](https://huggingface.co/deepset/minilm-uncased-squad2)
- [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
- [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad)
|
rohitdavas/q-FrozenLake-v1-4x4-noSlippery
|
rohitdavas
| 2023-09-07T09:43:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T09:43:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym` in older setups

# `load_from_hub` is the helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="rohitdavas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
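To sanity-check the loaded model, a short greedy rollout can be run; this is a sketch that assumes the Gymnasium step API and a `qtable` key in the model dictionary, as in the Deep RL course format:

```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    # Always take the greedy action from the learned Q-table
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```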
|
bigmorning/whisper_4_with_init_sun_char_0095
|
bigmorning
| 2023-09-07T09:42:51Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T09:42:43Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0095
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0095
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1133
- Train Accuracy: 0.0666
- Train Wermet: 0.7860
- Validation Loss: 2.3550
- Validation Accuracy: 0.0315
- Validation Wermet: 1.3283
- Epoch: 94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
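The optimizer dictionary above corresponds to the `AdamWeightDecay` class that Hugging Face Transformers provides for TensorFlow; a minimal sketch of reconstructing it from the values in this card (TensorFlow must be installed) could be:

```python
from transformers import AdamWeightDecay  # TF-only optimizer shipped with Transformers

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```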
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
| 1.3198 | 0.0592 | 0.4076 | 2.0759 | 0.0314 | 0.4974 | 70 |
| 1.2896 | 0.0603 | 0.4556 | 1.9717 | 0.0316 | 0.7519 | 71 |
| 1.2842 | 0.0604 | 0.5363 | 2.0598 | 0.0315 | 0.5596 | 72 |
| 1.2841 | 0.0604 | 0.5000 | 1.9914 | 0.0314 | 0.5531 | 73 |
| 1.2803 | 0.0606 | 0.5457 | 2.0848 | 0.0316 | 0.9665 | 74 |
| 1.2412 | 0.0620 | 0.5956 | 2.2020 | 0.0307 | 0.9376 | 75 |
| 1.2320 | 0.0624 | 0.5726 | 2.2278 | 0.0308 | 1.5467 | 76 |
| 1.2235 | 0.0626 | 0.7086 | 2.1929 | 0.0314 | 0.5619 | 77 |
| 1.2520 | 0.0614 | 0.7158 | 2.1414 | 0.0315 | 0.8414 | 78 |
| 1.2306 | 0.0621 | 0.7386 | 2.2487 | 0.0313 | 0.8498 | 79 |
| 1.2182 | 0.0627 | 0.6691 | 2.0785 | 0.0317 | 1.2870 | 80 |
| 1.2080 | 0.0630 | 0.7715 | 2.2775 | 0.0310 | 1.6700 | 81 |
| 1.2217 | 0.0624 | 0.7984 | 2.1358 | 0.0314 | 2.0753 | 82 |
| 1.2117 | 0.0628 | 0.8299 | 2.2871 | 0.0305 | 1.4698 | 83 |
| 1.1786 | 0.0642 | 0.6979 | 2.2602 | 0.0315 | 1.6544 | 84 |
| 1.1776 | 0.0643 | 0.7391 | 2.2246 | 0.0314 | 1.0500 | 85 |
| 1.1613 | 0.0651 | 0.7607 | 2.2078 | 0.0316 | 0.9168 | 86 |
| 1.1323 | 0.0660 | 0.7046 | 2.3419 | 0.0315 | 0.8306 | 87 |
| 1.1172 | 0.0667 | 0.7140 | 2.3248 | 0.0310 | 1.3227 | 88 |
| 1.1247 | 0.0664 | 0.7725 | 2.1606 | 0.0315 | 0.8301 | 89 |
| 1.1395 | 0.0656 | 0.7530 | 2.3058 | 0.0313 | 2.6814 | 90 |
| 1.1289 | 0.0660 | 0.7383 | 2.4022 | 0.0304 | 1.8903 | 91 |
| 1.1743 | 0.0644 | 0.9273 | 2.1835 | 0.0312 | 0.8217 | 92 |
| 1.1036 | 0.0670 | 0.8103 | 2.3628 | 0.0311 | 1.3153 | 93 |
| 1.1133 | 0.0666 | 0.7860 | 2.3550 | 0.0315 | 1.3283 | 94 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
CyberHarem/reines_fgo
|
CyberHarem
| 2023-09-07T09:41:51Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/reines_fgo",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-01T04:42:32Z |
---
license: mit
datasets:
- CyberHarem/reines_fgo
pipeline_tag: text-to-image
tags:
- art
---
# LoRA of reines_fgo
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), using an auto-training framework maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the chosen step, use them together: the pt file serves as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 5280, download `5280/reines_fgo.pt` as the embedding and `5280/reines_fgo.safetensors` as the LoRA. With both files loaded, you can generate images of the character, as sketched below.
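Purely as an illustration, here is a diffusers-style sketch of that two-file workflow. It assumes the safetensors file is in a diffusers-compatible LoRA format and that the preview base model loads via `from_pretrained`, which may not hold for HCP-Diffusion outputs; HCP-Diffusion's own tooling is the reliable path.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: Meina/MeinaMix_V11 (the preview base model above) is available in diffusers layout
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")
# The pt file acts as a textual-inversion embedding tied to the trigger word...
pipe.load_textual_inversion("5280/reines_fgo.pt", token="reines_fgo")
# ...and the safetensors file carries the LoRA weights (assumes a compatible format)
pipe.load_lora_weights("5280", weight_name="reines_fgo.safetensors")
image = pipe("reines_fgo, bangs, blonde_hair, long_hair, smile, hat").images[0]
image.save("reines_fgo_preview.png")
```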
**The recommended step is 5280**, with a score of 0.940. The trigger words are:
1. `reines_fgo`
2. `bangs, blonde_hair, long_hair, smile, hat, flower, blue_eyes, black_headwear, tilted_headwear, hair_ornament, blush, closed_mouth, hair_flower`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6600 | 0.931 | [Download](6600/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6160 | 0.918 | [Download](6160/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](6160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) |  |  |
| 5720 | 0.933 | [Download](5720/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| **5280** | **0.940** | [**Download**](5280/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4840 | 0.918 | [Download](4840/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](4840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4840/previews/nude.png) | [<NSFW, click to see>](4840/previews/nude2.png) |  |  |
| 4400 | 0.903 | [Download](4400/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 3960 | 0.925 | [Download](3960/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3520 | 0.913 | [Download](3520/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](3520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3520/previews/nude.png) | [<NSFW, click to see>](3520/previews/nude2.png) |  |  |
| 3080 | 0.890 | [Download](3080/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](3080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3080/previews/nude.png) | [<NSFW, click to see>](3080/previews/nude2.png) |  |  |
| 2640 | 0.827 | [Download](2640/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 2200 | 0.903 | [Download](2200/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](2200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) |  |  |
| 1760 | 0.884 | [Download](1760/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](1760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1760/previews/nude.png) | [<NSFW, click to see>](1760/previews/nude2.png) |  |  |
| 1320 | 0.856 | [Download](1320/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 880 | 0.823 | [Download](880/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](880/previews/nude.png) | [<NSFW, click to see>](880/previews/nude2.png) |  |  |
| 440 | 0.796 | [Download](440/reines_fgo.zip) |  |  |  |  |  | [<NSFW, click to see>](440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](440/previews/nude.png) | [<NSFW, click to see>](440/previews/nude2.png) |  |  |
|
CyberHarem/koga_koharu_theidolmastercinderellagirlsu149
|
CyberHarem
| 2023-09-07T09:41:11Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/koga_koharu_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T09:12:18Z |
---
license: mit
datasets:
- CyberHarem/koga_koharu_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# LoRA of koga_koharu_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), using an auto-training framework maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the chosen step, use them together: the pt file serves as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 3680, download `3680/koga_koharu_theidolmastercinderellagirlsu149.pt` as the embedding and `3680/koga_koharu_theidolmastercinderellagirlsu149.safetensors` as the LoRA. With both files loaded, you can generate images of the character.
**The recommended step is 3680**, with a score of 0.974. The trigger words are:
1. `koga_koharu_theidolmastercinderellagirlsu149`
2. `short_hair, brown_eyes, bow, hairband, brown_hair, smile, pink_bow, bangs, blonde_hair, open_mouth, upper_body, hair_bow`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6900 | 0.910 | [Download](6900/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6900/previews/nude.png) | [<NSFW, click to see>](6900/previews/nude2.png) |  |  |
| 6440 | 0.946 | [Download](6440/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6440/previews/nude.png) | [<NSFW, click to see>](6440/previews/nude2.png) |  |  |
| 5980 | 0.946 | [Download](5980/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5980/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5980/previews/nude.png) | [<NSFW, click to see>](5980/previews/nude2.png) |  |  |
| 5520 | 0.935 | [Download](5520/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5520/previews/nude.png) | [<NSFW, click to see>](5520/previews/nude2.png) |  |  |
| 5060 | 0.898 | [Download](5060/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5060/previews/nude.png) | [<NSFW, click to see>](5060/previews/nude2.png) |  |  |
| 4600 | 0.913 | [Download](4600/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4600/previews/nude.png) | [<NSFW, click to see>](4600/previews/nude2.png) |  |  |
| 4140 | 0.943 | [Download](4140/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4140/previews/nude.png) | [<NSFW, click to see>](4140/previews/nude2.png) |  |  |
| **3680** | **0.974** | [**Download**](3680/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3680/previews/nude.png) | [<NSFW, click to see>](3680/previews/nude2.png) |  |  |
| 3220 | 0.906 | [Download](3220/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3220/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3220/previews/nude.png) | [<NSFW, click to see>](3220/previews/nude2.png) |  |  |
| 2760 | 0.902 | [Download](2760/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2760/previews/nude.png) | [<NSFW, click to see>](2760/previews/nude2.png) |  |  |
| 2300 | 0.952 | [Download](2300/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2300/previews/nude.png) | [<NSFW, click to see>](2300/previews/nude2.png) |  |  |
| 1840 | 0.912 | [Download](1840/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1840/previews/nude.png) | [<NSFW, click to see>](1840/previews/nude2.png) |  |  |
| 1380 | 0.872 | [Download](1380/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1380/previews/nude.png) | [<NSFW, click to see>](1380/previews/nude2.png) |  |  |
| 920 | 0.852 | [Download](920/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](920/previews/nude.png) | [<NSFW, click to see>](920/previews/nude2.png) |  |  |
| 460 | 0.841 | [Download](460/koga_koharu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](460/previews/bondage.png) |  |  |  | [<NSFW, click to see>](460/previews/nude.png) | [<NSFW, click to see>](460/previews/nude2.png) |  |  |
|
YiYiXu/pokeman_kandinsky_prior_lora
|
YiYiXu
| 2023-09-07T09:40:55Z | 4 | 0 |
diffusers
|
[
"diffusers",
"kandinsky",
"text-to-image",
"lora",
"base_model:kandinsky-community/kandinsky-2-2-prior",
"base_model:adapter:kandinsky-community/kandinsky-2-2-prior",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-07T04:39:43Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-prior
tags:
- kandinsky
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - YiYiXu/pokeman_kandinsky_prior_lora
These are LoRA adaptation weights for kandinsky-community/kandinsky-2-2-prior, fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below.




|
ThuyNT03/PhoBERT-Final_Mixed-aug_backtranslation-2
|
ThuyNT03
| 2023-09-07T09:38:20Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T07:52:42Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_backtranslation-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_backtranslation-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0525
- Accuracy: 0.69
- F1: 0.6891
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
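For reproduction, a minimal sketch of the equivalent `TrainingArguments` is given below; the label count and all dataset wiring are assumptions, not taken from the card:

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

# num_labels=3 is an assumption; adjust it to the actual label set
model = AutoModelForSequenceClassification.from_pretrained("vinai/phobert-base-v2", num_labels=3)
args = TrainingArguments(
    output_dir="PhoBERT-Final_Mixed-aug_backtranslation-2",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=40,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```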
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9186 | 1.0 | 87 | 0.7637 | 0.72 | 0.7176 |
| 0.6008 | 2.0 | 174 | 0.6915 | 0.69 | 0.6893 |
| 0.436 | 3.0 | 261 | 0.7517 | 0.73 | 0.7310 |
| 0.3092 | 4.0 | 348 | 0.8925 | 0.7 | 0.6927 |
| 0.1923 | 5.0 | 435 | 0.9679 | 0.68 | 0.6767 |
| 0.1371 | 6.0 | 522 | 1.0023 | 0.71 | 0.7091 |
| 0.1003 | 7.0 | 609 | 1.0508 | 0.68 | 0.6778 |
| 0.0796 | 8.0 | 696 | 1.0525 | 0.69 | 0.6891 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
revolutionarycomrade/dst
|
revolutionarycomrade
| 2023-09-07T09:33:18Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-07T09:26:11Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta
website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this
form. Requests will be processed in 1-2 days.
extra_gated_prompt: >-
**Your Hugging Face account email address MUST match the email you provide on
the Meta website, or your request will not be approved.**
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
duplicated_from: NousResearch/Llama-2-70b-chat-hf
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The largest model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
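As a rough cross-check of the table above, the implied grid carbon intensity can be recovered from the 7B row; the ~0.42 tCO2eq/MWh figure below is inferred here, not stated in the card:

```python
gpu_hours, power_w = 184_320, 400          # Llama 2 7B row above
energy_mwh = gpu_hours * power_w / 1e6     # ≈ 73.7 MWh of GPU energy
intensity = 31.22 / energy_mwh             # ≈ 0.42 tCO2eq per MWh, implied
print(round(energy_mwh, 1), round(intensity, 3))
```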
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
dwitidibyajyoti/test
|
dwitidibyajyoti
| 2023-09-07T09:32:13Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-07T09:31:19Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2843
- Precision: 0.4118
- Recall: 0.8235
- F1: 0.5490
- Accuracy: 0.9485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
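This run is bounded by optimizer steps rather than epochs; in `TrainingArguments` terms that maps to `max_steps`. A minimal sketch, with the label count as an assumption:

```python
from transformers import AutoModelForTokenClassification, TrainingArguments

# num_labels is an assumption; set it to the size of the actual tag set
model = AutoModelForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=5)
args = TrainingArguments(
    output_dir="test",
    learning_rate=1e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=1000,  # `training_steps: 1000` above
)
```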
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 8.33 | 100 | 0.6271 | 0.1864 | 0.6471 | 0.2895 | 0.8757 |
| No log | 16.67 | 200 | 0.1736 | 0.52 | 0.7647 | 0.6190 | 0.9734 |
| No log | 25.0 | 300 | 0.1302 | 0.5714 | 0.9412 | 0.7111 | 0.9734 |
| No log | 33.33 | 400 | 0.2835 | 0.5333 | 0.9412 | 0.6809 | 0.9556 |
| 0.287 | 41.67 | 500 | 0.0924 | 0.4828 | 0.8235 | 0.6087 | 0.9805 |
| 0.287 | 50.0 | 600 | 0.2594 | 0.4412 | 0.8824 | 0.5882 | 0.9485 |
| 0.287 | 58.33 | 700 | 0.3172 | 0.4412 | 0.8824 | 0.5882 | 0.9467 |
| 0.287 | 66.67 | 800 | 0.2447 | 0.4545 | 0.8824 | 0.6 | 0.9520 |
| 0.287 | 75.0 | 900 | 0.2941 | 0.4118 | 0.8235 | 0.5490 | 0.9485 |
| 0.013 | 83.33 | 1000 | 0.2843 | 0.4118 | 0.8235 | 0.5490 | 0.9485 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_char_0090
|
bigmorning
| 2023-09-07T09:27:38Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T09:27:30Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0090
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0090
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1247
- Train Accuracy: 0.0664
- Train Wermet: 0.7725
- Validation Loss: 2.1606
- Validation Accuracy: 0.0315
- Validation Wermet: 0.8301
- Epoch: 89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
| 1.3198 | 0.0592 | 0.4076 | 2.0759 | 0.0314 | 0.4974 | 70 |
| 1.2896 | 0.0603 | 0.4556 | 1.9717 | 0.0316 | 0.7519 | 71 |
| 1.2842 | 0.0604 | 0.5363 | 2.0598 | 0.0315 | 0.5596 | 72 |
| 1.2841 | 0.0604 | 0.5000 | 1.9914 | 0.0314 | 0.5531 | 73 |
| 1.2803 | 0.0606 | 0.5457 | 2.0848 | 0.0316 | 0.9665 | 74 |
| 1.2412 | 0.0620 | 0.5956 | 2.2020 | 0.0307 | 0.9376 | 75 |
| 1.2320 | 0.0624 | 0.5726 | 2.2278 | 0.0308 | 1.5467 | 76 |
| 1.2235 | 0.0626 | 0.7086 | 2.1929 | 0.0314 | 0.5619 | 77 |
| 1.2520 | 0.0614 | 0.7158 | 2.1414 | 0.0315 | 0.8414 | 78 |
| 1.2306 | 0.0621 | 0.7386 | 2.2487 | 0.0313 | 0.8498 | 79 |
| 1.2182 | 0.0627 | 0.6691 | 2.0785 | 0.0317 | 1.2870 | 80 |
| 1.2080 | 0.0630 | 0.7715 | 2.2775 | 0.0310 | 1.6700 | 81 |
| 1.2217 | 0.0624 | 0.7984 | 2.1358 | 0.0314 | 2.0753 | 82 |
| 1.2117 | 0.0628 | 0.8299 | 2.2871 | 0.0305 | 1.4698 | 83 |
| 1.1786 | 0.0642 | 0.6979 | 2.2602 | 0.0315 | 1.6544 | 84 |
| 1.1776 | 0.0643 | 0.7391 | 2.2246 | 0.0314 | 1.0500 | 85 |
| 1.1613 | 0.0651 | 0.7607 | 2.2078 | 0.0316 | 0.9168 | 86 |
| 1.1323 | 0.0660 | 0.7046 | 2.3419 | 0.0315 | 0.8306 | 87 |
| 1.1172 | 0.0667 | 0.7140 | 2.3248 | 0.0310 | 1.3227 | 88 |
| 1.1247 | 0.0664 | 0.7725 | 2.1606 | 0.0315 | 0.8301 | 89 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_tfidf-2
|
ThuyNT03
| 2023-09-07T09:26:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T07:40:56Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_tfidf-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_tfidf-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8971
- Accuracy: 0.71
- F1: 0.7064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0056 | 1.0 | 88 | 0.8277 | 0.67 | 0.6501 |
| 0.7703 | 2.0 | 176 | 0.7912 | 0.57 | 0.5253 |
| 0.642 | 3.0 | 264 | 0.7158 | 0.71 | 0.7036 |
| 0.5139 | 4.0 | 352 | 0.6648 | 0.73 | 0.7272 |
| 0.3862 | 5.0 | 440 | 0.7784 | 0.72 | 0.7150 |
| 0.3029 | 6.0 | 528 | 0.8894 | 0.7 | 0.6924 |
| 0.2315 | 7.0 | 616 | 0.8696 | 0.71 | 0.7050 |
| 0.1903 | 8.0 | 704 | 0.8971 | 0.71 | 0.7064 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_w2v-2
|
ThuyNT03
| 2023-09-07T09:21:13Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T07:32:16Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_w2v-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0071
- Accuracy: 0.73
- F1: 0.7272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.962 | 1.0 | 86 | 0.7741 | 0.72 | 0.7110 |
| 0.6927 | 2.0 | 172 | 0.7040 | 0.67 | 0.6458 |
| 0.5162 | 3.0 | 258 | 0.7437 | 0.72 | 0.7157 |
| 0.3641 | 4.0 | 344 | 0.7528 | 0.74 | 0.7353 |
| 0.244 | 5.0 | 430 | 0.8498 | 0.73 | 0.7262 |
| 0.1787 | 6.0 | 516 | 0.8976 | 0.73 | 0.7290 |
| 0.1143 | 7.0 | 602 | 0.9672 | 0.74 | 0.7378 |
| 0.0887 | 8.0 | 688 | 1.0071 | 0.73 | 0.7272 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_synonym-2
|
ThuyNT03
| 2023-09-07T09:14:53Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T07:25:14Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_synonym-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_synonym-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0785
- Accuracy: 0.71
- F1: 0.7107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9487 | 1.0 | 87 | 0.8199 | 0.65 | 0.6439 |
| 0.6757 | 2.0 | 174 | 0.7670 | 0.68 | 0.6589 |
| 0.4716 | 3.0 | 261 | 0.7577 | 0.71 | 0.7099 |
| 0.352 | 4.0 | 348 | 0.7988 | 0.71 | 0.7092 |
| 0.241 | 5.0 | 435 | 0.9008 | 0.72 | 0.7218 |
| 0.1783 | 6.0 | 522 | 0.9248 | 0.75 | 0.7514 |
| 0.1221 | 7.0 | 609 | 1.0217 | 0.73 | 0.7313 |
| 0.108 | 8.0 | 696 | 1.0785 | 0.71 | 0.7107 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_char_0085
|
bigmorning
| 2023-09-07T09:12:26Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T09:12:18Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0085
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0085
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1786
- Train Accuracy: 0.0642
- Train Wermet: 0.6979
- Validation Loss: 2.2602
- Validation Accuracy: 0.0315
- Validation Wermet: 1.6544
- Epoch: 84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
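The optimizer dictionary above matches the Keras `AdamWeightDecay` class shipped with 🤗 Transformers; a minimal sketch of instantiating it (compiling the Whisper model is omitted, since the card does not show that step):

```python
from transformers import AdamWeightDecay  # TF/Keras optimizer

# Hedged reconstruction of the optimizer settings listed above.
optimizer = AdamWeightDecay(
    learning_rate=1e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```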
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
| 1.3198 | 0.0592 | 0.4076 | 2.0759 | 0.0314 | 0.4974 | 70 |
| 1.2896 | 0.0603 | 0.4556 | 1.9717 | 0.0316 | 0.7519 | 71 |
| 1.2842 | 0.0604 | 0.5363 | 2.0598 | 0.0315 | 0.5596 | 72 |
| 1.2841 | 0.0604 | 0.5000 | 1.9914 | 0.0314 | 0.5531 | 73 |
| 1.2803 | 0.0606 | 0.5457 | 2.0848 | 0.0316 | 0.9665 | 74 |
| 1.2412 | 0.0620 | 0.5956 | 2.2020 | 0.0307 | 0.9376 | 75 |
| 1.2320 | 0.0624 | 0.5726 | 2.2278 | 0.0308 | 1.5467 | 76 |
| 1.2235 | 0.0626 | 0.7086 | 2.1929 | 0.0314 | 0.5619 | 77 |
| 1.2520 | 0.0614 | 0.7158 | 2.1414 | 0.0315 | 0.8414 | 78 |
| 1.2306 | 0.0621 | 0.7386 | 2.2487 | 0.0313 | 0.8498 | 79 |
| 1.2182 | 0.0627 | 0.6691 | 2.0785 | 0.0317 | 1.2870 | 80 |
| 1.2080 | 0.0630 | 0.7715 | 2.2775 | 0.0310 | 1.6700 | 81 |
| 1.2217 | 0.0624 | 0.7984 | 2.1358 | 0.0314 | 2.0753 | 82 |
| 1.2117 | 0.0628 | 0.8299 | 2.2871 | 0.0305 | 1.4698 | 83 |
| 1.1786 | 0.0642 | 0.6979 | 2.2602 | 0.0315 | 1.6544 | 84 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
obiwan92/llama2-13b-chat-hf-qlora-adapter_model
|
obiwan92
| 2023-09-07T09:04:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T09:04:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
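The flags above correspond to the `BitsAndBytesConfig` class in 🤗 Transformers; a minimal sketch of rebuilding the config (passing it to the base model is omitted, since the card does not name the checkpoint):

```python
import torch
from transformers import BitsAndBytesConfig

# Hedged reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```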
### Framework versions
- PEFT 0.4.0
|
prognosis/cardio-llama-2-7b-miniguanaco-lora-v16
|
prognosis
| 2023-09-07T09:04:05Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T08:52:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
flyswot/convnext-tiny-224_flyswot
|
flyswot
| 2023-09-07T08:59:03Z | 231 | 1 |
transformers
|
[
"transformers",
"pytorch",
"coreml",
"onnx",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-05T13:30:32Z |
---
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- f1
model-index:
- name: convnext-tiny-224_flyswot
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: F1
type: f1
value: 0.9756290792360154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224_flyswot
This model was trained from scratch on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5319
- F1: 0.9756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 52 | 0.5478 | 0.9720 |
| No log | 2.0 | 104 | 0.5432 | 0.9709 |
| No log | 3.0 | 156 | 0.5437 | 0.9731 |
| No log | 4.0 | 208 | 0.5433 | 0.9712 |
| No log | 5.0 | 260 | 0.5373 | 0.9745 |
| No log | 6.0 | 312 | 0.5371 | 0.9756 |
| No log | 7.0 | 364 | 0.5381 | 0.9737 |
| No log | 8.0 | 416 | 0.5376 | 0.9744 |
| No log | 9.0 | 468 | 0.5431 | 0.9694 |
| 0.4761 | 10.0 | 520 | 0.5468 | 0.9725 |
| 0.4761 | 11.0 | 572 | 0.5404 | 0.9755 |
| 0.4761 | 12.0 | 624 | 0.5481 | 0.9669 |
| 0.4761 | 13.0 | 676 | 0.5432 | 0.9687 |
| 0.4761 | 14.0 | 728 | 0.5409 | 0.9731 |
| 0.4761 | 15.0 | 780 | 0.5403 | 0.9737 |
| 0.4761 | 16.0 | 832 | 0.5393 | 0.9737 |
| 0.4761 | 17.0 | 884 | 0.5412 | 0.9719 |
| 0.4761 | 18.0 | 936 | 0.5433 | 0.9674 |
| 0.4761 | 19.0 | 988 | 0.5367 | 0.9755 |
| 0.4705 | 20.0 | 1040 | 0.5389 | 0.9737 |
| 0.4705 | 21.0 | 1092 | 0.5396 | 0.9737 |
| 0.4705 | 22.0 | 1144 | 0.5514 | 0.9683 |
| 0.4705 | 23.0 | 1196 | 0.5550 | 0.9617 |
| 0.4705 | 24.0 | 1248 | 0.5428 | 0.9719 |
| 0.4705 | 25.0 | 1300 | 0.5371 | 0.9719 |
| 0.4705 | 26.0 | 1352 | 0.5455 | 0.9719 |
| 0.4705 | 27.0 | 1404 | 0.5409 | 0.9680 |
| 0.4705 | 28.0 | 1456 | 0.5345 | 0.9756 |
| 0.4696 | 29.0 | 1508 | 0.5381 | 0.9756 |
| 0.4696 | 30.0 | 1560 | 0.5387 | 0.9705 |
| 0.4696 | 31.0 | 1612 | 0.5540 | 0.9605 |
| 0.4696 | 32.0 | 1664 | 0.5467 | 0.9706 |
| 0.4696 | 33.0 | 1716 | 0.5322 | 0.9756 |
| 0.4696 | 34.0 | 1768 | 0.5325 | 0.9756 |
| 0.4696 | 35.0 | 1820 | 0.5305 | 0.9737 |
| 0.4696 | 36.0 | 1872 | 0.5305 | 0.9769 |
| 0.4696 | 37.0 | 1924 | 0.5345 | 0.9756 |
| 0.4696 | 38.0 | 1976 | 0.5315 | 0.9737 |
| 0.4699 | 39.0 | 2028 | 0.5333 | 0.9756 |
| 0.4699 | 40.0 | 2080 | 0.5316 | 0.9756 |
| 0.4699 | 41.0 | 2132 | 0.5284 | 0.9756 |
| 0.4699 | 42.0 | 2184 | 0.5325 | 0.9756 |
| 0.4699 | 43.0 | 2236 | 0.5321 | 0.9756 |
| 0.4699 | 44.0 | 2288 | 0.5322 | 0.9756 |
| 0.4699 | 45.0 | 2340 | 0.5323 | 0.9756 |
| 0.4699 | 46.0 | 2392 | 0.5318 | 0.9756 |
| 0.4699 | 47.0 | 2444 | 0.5329 | 0.9756 |
| 0.4699 | 48.0 | 2496 | 0.5317 | 0.9756 |
| 0.4701 | 49.0 | 2548 | 0.5317 | 0.9756 |
| 0.4701 | 50.0 | 2600 | 0.5319 | 0.9756 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bigmorning/whisper_4_with_init_sun_char_0080
|
bigmorning
| 2023-09-07T08:57:19Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T08:57:09Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0080
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0080
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2306
- Train Accuracy: 0.0621
- Train Wermet: 0.7386
- Validation Loss: 2.2487
- Validation Accuracy: 0.0313
- Validation Wermet: 0.8498
- Epoch: 79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
| 1.3198 | 0.0592 | 0.4076 | 2.0759 | 0.0314 | 0.4974 | 70 |
| 1.2896 | 0.0603 | 0.4556 | 1.9717 | 0.0316 | 0.7519 | 71 |
| 1.2842 | 0.0604 | 0.5363 | 2.0598 | 0.0315 | 0.5596 | 72 |
| 1.2841 | 0.0604 | 0.5000 | 1.9914 | 0.0314 | 0.5531 | 73 |
| 1.2803 | 0.0606 | 0.5457 | 2.0848 | 0.0316 | 0.9665 | 74 |
| 1.2412 | 0.0620 | 0.5956 | 2.2020 | 0.0307 | 0.9376 | 75 |
| 1.2320 | 0.0624 | 0.5726 | 2.2278 | 0.0308 | 1.5467 | 76 |
| 1.2235 | 0.0626 | 0.7086 | 2.1929 | 0.0314 | 0.5619 | 77 |
| 1.2520 | 0.0614 | 0.7158 | 2.1414 | 0.0315 | 0.8414 | 78 |
| 1.2306 | 0.0621 | 0.7386 | 2.2487 | 0.0313 | 0.8498 | 79 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_synonym-2
|
ThuyNT03
| 2023-09-07T08:48:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T06:57:44Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_synonym-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_synonym-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2465
- Accuracy: 0.69
- F1: 0.6880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9109 | 1.0 | 88 | 0.8214 | 0.65 | 0.6425 |
| 0.6223 | 2.0 | 176 | 0.6999 | 0.7 | 0.7021 |
| 0.424 | 3.0 | 264 | 0.7126 | 0.73 | 0.7305 |
| 0.2932 | 4.0 | 352 | 0.8673 | 0.72 | 0.7172 |
| 0.1692 | 5.0 | 440 | 1.0126 | 0.68 | 0.6806 |
| 0.1192 | 6.0 | 528 | 1.1561 | 0.69 | 0.6889 |
| 0.067 | 7.0 | 616 | 1.2002 | 0.68 | 0.6835 |
| 0.0481 | 8.0 | 704 | 1.2465 | 0.69 | 0.6880 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
eliept1/rl_course_vizdoom_health_gathering_supreme
|
eliept1
| 2023-09-07T08:47:04Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T08:46:44Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.73 +/- 5.22
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r eliept1/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
ThuyNT03/PhoBERT-Final_Mixed-aug_delete-2
|
ThuyNT03
| 2023-09-07T08:40:25Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T06:52:03Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_delete-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_delete-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1510
- Accuracy: 0.72
- F1: 0.7195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9659 | 1.0 | 88 | 0.7877 | 0.69 | 0.6816 |
| 0.6618 | 2.0 | 176 | 0.7038 | 0.7 | 0.6940 |
| 0.4684 | 3.0 | 264 | 0.7258 | 0.72 | 0.7216 |
| 0.3104 | 4.0 | 352 | 0.8347 | 0.71 | 0.7082 |
| 0.2059 | 5.0 | 440 | 1.0095 | 0.7 | 0.6985 |
| 0.1641 | 6.0 | 528 | 1.0901 | 0.7 | 0.6950 |
| 0.1145 | 7.0 | 616 | 1.0998 | 0.71 | 0.7091 |
| 0.0823 | 8.0 | 704 | 1.1510 | 0.72 | 0.7195 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
osieosie/mnli-4bit-7b-bnb-seed87
|
osieosie
| 2023-09-07T08:35:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T08:35:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
ThuyNT03/PhoBERT-Final_Mixed-train-2
|
ThuyNT03
| 2023-09-07T08:34:39Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T06:48:41Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-train-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-train-2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8480
- Accuracy: 0.71
- F1: 0.7072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0327 | 1.0 | 44 | 0.8923 | 0.61 | 0.5137 |
| 0.7617 | 2.0 | 88 | 0.7842 | 0.68 | 0.6626 |
| 0.5896 | 3.0 | 132 | 0.7527 | 0.68 | 0.6725 |
| 0.4922 | 4.0 | 176 | 0.7139 | 0.7 | 0.7005 |
| 0.4037 | 5.0 | 220 | 0.8216 | 0.7 | 0.6952 |
| 0.3524 | 6.0 | 264 | 0.7636 | 0.71 | 0.7045 |
| 0.2854 | 7.0 | 308 | 0.9140 | 0.66 | 0.6517 |
| 0.2299 | 8.0 | 352 | 0.8480 | 0.71 | 0.7072 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/ichihara_nina_theidolmastercinderellagirlsu149
|
CyberHarem
| 2023-09-07T08:32:25Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/ichihara_nina_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T08:18:15Z |
---
license: mit
datasets:
- CyberHarem/ichihara_nina_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ichihara_nina_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5880, you need to download `5880/ichihara_nina_theidolmastercinderellagirlsu149.pt` as the embedding and `5880/ichihara_nina_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters (see the loading sketch after the trigger words below).
**The best step we recommend is 5880**, with a score of 0.987. The trigger words are:
1. `ichihara_nina_theidolmastercinderellagirlsu149`
2. `brown_hair, long_hair, bangs, brown_eyes, blunt_bangs, smile, open_mouth, cosplay, bow, kigurumi, yellow_eyes`
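A minimal loading sketch, assuming the safetensors LoRA and the pt embedding load through diffusers' `load_lora_weights` and `load_textual_inversion` mixins (HCP-Diffusion outputs may require conversion for some UIs; the prompt is illustrative):

```python
from diffusers import StableDiffusionPipeline

# Hedged sketch: the preview base model is assumed to load as a
# diffusers pipeline, and the step-5880 files are used together.
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11")
pipe.load_lora_weights("5880/ichihara_nina_theidolmastercinderellagirlsu149.safetensors")
pipe.load_textual_inversion("5880/ichihara_nina_theidolmastercinderellagirlsu149.pt")
image = pipe(
    "ichihara_nina_theidolmastercinderellagirlsu149, brown_hair, long_hair, smile"
).images[0]
image.save("preview.png")
```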
We express regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6300 | 0.926 | [Download](6300/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6300/previews/nude.png) | [<NSFW, click to see>](6300/previews/nude2.png) |  |  |
| **5880** | **0.987** | [**Download**](5880/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5880/previews/nude.png) | [<NSFW, click to see>](5880/previews/nude2.png) |  |  |
| 5460 | 0.971 | [Download](5460/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5460/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5460/previews/nude.png) | [<NSFW, click to see>](5460/previews/nude2.png) |  |  |
| 5040 | 0.879 | [Download](5040/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4620 | 0.856 | [Download](4620/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) |  |  |
| 4200 | 0.905 | [Download](4200/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3780 | 0.945 | [Download](3780/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3360 | 0.931 | [Download](3360/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2940 | 0.789 | [Download](2940/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2940/previews/nude.png) | [<NSFW, click to see>](2940/previews/nude2.png) |  |  |
| 2520 | 0.863 | [Download](2520/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) |  |  |
| 2100 | 0.801 | [Download](2100/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2100/previews/nude.png) | [<NSFW, click to see>](2100/previews/nude2.png) |  |  |
| 1680 | 0.848 | [Download](1680/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1680/previews/nude.png) | [<NSFW, click to see>](1680/previews/nude2.png) |  |  |
| 1260 | 0.748 | [Download](1260/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1260/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1260/previews/nude.png) | [<NSFW, click to see>](1260/previews/nude2.png) |  |  |
| 840 | 0.692 | [Download](840/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](840/previews/nude.png) | [<NSFW, click to see>](840/previews/nude2.png) |  |  |
| 420 | 0.187 | [Download](420/ichihara_nina_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](420/previews/nude.png) | [<NSFW, click to see>](420/previews/nude2.png) |  |  |
|
bigmorning/whisper_4_with_init_sun_char_0070
|
bigmorning
| 2023-09-07T08:26:54Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T08:26:46Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0070
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0070
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3321
- Train Accuracy: 0.0587
- Train Wermet: 0.3546
- Validation Loss: 2.1066
- Validation Accuracy: 0.0314
- Validation Wermet: 0.6181
- Epoch: 69
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
| 1.7516 | 0.0461 | 0.0922 | 1.8258 | 0.0307 | 0.1365 | 40 |
| 1.7358 | 0.0465 | 0.1070 | 1.8837 | 0.0302 | 0.1461 | 41 |
| 1.7036 | 0.0474 | 0.1106 | 1.8589 | 0.0306 | 0.1201 | 42 |
| 1.6779 | 0.0481 | 0.1052 | 1.8831 | 0.0305 | 0.1755 | 43 |
| 1.6539 | 0.0487 | 0.1192 | 1.8249 | 0.0309 | 0.1901 | 44 |
| 1.6500 | 0.0488 | 0.1149 | 1.8435 | 0.0310 | 0.1313 | 45 |
| 1.6401 | 0.0490 | 0.1468 | 1.8509 | 0.0310 | 0.1597 | 46 |
| 1.6232 | 0.0495 | 0.1443 | 1.8573 | 0.0310 | 0.1588 | 47 |
| 1.5947 | 0.0503 | 0.1315 | 1.8350 | 0.0311 | 0.1476 | 48 |
| 1.5659 | 0.0512 | 0.1890 | 1.8934 | 0.0310 | 0.1507 | 49 |
| 1.5409 | 0.0521 | 0.1410 | 1.9782 | 0.0299 | 0.1663 | 50 |
| 1.5417 | 0.0520 | 0.1805 | 1.9223 | 0.0309 | 0.2287 | 51 |
| 1.5330 | 0.0522 | 0.1907 | 1.9174 | 0.0313 | 0.2481 | 52 |
| 1.5182 | 0.0527 | 0.1963 | 1.9254 | 0.0312 | 0.1440 | 53 |
| 1.5008 | 0.0532 | 0.2386 | 1.9368 | 0.0309 | 0.2045 | 54 |
| 1.4700 | 0.0543 | 0.2347 | 1.9171 | 0.0310 | 0.3189 | 55 |
| 1.4517 | 0.0549 | 0.2159 | 1.9880 | 0.0308 | 0.4000 | 56 |
| 1.4421 | 0.0553 | 0.2616 | 1.9647 | 0.0310 | 0.3311 | 57 |
| 1.4393 | 0.0552 | 0.2959 | 1.9191 | 0.0314 | 0.3403 | 58 |
| 1.4163 | 0.0560 | 0.3296 | 2.0068 | 0.0313 | 0.3711 | 59 |
| 1.4174 | 0.0559 | 0.3499 | 2.0338 | 0.0310 | 0.2981 | 60 |
| 1.4112 | 0.0561 | 0.3553 | 2.0262 | 0.0312 | 0.3595 | 61 |
| 1.3840 | 0.0572 | 0.4110 | 1.9913 | 0.0313 | 0.2975 | 62 |
| 1.3662 | 0.0578 | 0.3471 | 2.0969 | 0.0307 | 0.2794 | 63 |
| 1.3596 | 0.0579 | 0.3211 | 2.0164 | 0.0314 | 0.9982 | 64 |
| 1.3819 | 0.0571 | 0.3542 | 1.9052 | 0.0315 | 0.9802 | 65 |
| 1.3823 | 0.0569 | 0.3757 | 1.9371 | 0.0315 | 1.0860 | 66 |
| 1.3364 | 0.0587 | 0.4048 | 2.0912 | 0.0311 | 0.2807 | 67 |
| 1.3494 | 0.0582 | 0.3723 | 1.9475 | 0.0317 | 0.3295 | 68 |
| 1.3321 | 0.0587 | 0.3546 | 2.1066 | 0.0314 | 0.6181 | 69 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
mHossain/en_bn_summarize_v3
|
mHossain
| 2023-09-07T08:24:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:mHossain/en_bn_summarize_v2",
"base_model:finetune:mHossain/en_bn_summarize_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-07T07:38:41Z |
---
license: apache-2.0
base_model: mHossain/en_bn_summarize_v2
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: en_bn_summarize_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_bn_summarize_v3
This model is a fine-tuned version of [mHossain/en_bn_summarize_v2](https://huggingface.co/mHossain/en_bn_summarize_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8686
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 18.882
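Given the zero ROUGE scores above, outputs should be treated with caution; still, a minimal usage sketch (the input text and generation length are assumptions):

```python
from transformers import pipeline

# Hedged usage sketch for this mt5 checkpoint.
summarizer = pipeline("summarization", model="mHossain/en_bn_summarize_v3")
print(summarizer("<text to summarize>", max_length=20)[0]["summary_text"])
```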
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.4784 | 1.0 | 615 | 2.8990 | 0.0 | 0.0 | 0.0 | 0.0 | 18.6957 |
| 3.4631 | 2.0 | 1230 | 2.8953 | 0.0 | 0.0 | 0.0 | 0.0 | 18.7578 |
| 3.4549 | 3.0 | 1845 | 2.8887 | 0.0 | 0.0 | 0.0 | 0.0 | 18.795 |
| 3.4307 | 4.0 | 2460 | 2.8875 | 0.0 | 0.0 | 0.0 | 0.0 | 18.8447 |
| 3.333 | 5.0 | 3075 | 2.8744 | 0.0 | 0.0 | 0.0 | 0.0 | 18.8696 |
| 3.3185 | 6.0 | 3690 | 2.8791 | 0.0 | 0.0 | 0.0 | 0.0 | 18.8509 |
| 3.2546 | 7.0 | 4305 | 2.8844 | 0.0 | 0.0 | 0.0 | 0.0 | 18.8944 |
| 3.218 | 8.0 | 4920 | 2.8943 | 0.0 | 0.0 | 0.0 | 0.0 | 18.8696 |
| 3.169 | 9.0 | 5535 | 2.8829 | 0.0 | 0.0 | 0.0 | 0.0 | 18.8944 |
| 3.2386 | 10.0 | 6150 | 2.8686 | 0.0 | 0.0 | 0.0 | 0.0 | 18.882 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
STomoya/resnet34.st_safebooru_1k
|
STomoya
| 2023-09-07T07:58:49Z | 15 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-09-07T07:58:23Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for resnet34.st_safebooru_1k
## Model Details
- **metrics:**
|Precision|Recall|F1-score|
|-|-|-|
|0.8132254324107959|0.34921620371712875|0.46296339495901484|
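A minimal loading sketch, assuming the checkpoint resolves through timm's Hub integration and that the head is multi-label (as the precision/recall split suggests):

```python
import timm
import torch

# Hedged sketch; input size and sigmoid head are assumptions.
model = timm.create_model(
    "hf_hub:STomoya/resnet34.st_safebooru_1k", pretrained=True
).eval()
probs = model(torch.randn(1, 3, 224, 224)).sigmoid()
```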
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_w2v-1
|
ThuyNT03
| 2023-09-07T07:58:23Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T06:13:40Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_w2v-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_w2v-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0746
- Accuracy: 0.73
- F1: 0.7281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9414 | 1.0 | 86 | 0.7396 | 0.69 | 0.6554 |
| 0.6476 | 2.0 | 172 | 0.6620 | 0.75 | 0.7502 |
| 0.4651 | 3.0 | 258 | 0.6393 | 0.78 | 0.7841 |
| 0.3542 | 4.0 | 344 | 0.8022 | 0.7 | 0.6905 |
| 0.2252 | 5.0 | 430 | 0.8766 | 0.71 | 0.7105 |
| 0.1639 | 6.0 | 516 | 0.9983 | 0.72 | 0.7189 |
| 0.1194 | 7.0 | 602 | 1.0347 | 0.73 | 0.7306 |
| 0.0817 | 8.0 | 688 | 1.0746 | 0.73 | 0.7281 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
s3nh/Voicelab-trurl-2-7b-8bit-GGUF
|
s3nh
| 2023-09-07T07:54:49Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"pl",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-07T07:54:49Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- pl
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF-format model files for [this project](https://huggingface.co/Voicelab/trurl-2-7b-8bit).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to make it more extensible and easier to use. The following features are desired:

- **Single-file deployment:** models can be easily distributed and loaded, and do not require any external files for additional information.
- **Extensible:** new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
- **mmap compatibility:** models can be loaded using mmap for fast loading and saving.
- **Easy to use:** models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- **Full information:** all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model be annotated with additional information that may be useful for inference or for identifying the model.
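A minimal sketch of inspecting that key-value metadata, assuming the `gguf` Python package and a hypothetical file name:

```python
from gguf import GGUFReader  # pip install gguf

# Hedged sketch: list the typed metadata keys of a downloaded file.
reader = GGUFReader("trurl-2-7b-8bit.Q4_K_M.gguf")
for key in reader.fields:
    print(key)
```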
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### Inference
TODO
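A minimal inference sketch in the meantime, assuming the `llama-cpp-python` bindings and a hypothetical quantization level:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hedged sketch; file name and generation settings are assumptions.
llm = Llama(model_path="trurl-2-7b-8bit.Q4_K_M.gguf", n_ctx=2048)
out = llm("Napisz krótkie powitanie.", max_tokens=64)
print(out["choices"][0]["text"])
```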
# Original model card
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_synonym-1
|
ThuyNT03
| 2023-09-07T07:52:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T06:06:48Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_synonym-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_synonym-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1974
- Accuracy: 0.68
- F1: 0.6775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9517 | 1.0 | 87 | 0.7514 | 0.66 | 0.6270 |
| 0.67 | 2.0 | 174 | 0.7394 | 0.69 | 0.6814 |
| 0.4899 | 3.0 | 261 | 0.7871 | 0.69 | 0.6812 |
| 0.3787 | 4.0 | 348 | 0.8098 | 0.7 | 0.6957 |
| 0.2649 | 5.0 | 435 | 0.9906 | 0.71 | 0.7045 |
| 0.2069 | 6.0 | 522 | 1.0679 | 0.69 | 0.6886 |
| 0.1483 | 7.0 | 609 | 1.1639 | 0.67 | 0.6669 |
| 0.119 | 8.0 | 696 | 1.1974 | 0.68 | 0.6775 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_BERT-1
|
ThuyNT03
| 2023-09-07T07:44:08Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T06:00:46Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_BERT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_BERT-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1594
- Accuracy: 0.72
- F1: 0.7195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9312 | 1.0 | 88 | 0.7278 | 0.68 | 0.6701 |
| 0.6476 | 2.0 | 176 | 0.7024 | 0.71 | 0.7039 |
| 0.4815 | 3.0 | 264 | 0.7657 | 0.7 | 0.6959 |
| 0.341 | 4.0 | 352 | 0.8302 | 0.7 | 0.6994 |
| 0.2368 | 5.0 | 440 | 0.8699 | 0.72 | 0.7229 |
| 0.1705 | 6.0 | 528 | 1.0489 | 0.71 | 0.7094 |
| 0.1169 | 7.0 | 616 | 1.1685 | 0.71 | 0.7094 |
| 0.1176 | 8.0 | 704 | 1.1594 | 0.72 | 0.7195 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
GNReplay/bert-finetuned-ner
|
GNReplay
| 2023-09-07T07:43:04Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-07T07:28:52Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9332341761692282
- name: Recall
type: recall
value: 0.9503534163581285
- name: F1
type: f1
value: 0.9417160010005836
- name: Accuracy
type: accuracy
value: 0.9864602342968152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0581
- Precision: 0.9332
- Recall: 0.9504
- F1: 0.9417
- Accuracy: 0.9865
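A minimal usage sketch with the 🤗 `pipeline` API (the input sentence is illustrative):

```python
from transformers import pipeline

# Hedged usage sketch; aggregation merges word-piece predictions.
ner = pipeline(
    "token-classification",
    model="GNReplay/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```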
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0794 | 1.0 | 1756 | 0.0834 | 0.9045 | 0.9322 | 0.9181 | 0.9787 |
| 0.0393 | 2.0 | 3512 | 0.0552 | 0.9257 | 0.9480 | 0.9367 | 0.9853 |
| 0.0259 | 3.0 | 5268 | 0.0581 | 0.9332 | 0.9504 | 0.9417 | 0.9865 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
turing-motors/heron-preliminary-git-Llama-2-70b-v0
|
turing-motors
| 2023-09-07T07:41:54Z | 36 | 1 |
transformers
|
[
"transformers",
"pytorch",
"git_llama",
"text-generation",
"heron",
"vision",
"image-captioning",
"image-to-text",
"ja",
"arxiv:2205.14100",
"arxiv:2307.09288",
"license:llama2",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2023-09-07T01:08:09Z |
---
language:
- ja
tags:
- heron
- vision
- image-captioning
pipeline_tag: image-to-text
license:
- llama2
inference: false
---
# Heron GIT Llama 2 70B Preliminary

## Model Details
Heron GIT Llama 2 70B Preliminary is a vision-language model that was pretrained with image-text pairs.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.
<b>*Note: This model is a preliminary version. Its accuracy and performance are under verification, and we do not provide any guarantees. We plan to update it with a further trained version in the future.*</b>
## Usage
Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository).
## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Language Model**: [Llama-2 70B chat hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
* **Language(s)**: English
* **License**: This model is licensed under [the LLAMA 2 Community License](https://github.com/facebookresearch/llama/blob/main/LICENSE).
### Training
This model was trained with the Adaptor using M3IT Coco Captions.
### Training Dataset
- [MMInstruction M3IT](https://huggingface.co/datasets/MMInstruction/M3IT)
## Use and Limitations
### Intended Use
This model is intended for use in chat-like applications and for research purposes.
### Limitations
The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite
```bibtex
@misc{GitElyzaFast,
url = {https://huggingface.co/turing-motors/heron-preliminary-git-Llama-2-70b-v0},
title = {Heron GIT Llama 2 70B Preliminary},
author = {Yuichi Inoue and Kotaro Tanahashi and Yu Yamaguchi}
}
```
## Citations
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_tfidf-1
|
ThuyNT03
| 2023-09-07T07:38:27Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T05:54:38Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_tfidf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_tfidf-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1845
- Accuracy: 0.71
- F1: 0.7075
## Model description
More information needed
## Intended uses & limitations
More information needed
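While the card awaits details, a minimal inference sketch (the example sentence is illustrative only):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ThuyNT03/PhoBERT-Final_Mixed-aug_insert_tfidf-1")
# PhoBERT models expect word-segmented Vietnamese input (e.g. via VnCoreNLP); this sentence is just illustrative
print(clf("Tôi rất thích sản_phẩm này"))
```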
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9115 | 1.0 | 88 | 0.7285 | 0.71 | 0.6983 |
| 0.5972 | 2.0 | 176 | 0.7379 | 0.73 | 0.7238 |
| 0.3991 | 3.0 | 264 | 0.7867 | 0.72 | 0.7169 |
| 0.2894 | 4.0 | 352 | 0.8736 | 0.73 | 0.7310 |
| 0.2112 | 5.0 | 440 | 0.9920 | 0.74 | 0.7403 |
| 0.1393 | 6.0 | 528 | 1.0496 | 0.75 | 0.7486 |
| 0.1191 | 7.0 | 616 | 1.1640 | 0.72 | 0.7177 |
| 0.098 | 8.0 | 704 | 1.1845 | 0.71 | 0.7075 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
s3nh/Tap-M-Luna-AI-Llama2-Uncensored-GGUF
|
s3nh
| 2023-09-07T07:33:20Z | 11 | 5 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-07T07:23:09Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets the model be annotated with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:------|:--------|:-----|:-------|:-------|:-------|:-----|:-----|:-------|:-------|:-----|:-----|:-------|:-------|:-----|:-----|:----|
| 7B  | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
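Until an example lands here, a minimal sketch using llama-cpp-python (the quantized filename is an assumption — use whichever GGUF file from this repo you downloaded):

```python
from llama_cpp import Llama

# filename is an assumption; point this at the GGUF file you actually downloaded
llm = Llama(model_path="Luna-AI-Llama2-Uncensored.Q4_K_M.gguf", n_ctx=2048)
out = llm("USER: Why is the sky blue?\nASSISTANT:", max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"])
```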
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_w2v-1
|
ThuyNT03
| 2023-09-07T07:32:12Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T05:47:13Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_w2v-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_w2v-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0909
- Accuracy: 0.76
- F1: 0.7596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9106 | 1.0 | 86 | 0.7115 | 0.73 | 0.7319 |
| 0.5874 | 2.0 | 172 | 0.6895 | 0.71 | 0.7119 |
| 0.4037 | 3.0 | 258 | 0.8004 | 0.69 | 0.6842 |
| 0.2653 | 4.0 | 344 | 0.7982 | 0.72 | 0.7264 |
| 0.1761 | 5.0 | 430 | 0.9948 | 0.76 | 0.7608 |
| 0.1044 | 6.0 | 516 | 1.0613 | 0.75 | 0.7518 |
| 0.0844 | 7.0 | 602 | 1.0984 | 0.75 | 0.7478 |
| 0.0604 | 8.0 | 688 | 1.0909 | 0.76 | 0.7596 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ivy8228/ddpm-celebahq-finetuned-butterflies-2epochs
|
ivy8228
| 2023-09-07T07:31:51Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-09-07T07:25:56Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('ivy8228/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
ThuyNT03/PhoBERT-Final_Mixed-aug_delete-1
|
ThuyNT03
| 2023-09-07T07:18:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T05:33:45Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_delete-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_delete-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1448
- Accuracy: 0.71
- F1: 0.7085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9286 | 1.0 | 88 | 0.7463 | 0.64 | 0.6201 |
| 0.6411 | 2.0 | 176 | 0.7227 | 0.7 | 0.6922 |
| 0.4576 | 3.0 | 264 | 0.7157 | 0.69 | 0.6887 |
| 0.3081 | 4.0 | 352 | 0.9218 | 0.67 | 0.6559 |
| 0.2039 | 5.0 | 440 | 0.9434 | 0.69 | 0.6866 |
| 0.1494 | 6.0 | 528 | 1.0428 | 0.7 | 0.6967 |
| 0.1042 | 7.0 | 616 | 1.1137 | 0.71 | 0.7085 |
| 0.0829 | 8.0 | 704 | 1.1448 | 0.71 | 0.7085 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_swap-1
|
ThuyNT03
| 2023-09-07T07:09:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T05:24:15Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_swap-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_swap-1
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2440
- Accuracy: 0.69
- F1: 0.6896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8972 | 1.0 | 87 | 0.7698 | 0.62 | 0.5744 |
| 0.5881 | 2.0 | 174 | 0.7581 | 0.64 | 0.6314 |
| 0.3953 | 3.0 | 261 | 0.8167 | 0.68 | 0.6791 |
| 0.2472 | 4.0 | 348 | 0.8476 | 0.74 | 0.7435 |
| 0.1639 | 5.0 | 435 | 1.0144 | 0.71 | 0.7139 |
| 0.0969 | 6.0 | 522 | 1.1456 | 0.7 | 0.7004 |
| 0.079 | 7.0 | 609 | 1.1831 | 0.7 | 0.7009 |
| 0.0576 | 8.0 | 696 | 1.2440 | 0.69 | 0.6896 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ryanyip7777/pmc_vit-l-14_hf
|
ryanyip7777
| 2023-09-07T07:05:56Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"base_model:openai/clip-vit-large-patch14",
"base_model:finetune:openai/clip-vit-large-patch14",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2023-09-07T05:58:49Z |
---
base_model: openai/clip-vit-large-patch14
tags:
- generated_from_trainer
model-index:
- name: clip-vit-l-14-pmc-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-l-14-pmc-finetuned
This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on the **pmc_oa** dataset (https://huggingface.co/datasets/axiong/pmc_oa).
It achieves the following results on the evaluation set:
- Loss: 1.0125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
### Fine-tune this model using the *run_clip.py* script (https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text)
```shell
python -W ignore run_clip.py --model_name_or_path openai/clip-vit-large-patch14 \
--output_dir ./clip-vit-l-14-pmc-finetuned \
--train_file data/pmc_roco_train.csv \
--validation_file data/pmc_roco_valid.csv \
--image_column image --caption_column caption \
--max_seq_length 77 \
--do_train --do_eval \
--per_device_train_batch_size 16 --per_device_eval_batch_size 8 \
--remove_unused_columns=False \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir \
--num_train_epochs 10 \
--logging_dir ./pmc_vit_logs \
--save_total_limit 2 \
--report_to tensorboard
```
### usage
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("ryanyip7777/pmc_vit-l-14_hf")
processor = CLIPProcessor.from_pretrained("ryanyip7777/pmc_vit-l-14_hf")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
|
ivy8228/sd-class-butterflies-32
|
ivy8228
| 2023-09-07T07:03:42Z | 45 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-09-07T07:01:40Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# This model is an unconditional image-generation diffusion model for generating butterfly images

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('ivy8228/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
bigmorning/whisper_4_with_init_sun_char_0040
|
bigmorning
| 2023-09-07T06:56:02Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T06:55:53Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0040
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7846
- Train Accuracy: 0.0454
- Train Wermet: 0.0855
- Validation Loss: 1.8107
- Validation Accuracy: 0.0305
- Validation Wermet: 0.1385
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
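The optimizer dict above corresponds to transformers' TF `AdamWeightDecay`; a sketch of how it would be constructed (training loop and data pipeline omitted):

```python
from transformers import AdamWeightDecay

# mirrors the hyperparameter dict reported above
optimizer = AdamWeightDecay(
    learning_rate=1e-5,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```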
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
| 2.0044 | 0.0413 | 0.0653 | 1.8375 | 0.0299 | 0.1015 | 30 |
| 1.9831 | 0.0416 | 0.0649 | 1.7954 | 0.0302 | 0.1194 | 31 |
| 1.9535 | 0.0421 | 0.0689 | 1.7937 | 0.0302 | 0.1168 | 32 |
| 1.9290 | 0.0425 | 0.0706 | 1.8385 | 0.0299 | 0.1074 | 33 |
| 1.8933 | 0.0432 | 0.0682 | 1.8761 | 0.0295 | 0.1173 | 34 |
| 1.8724 | 0.0435 | 0.0752 | 1.7929 | 0.0304 | 0.1220 | 35 |
| 1.8407 | 0.0442 | 0.0760 | 1.7865 | 0.0306 | 0.1266 | 36 |
| 1.8179 | 0.0446 | 0.0832 | 1.8108 | 0.0304 | 0.1226 | 37 |
| 1.7977 | 0.0451 | 0.0888 | 1.8024 | 0.0306 | 0.1161 | 38 |
| 1.7846 | 0.0454 | 0.0855 | 1.8107 | 0.0305 | 0.1385 | 39 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
syoius/hfdrl_unit2
|
syoius
| 2023-09-07T06:55:43Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T06:55:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: hfdrl_unit2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="syoius/hfdrl_unit2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
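After loading, a short greedy rollout can sanity-check the policy — a sketch assuming a gymnasium-style step API and that the pickled dict exposes a `"qtable"` key, as in the course notebooks:

```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```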
|
syoius/q-FrozenLake-v1-4x4-noSlippery
|
syoius
| 2023-09-07T06:48:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T06:48:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="syoius/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aegon-h/mpt-7b
|
aegon-h
| 2023-09-07T06:44:26Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-07T05:16:11Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
model_creator: mosaicml
model_link: https://huggingface.co/mosaicml/mpt-7b
model_name: mpt-7b
edited_by: agonh
inference: false
---
# MPT-7B
Model creator: [MosaicML](https://www.mosaicml.com).
Original model: [mpt-7b](https://huggingface.co/mosaicml/mpt-7b).
## Description
This repo contains model files for [mosaicml's mpt-7b](https://huggingface.co/mosaicml/mpt-7b).
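Following the loading pattern on the original mosaicml/mpt-7b card (MPT ships custom modeling code, hence `trust_remote_code`, and uses the EleutherAI/gpt-neox-20b tokenizer). That this mirror keeps the same interface is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("aegon-h/mpt-7b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```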
|
SateeshAmbesange/my_awesome_model
|
SateeshAmbesange
| 2023-09-07T06:43:44Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T03:59:42Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: SateeshAmbesange/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SateeshAmbesange/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0670
- Validation Loss: 0.2178
- Train Accuracy: 0.9323
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
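The PolynomialDecay schedule in the optimizer config above is what transformers' `create_optimizer` produces for a linear decay to zero; a sketch reproducing it:

```python
from transformers import create_optimizer

# 7810 decay steps and 2e-5 peak LR, matching the reported schedule config
optimizer, schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=0,
    num_train_steps=7810,
)
```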
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2514 | 0.1844 | 0.9288 | 0 |
| 0.1344 | 0.2147 | 0.9206 | 1 |
| 0.0670 | 0.2178 | 0.9323 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_backtranslation-1
|
ThuyNT03
| 2023-09-07T06:43:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T21:37:50Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_backtranslation-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3364
- Accuracy: 0.7
- F1: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
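The bullet list above maps onto `TrainingArguments` roughly as follows (a sketch; model, tokenizer, and dataset wiring are omitted, and "out" is a placeholder output directory):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=41,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```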
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9909 | 1.0 | 87 | 0.8850 | 0.6 | 0.5586 |
| 0.7303 | 2.0 | 174 | 0.6941 | 0.69 | 0.6767 |
| 0.5713 | 3.0 | 261 | 0.7149 | 0.73 | 0.7215 |
| 0.4254 | 4.0 | 348 | 0.6955 | 0.75 | 0.7492 |
| 0.331 | 5.0 | 435 | 0.9854 | 0.69 | 0.6737 |
| 0.2373 | 6.0 | 522 | 1.0423 | 0.7 | 0.6909 |
| 0.1995 | 7.0 | 609 | 1.2707 | 0.69 | 0.6806 |
| 0.1713 | 8.0 | 696 | 1.3364 | 0.7 | 0.6913 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_BERT-1
|
ThuyNT03
| 2023-09-07T06:35:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T21:30:13Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_BERT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_BERT-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7460
- Accuracy: 0.75
- F1: 0.7473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0554 | 1.0 | 88 | 0.9377 | 0.5 | 0.4177 |
| 0.8929 | 2.0 | 176 | 0.8133 | 0.64 | 0.5654 |
| 0.7778 | 3.0 | 264 | 0.6756 | 0.73 | 0.7154 |
| 0.6686 | 4.0 | 352 | 0.6923 | 0.75 | 0.7378 |
| 0.5672 | 5.0 | 440 | 0.6880 | 0.77 | 0.7706 |
| 0.5009 | 6.0 | 528 | 0.7243 | 0.77 | 0.7668 |
| 0.3978 | 7.0 | 616 | 0.7148 | 0.76 | 0.7584 |
| 0.3843 | 8.0 | 704 | 0.7460 | 0.75 | 0.7473 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Xisumavoid_RVC
|
0x3e9
| 2023-09-07T06:34:50Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:56Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Xisumavoid

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Xisumavoid | 200 | RVC V2 | [Download](https://huggingface.co/0x3e9/Xisumavoid_RVC/resolve/main/xisumavoid.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1130321704141471935) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_replace_BERT-1
|
ThuyNT03
| 2023-09-07T06:33:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T21:25:53Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_replace_BERT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_replace_BERT-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6983
- Accuracy: 0.73
- F1: 0.7337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0732 | 1.0 | 87 | 1.0154 | 0.54 | 0.4487 |
| 0.9821 | 2.0 | 174 | 0.8279 | 0.63 | 0.6060 |
| 0.8118 | 3.0 | 261 | 0.7501 | 0.66 | 0.6519 |
| 0.7278 | 4.0 | 348 | 0.6890 | 0.73 | 0.7285 |
| 0.6158 | 5.0 | 435 | 0.7055 | 0.66 | 0.6604 |
| 0.5639 | 6.0 | 522 | 0.6927 | 0.69 | 0.6909 |
| 0.4855 | 7.0 | 609 | 0.6941 | 0.72 | 0.7251 |
| 0.4694 | 8.0 | 696 | 0.6983 | 0.73 | 0.7337 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_tfidf-1
|
ThuyNT03
| 2023-09-07T06:28:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T21:22:29Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_tfidf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_tfidf-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7044
- Accuracy: 0.76
- F1: 0.7519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.083 | 1.0 | 88 | 0.9789 | 0.6 | 0.4933 |
| 0.9576 | 2.0 | 176 | 0.7989 | 0.66 | 0.6019 |
| 0.8381 | 3.0 | 264 | 0.8103 | 0.67 | 0.6320 |
| 0.744 | 4.0 | 352 | 0.6355 | 0.74 | 0.7250 |
| 0.6186 | 5.0 | 440 | 0.6820 | 0.77 | 0.7660 |
| 0.5534 | 6.0 | 528 | 0.6782 | 0.76 | 0.7519 |
| 0.4677 | 7.0 | 616 | 0.6447 | 0.79 | 0.7810 |
| 0.4132 | 8.0 | 704 | 0.7044 | 0.76 | 0.7519 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_char_0030
|
bigmorning
| 2023-09-07T06:25:49Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T06:25:41Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0030
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0030
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0342
- Train Accuracy: 0.0409
- Train Wermet: 0.0639
- Validation Loss: 1.8589
- Validation Accuracy: 0.0293
- Validation Wermet: 0.1092
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
| 2.1457 | 0.0395 | 0.0626 | 1.8907 | 0.0291 | 0.1078 | 25 |
| 2.1159 | 0.0398 | 0.0633 | 1.8930 | 0.0290 | 0.1098 | 26 |
| 2.0892 | 0.0401 | 0.0638 | 1.8696 | 0.0292 | 0.1078 | 27 |
| 2.0609 | 0.0405 | 0.0659 | 1.8555 | 0.0296 | 0.1051 | 28 |
| 2.0342 | 0.0409 | 0.0639 | 1.8589 | 0.0293 | 0.1092 | 29 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
0x3e9/Trump_RVC
|
0x3e9
| 2023-09-07T06:25:46Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:55Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Trump

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Trump | 600 | RVC V2 | [Download](https://huggingface.co/0x3e9/Trump_RVC/resolve/main/trump.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1124550350276407347) |
|
0x3e9/Bad_mic__Stable_Ronaldo_RVC
|
0x3e9
| 2023-09-07T06:14:59Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:53Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Bad mic Stable Ronaldo

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Bad mic Stable Ronaldo | 50 | RVC V2 | [Download](https://huggingface.co/0x3e9/Bad_mic__Stable_Ronaldo_RVC/resolve/main/stableronaldo.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1130310529500586034) |
|
0x3e9/Software_Automatic_Mouth_RVC
|
0x3e9
| 2023-09-07T06:13:16Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:53Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Software Automatic Mouth

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Software Automatic Mouth | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Software_Automatic_Mouth_RVC/resolve/main/SAM.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1122991657168687104) |
|
bigmorning/whisper_4_with_init_sun_char_0025
|
bigmorning
| 2023-09-07T06:10:46Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T06:10:38Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1717
- Train Accuracy: 0.0392
- Train Wermet: 0.0635
- Validation Loss: 1.9791
- Validation Accuracy: 0.0282
- Validation Wermet: 0.0928
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
| 2.5294 | 0.0363 | 0.0721 | 2.1855 | 0.0269 | 0.1289 | 15 |
| 2.4760 | 0.0367 | 0.0711 | 2.1682 | 0.0271 | 0.1214 | 16 |
| 2.4339 | 0.0370 | 0.0698 | 2.1018 | 0.0273 | 0.1264 | 17 |
| 2.3867 | 0.0373 | 0.0684 | 2.0647 | 0.0275 | 0.1403 | 18 |
| 2.3528 | 0.0376 | 0.0669 | 2.0705 | 0.0275 | 0.1089 | 19 |
| 2.3145 | 0.0379 | 0.0658 | 2.0179 | 0.0280 | 0.1209 | 20 |
| 2.2765 | 0.0382 | 0.0654 | 2.0182 | 0.0279 | 0.1023 | 21 |
| 2.2415 | 0.0385 | 0.0650 | 1.9558 | 0.0284 | 0.1523 | 22 |
| 2.2102 | 0.0388 | 0.0643 | 1.9395 | 0.0285 | 0.1123 | 23 |
| 2.1717 | 0.0392 | 0.0635 | 1.9791 | 0.0282 | 0.0928 | 24 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
kristonai/falco
|
kristonai
| 2023-09-07T06:06:41Z | 0 | 0 | null |
[
"license:bsd",
"region:us"
] | null | 2023-08-21T07:08:02Z |
---
license: bsd
---
# Model Card for FALCO-TTS
<!-- Provide a quick summary of what the model is/does. -->
This model implements a three-stage, SPEAR-TTS-like model, supporting zero-shot and cross-language speech synthesis.

We trained this model on the MLS (https://openslr.org/94/) and WenetSpeech (https://openslr.org/121/) corpora, using about 20,000 hours of data covering the English and Mandarin parts.

This model has automatic code-switching capability.
## Model Details
| Model | Parameters | Attention | Output Vocab size |
|:---|:----|:---|:---|
| text_to_semantic | 240 M | Causal | 1024 |
| semantic_to_acoustic | 370 M | Causal | 8x 1,024 |
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_BERT-1
|
ThuyNT03
| 2023-09-07T06:03:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:56:52Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_BERT-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_BERT-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Accuracy: 0.75
- F1: 0.7417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0342 | 1.0 | 88 | 0.9411 | 0.55 | 0.4859 |
| 0.8344 | 2.0 | 176 | 0.7628 | 0.67 | 0.6334 |
| 0.6763 | 3.0 | 264 | 0.7131 | 0.73 | 0.7224 |
| 0.5531 | 4.0 | 352 | 0.7576 | 0.74 | 0.7328 |
| 0.4726 | 5.0 | 440 | 0.7842 | 0.72 | 0.7176 |
| 0.389 | 6.0 | 528 | 0.8293 | 0.74 | 0.7342 |
| 0.3051 | 7.0 | 616 | 0.8358 | 0.74 | 0.7311 |
| 0.2798 | 8.0 | 704 | 0.9415 | 0.75 | 0.7417 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
whywynn/ppo-LunarLander-v2-Unit8
|
whywynn
| 2023-09-07T06:01:26Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T06:01:21Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -194.99 +/- 123.30
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
randomnumb/ppo-Huggy
|
randomnumb
| 2023-09-07T05:54:39Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-07T05:54:33Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: randomnumb/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chuntali/distilbert-base-uncased-finetuned-cola
|
chuntali
| 2023-09-07T05:50:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T05:23:57Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5238347808517775
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8609
- Matthews Correlation: 0.5238
## Model description
More information needed
## Intended uses & limitations
More information needed
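As a pointer for reproducing the headline metric, Matthews correlation can be computed with the `evaluate` library (the prediction/reference values below are dummies):

```python
import evaluate

mcc = evaluate.load("matthews_correlation")
# dummy values for illustration
print(mcc.compute(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1]))
```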
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.526 | 1.0 | 535 | 0.4680 | 0.4756 |
| 0.3486 | 2.0 | 1070 | 0.5359 | 0.4605 |
| 0.2267 | 3.0 | 1605 | 0.6567 | 0.5059 |
| 0.1735 | 4.0 | 2140 | 0.7533 | 0.5179 |
| 0.1282 | 5.0 | 2675 | 0.8609 | 0.5238 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
0x3e9/MumboJumbo_RVC
|
0x3e9
| 2023-09-07T05:50:13Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:49Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# MumboJumbo

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| MumboJumbo | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/MumboJumbo_RVC/resolve/main/mumbojumbo.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1128212529139695636) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_tfidf-1
|
ThuyNT03
| 2023-09-07T05:49:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:39:22Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_tfidf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_tfidf-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1592
- Accuracy: 0.72
- F1: 0.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9839 | 1.0 | 87 | 0.7510 | 0.63 | 0.5632 |
| 0.6788 | 2.0 | 174 | 0.7245 | 0.71 | 0.7109 |
| 0.5471 | 3.0 | 261 | 0.7273 | 0.66 | 0.6683 |
| 0.3945 | 4.0 | 348 | 0.7304 | 0.72 | 0.7261 |
| 0.3062 | 5.0 | 435 | 0.9655 | 0.73 | 0.7360 |
| 0.2197 | 6.0 | 522 | 0.9765 | 0.73 | 0.7357 |
| 0.1692 | 7.0 | 609 | 1.1266 | 0.73 | 0.7357 |
| 0.1331 | 8.0 | 696 | 1.1592 | 0.72 | 0.7269 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/yuki_haru_theidolmastercinderellagirlsu149
|
CyberHarem
| 2023-09-07T05:49:08Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yuki_haru_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T05:28:42Z |
---
license: mit
datasets:
- CyberHarem/yuki_haru_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yuki_haru_theidolmastercinderellagirlsu149
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 3360, you need to download `3360/yuki_haru_theidolmastercinderellagirlsu149.pt` as the embedding and `3360/yuki_haru_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
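As a rough illustration only, both files could be attached to a `diffusers` pipeline as below; the base checkpoint choice mirrors the preview model mentioned above, and whether HCP-Diffusion's pt/LoRA formats load directly into `diffusers` is an assumption:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; the card uses Meina/MeinaMix_V11 for previews.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The pt file acts as a textual-inversion embedding (trigger word 1).
pipe.load_textual_inversion(
    "3360/yuki_haru_theidolmastercinderellagirlsu149.pt",
    token="yuki_haru_theidolmastercinderellagirlsu149",
)
# The safetensors file is loaded as a LoRA.
pipe.load_lora_weights(
    "3360", weight_name="yuki_haru_theidolmastercinderellagirlsu149.safetensors"
)

image = pipe(
    "yuki_haru_theidolmastercinderellagirlsu149, orange_hair, purple_eyes, long_hair"
).images[0]
image.save("preview.png")
```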
**The best step we recommend is 3360**, with a score of 0.940. The trigger words are:
1. `yuki_haru_theidolmastercinderellagirlsu149`
2. `orange_hair, purple_eyes, long_hair, bangs, upper_body, blonde_hair, hair_between_eyes`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.910 | [Download](7200/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.888 | [Download](6720/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.878 | [Download](6240/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.843 | [Download](5760/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.906 | [Download](5280/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.891 | [Download](4800/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.912 | [Download](4320/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.908 | [Download](3840/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| **3360** | **0.940** | [**Download**](3360/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.858 | [Download](2880/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.848 | [Download](2400/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.770 | [Download](1920/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.842 | [Download](1440/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.858 | [Download](960/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.802 | [Download](480/yuki_haru_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
0x3e9/Mana_Renewal_RVC
|
0x3e9
| 2023-09-07T05:45:43Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:49Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Mana Renewal

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Mana Renewal | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Mana_Renewal_RVC/resolve/main/ManaRenewal.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1131869655867334717) |
|
gyesibiney/Distilbert-sentimental-movie-review-classifier-2
|
gyesibiney
| 2023-09-07T05:45:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-06T21:40:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7046
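A minimal inference sketch with the raw model follows; since the card does not document the id-to-label mapping, only the predicted index is printed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "gyesibiney/Distilbert-sentimental-movie-review-classifier-2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A gripping, beautifully shot film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs.argmax(dim=-1).item(), probs.tolist())  # class index is unlabeled here
```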
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6555 | 0.2 | 500 | 0.5368 |
| 0.5163 | 0.4 | 1000 | 0.6619 |
| 0.4749 | 0.6 | 1500 | 0.4899 |
| 0.4463 | 0.8 | 2000 | 0.4240 |
| 0.4358 | 1.0 | 2500 | 0.4450 |
| 0.3586 | 1.2 | 3000 | 0.4560 |
| 0.3248 | 1.41 | 3500 | 0.5100 |
| 0.336 | 1.61 | 4000 | 0.5952 |
| 0.3443 | 1.81 | 4500 | 0.5189 |
| 0.3075 | 2.01 | 5000 | 0.5482 |
| 0.2318 | 2.21 | 5500 | 0.7007 |
| 0.2128 | 2.41 | 6000 | 0.7401 |
| 0.2168 | 2.61 | 6500 | 0.7252 |
| 0.2349 | 2.81 | 7000 | 0.7046 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Luke_Lafreniere_RVC
|
0x3e9
| 2023-09-07T05:44:25Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:48Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Luke Lafreniere

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Luke Lafreniere | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Luke_Lafreniere_RVC/resolve/main/lukeltt.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1125873286702706850) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_w2v-1
|
ThuyNT03
| 2023-09-07T05:41:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:29:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_w2v-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_w2v-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2040
- Accuracy: 0.72
- F1: 0.7257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9886 | 1.0 | 85 | 0.7499 | 0.65 | 0.5970 |
| 0.6861 | 2.0 | 170 | 0.7312 | 0.7 | 0.7029 |
| 0.5673 | 3.0 | 255 | 0.6732 | 0.73 | 0.7328 |
| 0.4086 | 4.0 | 340 | 0.8771 | 0.73 | 0.7308 |
| 0.2958 | 5.0 | 425 | 0.9051 | 0.74 | 0.7453 |
| 0.2039 | 6.0 | 510 | 1.0350 | 0.73 | 0.7314 |
| 0.1743 | 7.0 | 595 | 1.1745 | 0.7 | 0.7097 |
| 0.1458 | 8.0 | 680 | 1.2040 | 0.72 | 0.7257 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_char_0015
|
bigmorning
| 2023-09-07T05:40:43Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T05:40:35Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5838
- Train Accuracy: 0.0360
- Train Wermet: 0.0724
- Validation Loss: 2.2386
- Validation Accuracy: 0.0266
- Validation Wermet: 0.1419
- Epoch: 14
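Since this is a TensorFlow (Keras) checkpoint, a hedged inference sketch is shown below; loading the processor from the base model is an assumption, in case this repo stores only fine-tuned weights:

```python
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# Assumption: processor files come from the base model, not this checkpoint.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained(
    "bigmorning/whisper_4_with_init_sun_char_0015"
)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="tf")
ids = model.generate(inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True))
```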
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
| 2.8460 | 0.0343 | 0.0728 | 2.5378 | 0.0248 | 0.1435 | 10 |
| 2.7781 | 0.0347 | 0.0741 | 2.4355 | 0.0254 | 0.1372 | 11 |
| 2.7083 | 0.0352 | 0.0747 | 2.5163 | 0.0248 | 0.0987 | 12 |
| 2.6445 | 0.0356 | 0.0720 | 2.2997 | 0.0261 | 0.1484 | 13 |
| 2.5838 | 0.0360 | 0.0724 | 2.2386 | 0.0266 | 0.1419 | 14 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
0x3e9/Jerma_Teacher_Noise_RVC
|
0x3e9
| 2023-09-07T05:39:20Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:47Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Jerma Teacher Noise

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Jerma Teacher Noise | 2000 | RVC V2 | [Download](https://huggingface.co/0x3e9/Jerma_Teacher_Noise_RVC/resolve/main/teachernoise.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1143302989172461598) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_synonym-1
|
ThuyNT03
| 2023-09-07T05:32:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:19:22Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_synonym-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_synonym-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0320
- Accuracy: 0.72
- F1: 0.7220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0097 | 1.0 | 87 | 0.9465 | 0.59 | 0.5204 |
| 0.824 | 2.0 | 174 | 0.7438 | 0.68 | 0.6540 |
| 0.6486 | 3.0 | 261 | 0.7329 | 0.66 | 0.6590 |
| 0.4726 | 4.0 | 348 | 0.7294 | 0.7 | 0.7029 |
| 0.358 | 5.0 | 435 | 0.8954 | 0.69 | 0.6983 |
| 0.2555 | 6.0 | 522 | 0.8258 | 0.73 | 0.7315 |
| 0.2173 | 7.0 | 609 | 1.0117 | 0.73 | 0.7328 |
| 0.173 | 8.0 | 696 | 1.0320 | 0.72 | 0.7220 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Grian_RVC
|
0x3e9
| 2023-09-07T05:29:01Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:45Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Grian

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Grian | 150 | RVC V2 | [Download](https://huggingface.co/0x3e9/Grian_RVC/resolve/main/grian.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1127884963774201856) |
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_delete-1
|
ThuyNT03
| 2023-09-07T05:27:12Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:20:11Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_delete-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_delete-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8662
- Accuracy: 0.73
- F1: 0.7284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0296 | 1.0 | 88 | 0.9369 | 0.61 | 0.4985 |
| 0.8507 | 2.0 | 176 | 0.7051 | 0.72 | 0.6898 |
| 0.6817 | 3.0 | 264 | 0.6856 | 0.75 | 0.7399 |
| 0.5683 | 4.0 | 352 | 0.7131 | 0.71 | 0.6991 |
| 0.4328 | 5.0 | 440 | 0.7520 | 0.71 | 0.7119 |
| 0.3489 | 6.0 | 528 | 0.7355 | 0.72 | 0.7214 |
| 0.2746 | 7.0 | 616 | 0.8066 | 0.73 | 0.7296 |
| 0.233 | 8.0 | 704 | 0.8662 | 0.73 | 0.7284 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun_char_0010
|
bigmorning
| 2023-09-07T05:25:47Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-07T05:25:39Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun_char_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun_char_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8976
- Train Accuracy: 0.0339
- Train Wermet: 0.0729
- Validation Loss: 2.5997
- Validation Accuracy: 0.0245
- Validation Wermet: 0.0944
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.2071 | 0.0313 | 0.1237 | 2.8546 | 0.0225 | 0.1109 | 0 |
| 3.0365 | 0.0325 | 0.0375 | 2.8115 | 0.0228 | 0.1215 | 1 |
| 3.0162 | 0.0326 | 0.0484 | 2.7884 | 0.0231 | 0.1318 | 2 |
| 3.0042 | 0.0327 | 0.0555 | 2.7853 | 0.0233 | 0.1393 | 3 |
| 2.9934 | 0.0328 | 0.0614 | 2.7657 | 0.0232 | 0.1273 | 4 |
| 2.9858 | 0.0329 | 0.0654 | 2.7542 | 0.0234 | 0.1073 | 5 |
| 2.9735 | 0.0330 | 0.0673 | 2.7367 | 0.0234 | 0.1414 | 6 |
| 2.9574 | 0.0332 | 0.0704 | 2.6961 | 0.0240 | 0.1429 | 7 |
| 2.9320 | 0.0335 | 0.0723 | 2.6652 | 0.0239 | 0.0990 | 8 |
| 2.8976 | 0.0339 | 0.0729 | 2.5997 | 0.0245 | 0.0944 | 9 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
0x3e9/GoodTimesWithScar_RVC
|
0x3e9
| 2023-09-07T05:25:35Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:45Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# GoodTimesWithScar

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| GoodTimesWithScar | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/GoodTimesWithScar_RVC/resolve/main/goodtimeswithscar.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1128406096957145178) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-train-1
|
ThuyNT03
| 2023-09-07T05:22:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:15:03Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-train-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-train-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7562
- Accuracy: 0.69
- F1: 0.6976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0985 | 1.0 | 44 | 1.0870 | 0.41 | 0.2928 |
| 0.9394 | 2.0 | 88 | 0.8415 | 0.66 | 0.6161 |
| 0.7884 | 3.0 | 132 | 0.8431 | 0.65 | 0.5722 |
| 0.6681 | 4.0 | 176 | 0.7143 | 0.68 | 0.6702 |
| 0.5849 | 5.0 | 220 | 0.7463 | 0.72 | 0.7155 |
| 0.4916 | 6.0 | 264 | 0.7391 | 0.7 | 0.7032 |
| 0.4252 | 7.0 | 308 | 0.7351 | 0.72 | 0.7195 |
| 0.3756 | 8.0 | 352 | 0.7562 | 0.69 | 0.6976 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Gigguk_RVC
|
0x3e9
| 2023-09-07T05:22:03Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:44Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Gigguk

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Gigguk | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Gigguk_RVC/resolve/main/gigguk.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1123419008088162305) |
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_swap-1
|
ThuyNT03
| 2023-09-07T05:20:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:12:55Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_swap-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_swap-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2104
- Accuracy: 0.75
- F1: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0503 | 1.0 | 87 | 0.9473 | 0.62 | 0.5062 |
| 0.7772 | 2.0 | 174 | 0.6460 | 0.74 | 0.7214 |
| 0.5668 | 3.0 | 261 | 0.6739 | 0.76 | 0.7474 |
| 0.3978 | 4.0 | 348 | 0.7077 | 0.78 | 0.7737 |
| 0.2502 | 5.0 | 435 | 1.0460 | 0.75 | 0.7340 |
| 0.1757 | 6.0 | 522 | 1.0285 | 0.74 | 0.7355 |
| 0.1439 | 7.0 | 609 | 1.1870 | 0.75 | 0.7454 |
| 0.1178 | 8.0 | 696 | 1.2104 | 0.75 | 0.7434 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
0x3e9/Docm77_RVC
|
0x3e9
| 2023-09-07T05:19:02Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:44Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Docm77

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Docm77 | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Docm77_RVC/resolve/main/docm77.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1128972502790590474) |
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_delete-1
|
ThuyNT03
| 2023-09-07T05:18:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T20:06:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_delete-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_delete-1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9870
- Accuracy: 0.69
- F1: 0.6960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0155 | 1.0 | 87 | 0.8328 | 0.61 | 0.5255 |
| 0.7328 | 2.0 | 174 | 0.6919 | 0.67 | 0.6787 |
| 0.6021 | 3.0 | 261 | 0.7414 | 0.71 | 0.7170 |
| 0.4777 | 4.0 | 348 | 0.7597 | 0.7 | 0.7061 |
| 0.3666 | 5.0 | 435 | 0.8713 | 0.69 | 0.6997 |
| 0.2686 | 6.0 | 522 | 0.9487 | 0.7 | 0.7053 |
| 0.2407 | 7.0 | 609 | 0.9443 | 0.69 | 0.6985 |
| 0.2006 | 8.0 | 696 | 0.9870 | 0.69 | 0.6960 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
danorel/poca-SoccerTwos
|
danorel
| 2023-09-07T05:16:06Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-07T05:15:47Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: danorel/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
0x3e9/Darth_Vader_RVC
|
0x3e9
| 2023-09-07T05:15:25Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:43Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Darth Vader

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Darth Vader | 150 | RVC V2 | [Download](https://huggingface.co/0x3e9/Darth_Vader_RVC/resolve/main/vader.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1113193444421161100) |
|
tiggerhelloworld/doom_health_gathering_supreme
|
tiggerhelloworld
| 2023-09-07T05:13:38Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T05:13:31Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 6.54 +/- 2.61
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r tiggerhelloworld/doom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=doom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=doom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
0x3e9/Biden_RVC
|
0x3e9
| 2023-09-07T05:10:30Z | 0 | 1 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:42Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Biden

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Biden | 500 | RVC V2 | [Download](https://huggingface.co/0x3e9/Biden_RVC/resolve/main/biden.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1111937280257564782) |
|
tiggerhelloworld/rl_course_vizdoom_health_gathering_supreme
|
tiggerhelloworld
| 2023-09-07T05:07:13Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T05:07:05Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.86 +/- 4.82
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r tiggerhelloworld/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
0x3e9/BdoubleO100_RVC
|
0x3e9
| 2023-09-07T05:06:33Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T09:00:42Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# BdoubleO100

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| BdoubleO100 | 200 | RVC V2 | [Download](https://huggingface.co/0x3e9/BdoubleO100_RVC/resolve/main/BdoubleO100.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1129709422260801546) |
|
0x3e9/Akuma_Nihmune_RVC
|
0x3e9
| 2023-09-07T05:03:28Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-09-04T08:57:15Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
---
# Akuma Nihmune

## This repo was autogenerated from [0x3e9/0x3e9_RVC_models](https://huggingface.co/0x3e9/0x3e9_RVC_models)
It's just easier to find models when they are in their own separate model repo.
I recommend visiting the original repo for the full list of my RVC models, with samples for some of them.
## Model Info
| Model Name | Epoch | Version | Direct zip link | AI Hub Thread |
| ---------- | ----- | ------- | --------------- | ------------- |
| Akuma Nihmune | 300 | RVC V2 | [Download](https://huggingface.co/0x3e9/Akuma_Nihmune_RVC/resolve/main/numi.zip) | [AI Hub](https://discord.com/channels/1089076875999072296/1135118376461664327) |
|
tensor-diffusion/melaura-v1-1
|
tensor-diffusion
| 2023-09-07T04:50:54Z | 2 | 3 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"DiffusionPipeline",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-05T12:35:01Z |
---
license: openrail++
pipeline_tag: text-to-image
tags:
- stable-diffusion
- text-to-image
- diffusers
- DiffusionPipeline
inference:
parameter:
negative_prompt: >-
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit,
fewer digits, cropped, worst quality, low quality, normal quality, jpeg,
artifacts, signature, watermark, username, blurry, ugly, duplicate,
morbid, mutilated, extra fingers, mutated hands, poorly drawn hands,
poorly drawn face, mutation, deformed, blurry, bad anatomy, bad
proportions, cloned face, disfigured, out of frame, extra limbs, bad
anatomy, gross proportions, malformed limbs, missing arms, missing legs,
extra arms, extra legs, mutated hands, fused fingers, too many fingers,
long neck, text, letters, signature, web address, copyright name,
username, error, extra digit, fewer digits, loadscreen, grid, stock image,
a stock photo, promo poster, fat, text, logo, brand, watermark, water
mark, low quality,
widget:
- text: melaura, girl, hd, pink lips, detailed, age 16, Off-shoulder top
example_title: Off-shoulder top
- text: melaura, girl, hd, shiny cheeks
example_title: shiny cheeks
library_name: diffusers
---
|
t1m0/detr-resnet-50_finetuned_cppe5_t1m0
|
t1m0
| 2023-09-07T04:48:30Z | 183 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-09-07T04:45:55Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5_t1m0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5_t1m0
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
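In the absence of usage notes, a hedged detection sketch is given below; `ppe.jpg` is a placeholder image path, and the label set is assumed to follow the CPPE-5 dataset named in the repo:

```python
from transformers import pipeline

# "ppe.jpg" is a placeholder image path, not part of this repo.
detector = pipeline("object-detection", model="t1m0/detr-resnet-50_finetuned_cppe5_t1m0")
for det in detector("ppe.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```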
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CyberHarem/matoba_risa_theidolmastercinderellagirlsu149
|
CyberHarem
| 2023-09-07T04:46:24Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/matoba_risa_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T04:29:30Z |
---
license: mit
datasets:
- CyberHarem/matoba_risa_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of matoba_risa_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 3080, you need to download `3080/matoba_risa_theidolmastercinderellagirlsu149.pt` as the embedding and `3080/matoba_risa_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3080**, with a score of 0.895. The trigger words are:
1. `matoba_risa_theidolmastercinderellagirlsu149`
2. `black_hair, long_hair, twintails, yellow_eyes, ribbon, hair_ribbon, jewelry, necklace, hair_between_eyes`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6600 | 0.843 | [Download](6600/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6160 | 0.793 | [Download](6160/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) |  |  |
| 5720 | 0.850 | [Download](5720/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5280 | 0.834 | [Download](5280/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4840 | 0.784 | [Download](4840/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4840/previews/nude.png) | [<NSFW, click to see>](4840/previews/nude2.png) |  |  |
| 4400 | 0.873 | [Download](4400/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 3960 | 0.842 | [Download](3960/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3520 | 0.805 | [Download](3520/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3520/previews/nude.png) | [<NSFW, click to see>](3520/previews/nude2.png) |  |  |
| **3080** | **0.895** | [**Download**](3080/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3080/previews/nude.png) | [<NSFW, click to see>](3080/previews/nude2.png) |  |  |
| 2640 | 0.785 | [Download](2640/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 2200 | 0.774 | [Download](2200/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) |  |  |
| 1760 | 0.890 | [Download](1760/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1760/previews/nude.png) | [<NSFW, click to see>](1760/previews/nude2.png) |  |  |
| 1320 | 0.862 | [Download](1320/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 880 | 0.808 | [Download](880/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](880/previews/nude.png) | [<NSFW, click to see>](880/previews/nude2.png) |  |  |
| 440 | 0.759 | [Download](440/matoba_risa_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](440/previews/nude.png) | [<NSFW, click to see>](440/previews/nude2.png) |  |  |
|
newronai/clma2-13b-Chat-Adapter-Plus
|
newronai
| 2023-09-07T04:46:03Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T04:45:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
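A minimal loading sketch follows; the base model is an assumption inferred from the repo name ("clma2-13b-Chat" suggests Llama-2-13b-chat) and the 8-bit config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: base model inferred from the adapter's repo name; replace if wrong.
base_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "newronai/clma2-13b-Chat-Adapter-Plus")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```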
|
yesj1234/mbart_cycle0_ko-ja
|
yesj1234
| 2023-09-07T04:42:27Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"ko",
"ja",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-07T03:40:57Z |
---
language:
- ko
- ja
base_model: ./ja_reduced_model
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart_cycle0_ko-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_cycle0_ko-ja
This model is a fine-tuned version of [mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on a custom dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0107
- Bleu: 25.8676
- Gen Len: 20.5833
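A hedged translation sketch is shown below; the ko_KR/ja_XX language codes follow the mBART-cc25 convention and are an assumption, since the card does not show preprocessing:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

repo = "yesj1234/mbart_cycle0_ko-ja"
# Language codes are assumed from the mBART-cc25 convention.
tokenizer = MBartTokenizer.from_pretrained(repo, src_lang="ko_KR", tgt_lang="ja_XX")
model = MBartForConditionalGeneration.from_pretrained(repo)

batch = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
generated = model.generate(
    **batch, decoder_start_token_id=tokenizer.lang_code_to_id["ja_XX"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```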
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|
| No log | 3.57 | 50 | 12.5219 | 0.0216 | 443.0833 |
| No log | 7.14 | 100 | 9.2255 | 0.0315 | 1024.0 |
| No log | 10.71 | 150 | 6.4885 | 0.0151 | 779.0 |
| No log | 14.29 | 200 | 5.3925 | 0.928 | 101.5 |
| No log | 17.86 | 250 | 5.4016 | 13.1472 | 105.6667 |
| No log | 21.43 | 300 | 6.5062 | 11.5401 | 158.3333 |
| No log | 25.0 | 350 | 6.0911 | 20.6997 | 25.1667 |
| No log | 28.57 | 400 | 6.5541 | 18.9521 | 20.6667 |
| No log | 32.14 | 450 | 6.6978 | 21.2662 | 25.1667 |
| 6.3858 | 35.71 | 500 | 6.9643 | 10.1265 | 17.3333 |
| 6.3858 | 39.29 | 550 | 6.6467 | 25.8218 | 19.6667 |
| 6.3858 | 42.86 | 600 | 7.1260 | 13.6948 | 18.75 |
| 6.3858 | 46.43 | 650 | 7.0505 | 19.5121 | 21.0 |
| 6.3858 | 50.0 | 700 | 7.0107 | 25.8676 | 20.5833 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
shengqin/bloomz-xss-sqli-30000
|
shengqin
| 2023-09-07T04:30:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T04:26:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
chenzhwsysu57/my_awesome_opus_books_model
|
chenzhwsysu57
| 2023-09-07T04:18:08Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-27T08:29:59Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 5.2349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6669
- Bleu: 5.2349
- Gen Len: 17.6184
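A minimal usage sketch, assuming the standard T5 task prefix for English-to-French translation:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("chenzhwsysu57/my_awesome_opus_books_model")
model = AutoModelForSeq2SeqLM.from_pretrained("chenzhwsysu57/my_awesome_opus_books_model")

# T5 expects a task prefix; "translate English to French: " matches the opus_books en-fr setup.
inputs = tokenizer("translate English to French: The book is on the table.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```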
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.9307 | 1.0 | 1589 | 1.6894 | 5.0111 | 17.6243 |
| 1.8897 | 2.0 | 3178 | 1.6669 | 5.2349 | 17.6184 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ngoan/Llama-2-7b-vietnamese-20k
|
ngoan
| 2023-09-07T03:59:05Z | 143 | 10 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"llama-2-7B",
"llama2-vietnamese",
"vietnamese",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-24T06:54:42Z |
---
tags:
- text-generation
- llama-2
- llama-2-7B
- llama2-vietnamese
- vietnamese
---
# Model Card for Llama 2 Fine-Tuned on Vietnamese Instructions
## Model Details
- Model Name: Llama-2-7b-vietnamese-20k
- Architecture: Llama 2 7B
- Fine-tuning Data Size: 20,000 instruction samples
- Purpose: To demonstrate the performance of the Llama 2 model on Vietnamese and gather initial insights. A more comprehensive model and evaluation will be released soon.
- Availability: The model checkpoint can be accessed on Hugging Face: ngoantech/Llama-2-7b-vietnamese-20k
## Intended Use
This model is intended for researchers, developers, and enthusiasts who are interested in understanding the performance of the Llama 2 model on Vietnamese. It can be used for generating Vietnamese text based on given instructions or for any other task that requires a Vietnamese language model.
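A minimal generation sketch using the standard `transformers` API; the prompt shown is an illustrative Vietnamese instruction, not a documented template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ngoantech/Llama-2-7b-vietnamese-20k")
model = AutoModelForCausalLM.from_pretrained(
    "ngoantech/Llama-2-7b-vietnamese-20k",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Hãy giới thiệu về Hà Nội."  # "Please introduce Hanoi."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```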
## Example Output

## Limitations
- Data Size: The model was fine-tuned on a relatively small dataset of 20,000 instruction samples, which might not capture the full complexity and nuances of the Vietnamese language.
- Preliminary Model: This is an initial experiment with the Llama 2 architecture on Vietnamese. More refined versions and evaluations will be available soon.
- Performance: Specific performance metrics on this fine-tuned model will be provided in the upcoming comprehensive evaluation.
## Ethical Considerations
- Bias and Fairness: Like any other machine learning model, there is a possibility that this model might reproduce or amplify biases present in the training data.
- Use in Critical Systems: As this is a preliminary model, it is recommended not to use it for mission-critical applications without proper validation.
- Fine-tuning Data: The model was fine-tuned on a custom dataset of 20,000 instruction samples in Vietnamese. More details about the composition and source of this dataset will be provided in the detailed evaluation report.
## Credits
We would like to express our gratitude to the creators of the Llama 2 architecture and the Hugging Face community for their tools and resources.
## Contact
[email protected]
https://github.com/ngoanpv
|
XianTong/sovits4.1-genshin
|
XianTong
| 2023-09-07T03:53:22Z | 0 | 4 | null |
[
"region:us"
] | null | 2023-08-15T09:24:03Z |
so-vits-svc 4.1 Genshin Impact character voice models
Author: 在下先通 (XianTong)
# Disclaimer
Users must follow these rules:
1. The models are for personal entertainment and research use only; commercial use and illegal use are prohibited.
2. Please fill out a complete credits list (借物表) when using them.
3. Any negative consequences caused by improper use of a model are the user's sole responsibility and have nothing to do with the model author.
4. Voice copyrights belong to miHoYo and the characters' voice actors; if there is any infringement, please contact me for removal.
# Usage
Each model has two parts: the main so-vits model and a diffusion model.
The diffusion model is not required, but using it gives somewhat better results.
- Main model:
The main model is the `CharacterName_G_xx000.pth` file; place it, together with the other model files `CharacterName_kmeans_xxxxx.pt` or `CharacterName_feature_and_index`, in the `logs/44k/` folder.
Here kmeans and feature are the clustering model and the feature-retrieval model, respectively; like the diffusion model, they are optional.
`CharacterName.json` is the config file; it goes in the `config` folder.
- Diffusion model:
Place it, together with its `diffusion` folder, in the `logs/44k/` folder.
`CharacterName_diffusion.yaml` is its config file; like the `.json` file above, it goes in the `config` folder.
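Putting it together, the expected layout looks like this (`CharacterName` is a placeholder for each character's file prefix):

```
logs/44k/
├── CharacterName_G_xx000.pth          # main so-vits model
├── CharacterName_kmeans_xxxxx.pt      # optional: clustering model
├── CharacterName_feature_and_index    # optional: feature retrieval
└── diffusion/                         # optional: diffusion model files
config/
├── CharacterName.json
└── CharacterName_diffusion.yaml
```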
|
CyberHarem/akagi_miria_theidolmastercinderellagirlsu149
|
CyberHarem
| 2023-09-07T03:51:31Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/akagi_miria_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T03:34:46Z |
---
license: mit
datasets:
- CyberHarem/akagi_miria_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of akagi_miria_theidolmastercinderellagirlsu149
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5600, you need to download `5600/akagi_miria_theidolmastercinderellagirlsu149.pt` as the embedding and `5600/akagi_miria_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5600**, with a score of 0.960. The trigger words are:
1. `akagi_miria_theidolmastercinderellagirlsu149`
2. `short_hair, black_hair, brown_eyes, brown_hair, two_side_up, smile, open_mouth, upper_body, hair_ornament`
This model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6000 | 0.953 | [Download](6000/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| **5600** | **0.960** | [**Download**](5600/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 5200 | 0.947 | [Download](5200/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4800 | 0.893 | [Download](4800/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4400 | 0.925 | [Download](4400/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 4000 | 0.927 | [Download](4000/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3600 | 0.900 | [Download](3600/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3200 | 0.933 | [Download](3200/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2800 | 0.883 | [Download](2800/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2400 | 0.946 | [Download](2400/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 2000 | 0.837 | [Download](2000/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1600 | 0.827 | [Download](1600/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) |  |  |
| 1200 | 0.846 | [Download](1200/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 800 | 0.851 | [Download](800/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) |  |  |
| 400 | 0.621 | [Download](400/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) |  |  |
|
budecosystem/Tansen
|
budecosystem
| 2023-09-07T03:47:50Z | 0 | 6 | null |
[
"license:openrail++",
"region:us"
] | null | 2023-09-05T15:00:57Z |
---
license: openrail++
---
<p align="center">
<img src="https://raw.githubusercontent.com/BudEcosystem/Tansen/main/Instagram%20post%20-%204.png" alt="Tensen Logo" width="300" height="300"/>
</p>
---
<p align="center"><i>Democratizing access to LLMs, Multi-Modal Gen AI models for the open-source community.<br>Let's advance AI, together. </i></p>
---
Tansen is a text-to-speech program built with the following priorities:
1. Strong multi-voice capabilities.
2. Highly realistic prosody and intonation.
3. Speaking rate control.
<a href="https://github.com/BudEcosystem/Tansen"><img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white" /> </a>
<h2 align="left">🎧 Demos </h2>
[random_0_0.webm](https://github.com/BudEcosystem/Tansen/assets/4546714/9a6ce191-2646-497e-bf48-003f2bf0bb8d)
[random_0_1.webm](https://github.com/BudEcosystem/Tansen/assets/4546714/87bf5f7c-ae47-4aa4-a110-b5c9899e4446)
[random_0_2.webm](https://github.com/BudEcosystem/Tansen/assets/4546714/5549c464-c670-4e7a-987c-c5d79b32bf4b)
<h2 align="left">💻 Getting Started on GitHub </h2>
Ready to dive in? Here's how you can get started with our repo on GitHub.
<h3 align="left">1️⃣ : Clone our GitHub repository</h3>
First things first, you'll need to set up a conda environment and clone our repository. Open up your terminal, navigate to the directory where you want the repository to be cloned, and run the following commands:
```bash
conda create --name Tansen python=3.9 numba inflect
conda activate Tansen
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install transformers=4.29.2
git clone https://github.com/BudEcosystem/Tansen.git
cd Tansen
```
<h3 align="left">2️⃣ : Install dependencies</h3>
```bash
python setup.py install
```
<h3 align="left">3️⃣ : Generate Audio</h3>
### do_tts.py
This script allows you to speak a single phrase with one or more voices.
```shell
python do_tts.py --text "I'm going to speak this" --voice random --preset fast
```
### read.py
This script provides tools for reading large amounts of text.
```shell
python Tansen/read.py --textfile <your text to be read> --voice random
```
This will break up the textfile into sentences, and then convert them to speech one at a time. It will output a series
of spoken clips as they are generated. Once all the clips are generated, it will combine them into a single file and
output that as well.
Sometimes Tansen screws up an output. You can re-generate any bad clips by re-running `read.py` with the --regenerate
argument.
Interested in running it as an API?
### 🐍 Usage in Python
Tansen can be used programmatically:
```python
# Imports assumed from the tortoise-tts layout that Tansen builds on; adjust to your install.
from tortoise import api
from tortoise.utils.audio import load_audio

clips_paths = ["reference1.wav", "reference2.wav"]  # paths to your reference voice clips
reference_clips = [load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech(use_deepspeed=True, kv_cache=True, half=True)
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```
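To write the result to disk, a short sketch — the tensor shape and 24 kHz output rate are assumed from the tortoise-tts defaults Tansen builds on:

```python
import torchaudio

# pcm_audio comes back as a torch tensor; drop the batch dimension before saving.
torchaudio.save("generated.wav", pcm_audio.squeeze(0).cpu(), 24000)
```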
## Loss Curves
<p align="center">
<img src="https://raw.githubusercontent.com/BudEcosystem/Tansen/main/results/images/loss_mel_ce.png" alt="" width="500"/>
<span>loss_mel_ce</span>
<p>
<p align="center">
<img src="https://raw.githubusercontent.com/BudEcosystem/Tansen/main/results/images/loss_text_ce.png" alt="" width="500" />
<span>loss_text_ce</span>
<p>
## Training Information
- Device: a single A100
- Dataset: 876 hours
|